CN117115398A - Virtual-real fusion digital twin fluid phenomenon simulation method - Google Patents

Virtual-real fusion digital twin fluid phenomenon simulation method

Info

Publication number
CN117115398A
CN117115398A
Authority
CN
China
Prior art keywords
scene
fluid
real
dimensional
indoor scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311014639.0A
Other languages
Chinese (zh)
Inventor
Gao Yang (高阳)
Liu Xuemei (刘雪梅)
Hao Aimin (郝爱民)
Li Jin (李瑾)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
North China University of Water Resources and Electric Power
Original Assignee
Beihang University
North China University of Water Resources and Electric Power
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University and North China University of Water Resources and Electric Power
Priority to CN202311014639.0A
Publication of CN117115398A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G06T15/506 Illumination models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/35 Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/36 Indoor scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/24 Fluid dynamics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/56 Particle system, point based geometry or rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computer Hardware Design (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the disclosure provide a virtual-real fusion digital twin fluid phenomenon simulation method. One embodiment of the method comprises the following steps: collecting an indoor scene depth map and an indoor scene RGB map, and completing the indoor scene depth map according to the indoor scene RGB map; reconstructing a three-dimensional scene point cloud and determining the point cloud semantics of the three-dimensional scene point cloud; performing physics-aware inverse twin construction of the three-dimensional fluid scene to generate a twin fluid scene; continuously computing and updating a tracking-frame data set from the motion of the human torso and hand skeletons; and controlling a sensor to measure the real scene, obtaining real-environment color information, and establishing several virtual light sources in the twin fluid scene to simulate the real light field, so as to display the digital twin fluid scene. This embodiment keeps the research process standardized and its intended targets attainable, and improves the realism and immersion of the mixed scene.

Description

Virtual-real fusion digital twin fluid phenomenon simulation method
Technical Field
Embodiments of the disclosure relate to the technical field of mixed-reality fluid simulation and interaction, and in particular to a virtual-real fusion digital twin fluid phenomenon simulation method.
Background
Traditional computer-graphics fluid simulation and interactive applications are usually carried out entirely in a virtual simulation environment and cannot perceive or fuse with a real scene. Moreover, such applications mostly render image sequences and generate animations offline; limited by the solving efficiency and rendering precision of the physical system, they offer only weak interactive feedback and control to the user.
Compared with re-carving the fine geometric structure of a scene, twin reconstruction for fluid interaction focuses more on the real-time performance and robustness of the system: the geometric features that bound the simulation must be extracted quickly from a complex scene in real time, and the tracking system must keep running stably under low texture, changing illumination and rapid sensor motion. In addition, quickly and accurately tracking and extracting interactive planar structures, and even semantic information, during reconstruction is an important prerequisite for more realistic and intelligent fluid simulation. Reproducing fluids whose behavior is consistent with the real scene, and modeling and evolving them physically in a mixed-reality environment, still faces a series of challenges. First, the means of capturing and acquiring fluid data from the real scene must be chosen according to the actual application scenario. Second, the high time cost and low precision of traditional iterative optimization, which infers the velocity field from continuous surface geometry or a continuous density field, must be overcome; recovering the full flow-field data from the acquired observations in real time remains a challenging problem. Finally, further research is needed on how to quickly obtain the physical attributes that affect fluid behavior from observed data and thereby support physics-based simulation of the fluid's dynamic evolution, so that the virtual fluid can perceive and fuse with the real environment. For physically based real-time virtual-real fusion interaction, resolving close-range real-time interaction between the user, the scene boundary and virtual objects, and feeding the interaction forces of the virtual scene back to the user, is also key to increasing the immersion and realism of the user experience. Rendering large-scale fluids in real time in a virtual-real fusion environment raises problems both in acquiring scene information from the environment and in the speed and quality of fluid rendering. Although much research has addressed estimating the light field of a real scene, a complete global illumination distribution requires a close association between the light field and the objects in the scene, and how best to combine digital twinning of the three-dimensional scene with light-field acquisition remains a key issue.
The development of virtual-real fusion technology places higher demands on the spatio-temporal consistency of the virtual scene and the natural interaction of virtual objects. A physical simulation application for mixed-reality scenes therefore needs to sense and reconstruct the simulation-environment boundary quickly, acquire the attributes of the real fluid accurately, generate dynamic fluid, and let the interactive, evolvable virtual fluid respond to real user behavior with timely feedback. Around virtual-real fusion fluid simulation, three research threads are pursued: rapid three-dimensional scene reconstruction from depth images, physics-aware fluid parameter acquisition and evolution simulation, and virtual-real fused human-machine interaction and control; a typical demonstration application scenario is then built on top of them.
Disclosure of Invention
This summary is provided to introduce, in simplified form, concepts that are further described in the detailed description below. It is not intended to identify key or essential features of the claimed subject matter, nor to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a virtual-real fusion digital twin fluid phenomenon simulation method to solve one or more of the technical problems mentioned in the background section above.
The technical problems solved by this disclosure are: rapidly reconstructing the three-dimensional scene from depth images to obtain real spatial information, providing scene boundaries for virtual fluid simulation and interaction, and thus modeling the virtual fluid scene on real spatial features; improving the realism and efficiency of the simulation through physics-aware fluid parameter acquisition and evolution simulation; and, through virtual-real fused human-machine interaction and control, capturing the user's motion information and feeding it back to the virtual fluid so that the virtual fluid interacts with the real user, building a user-scene-physical-phenomenon virtual-real fusion interaction loop and improving the experience of virtual-real interaction.
Some embodiments of the present disclosure provide a virtual-real fusion digital twin fluid phenomenon simulation method, including: collecting an indoor scene depth map and an indoor scene RGB map, and completing the indoor scene depth map according to the indoor scene RGB map to generate a completed indoor scene depth map; reconstructing a three-dimensional scene point cloud according to the indoor scene RGB map, and determining the point cloud semantics of the three-dimensional scene point cloud; performing physics-aware inverse twin construction of the three-dimensional fluid scene to generate a twin fluid scene; continuously computing and updating a tracking-frame data set from the motion of the human torso and hand skeletons; and controlling a sensor to measure the real scene, obtaining real-environment color information, and establishing several virtual light sources in the twin fluid scene to simulate the real light field, so as to display the digital twin fluid scene.
The above embodiments of the present disclosure have the following advantages: the virtual-real fusion digital twin fluid phenomenon simulation method of some embodiments keeps the research process standardized and its intended targets attainable to the greatest extent, improves the realism and immersion of the mixed scene, and achieves real-time, efficient human-machine interaction with rich scenes and vivid detail.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a flow chart of some embodiments of a virtual-real fusion digital twin fluid phenomenon simulation method according to the present disclosure.
Fig. 2 is a flow chart of fluid parameter acquisition and evolution simulation of some embodiments of a virtual-real fusion digital twin fluid phenomenon simulation method according to the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" and "an" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of a virtual-real fusion digital twin fluid phenomenon simulation method according to the present disclosure. The method comprises the following steps:
step 101, collecting an indoor scene depth map and an indoor scene RGB map, and performing complement processing on the indoor scene depth map according to the indoor scene RGB map to generate a complement indoor scene depth map.
The depth map and RGB map of the indoor scene are acquired continuously. Taking the valid depth measurements in the depth map as a prior, and using the geometric range constraints provided by the color information in the RGB map, a convolutional neural network fuses these features into the image-reconstruction process of an autoencoder to generate a complete, hole-filled indoor scene depth map, improving the quality of the depth information.
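For illustration only, the following is a minimal PyTorch sketch of how such RGB-guided depth completion could be wired up. The module name, layer sizes and the masked fusion of measured and predicted depth are assumptions made to obtain a runnable example, not details taken from the disclosure.

```python
import torch
import torch.nn as nn

class RGBGuidedDepthCompletion(nn.Module):
    """Minimal sketch: fuse RGB features into a depth autoencoder.

    The disclosure only specifies "convolutional neural network + autoencoder";
    the layer counts and channel widths below are illustrative assumptions.
    """

    def __init__(self):
        super().__init__()
        # Encoder branches for the incomplete depth map and the RGB guide image.
        self.depth_enc = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.rgb_enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder reconstructs a dense depth map from the fused features.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, depth, rgb):
        # depth: (B, 1, H, W) with zeros at missing pixels; rgb: (B, 3, H, W).
        fused = torch.cat([self.depth_enc(depth), self.rgb_enc(rgb)], dim=1)
        completed = self.decoder(fused)
        # Keep the valid measured depths as a prior; only fill the holes.
        valid = (depth > 0).float()
        return valid * depth + (1.0 - valid) * completed
```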
Step 102, reconstructing a three-dimensional scene point cloud according to the indoor scene RGB map, and determining the point cloud semantics of the three-dimensional scene point cloud.
Well-distributed ORB (Oriented FAST and Rotated BRIEF) features and their neighborhood ranges are extracted on the indoor scene RGB image using parallel computation, and high-dimensional features of multi-level RGB-D image pairs are extracted with a two-dimensional convolutional neural network;
the depth of the feature points and their neighborhoods is obtained from the completed indoor scene depth map, adjacent-frame features are matched and associated through the features and their neighborhood geometric distribution, and the camera pose is then computed for the 2D-3D and 3D-3D feature-matching cases with the N-point perspective (PnP) method and the iterative closest point (ICP) method, respectively, the optimization objectives taking the form

$$T^{*}_{\mathrm{PnP}} = \arg\min_{T}\ \frac{1}{2}\sum_{i=1}^{n}\left\| u_i - \frac{1}{s_i} K T p_i \right\|_2^2, \qquad T^{*}_{\mathrm{ICP}} = \arg\min_{T}\ \frac{1}{2}\sum_{i=1}^{n}\left\| p_i' - T p_i \right\|_2^2,$$

where $u_i$ is the two-dimensional coordinate of the three-dimensional feature point $p_i$ in the image coordinate system, $p_i'$ is the three-dimensional feature associated with $p_i$ under a different viewing angle, $s_i$ is the depth of $p_i$ in the camera frame, $K$ is the camera intrinsic matrix, $T^{*}$ is the optimized sensor pose, and $\|\cdot\|_2^2$ denotes the squared 2-norm;
key image frames are selected according to image sharpness and motion range, their poses in the world coordinate system are estimated, and the key-frame pixels are projected into three-dimensional space in parallel using the recovered high-quality depth information to form a dense point cloud;
the discrete representations from different viewing angles are updated and fused in parallel by voxel filtering, yielding the initial representation of the reconstructed three-dimensional scene point cloud;
and the geometric-semantic features of the initial point cloud are encoded with a graph neural network model, and the complete geometric information and the point cloud semantic categories are predicted by a geometric completion decoder and a semantic classification decoder, respectively.
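As a concrete illustration of the 2D-3D case only, the sketch below estimates the camera pose with OpenCV's ORB detector and solvePnPRansac. The helper function, its signature and the frame-to-frame matching scheme are illustrative assumptions; the disclosure does not prescribe a particular implementation.

```python
import cv2
import numpy as np

def estimate_pose(rgb, prev_points_3d, prev_descriptors, K):
    """Sketch of 2D-3D camera pose estimation with ORB features + PnP.

    prev_points_3d (N, 3) and prev_descriptors come from a previous key frame;
    this helper and its interface are illustrative, not taken from the patent.
    """
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(gray, None)

    # Match current 2D features against the previous frame's 3D landmarks.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(prev_descriptors, descriptors)
    if len(matches) < 4:
        return None

    object_points = np.float32([prev_points_3d[m.queryIdx] for m in matches])
    image_points = np.float32([keypoints[m.trainIdx].pt for m in matches])

    # PnP with RANSAC minimizes the reprojection error || u_i - (1/s_i) K T p_i ||^2.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points, K, None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T
```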
Step 103, performing physics-aware inverse twin construction of the three-dimensional fluid scene to generate a twin fluid scene.
The time series $h$ of the fluid surface height field and the 2D label $l_s$ that distinguishes fluid from solid regions are concatenated and fed into a stacked autoencoder with skip connections, and the network is trained to output the 2D velocity field $u_s$ of the fluid surface; the loss function uses the $L_2$ norm.
Combining the height-field time series $h$ with the 2D label $l_s$, a convolutional neural network $G_{param}$ estimates the fluid viscosity $\hat{\nu}$ from the generated multi-frame surface velocity fields $u_s^{\{t,t+1,t+2\}}$, i.e.

$$\hat{\nu} = G_{param}\!\left(h,\ l_s,\ u_s^{\{t,t+1,t+2\}}\right);$$

the loss function uses the $L_1$ norm. At the same time, a 3D convolutional network $G_v$ infers the internal 3D velocity field along the gravity axis from the surface 2D velocity field.
The output of the network's final layer is multiplied element-wise by the obstacle mask so that the velocity in non-fluid regions is zero, creating the twin fluid scene. A flow diagram of fluid parameter acquisition and evolution simulation in some embodiments of the virtual-real fusion digital twin fluid phenomenon simulation method of the present disclosure is shown in Fig. 2.
Step 104, continuously computing and updating a tracking-frame data set according to the motion of the human torso and hand skeletons.
The tracking-frame data set is continuously updated from the motion of the human torso and hand skeletons. Each tracking frame in the data set contains a basic tracking data list recording the rotation matrix, scaling factor and displacement between consecutive tracking frames, together with per-frame skeleton position and velocity information and hand-tracking data such as palm orientation and palm-sphere radius; specific actions and gestures are recognized from multi-frame data.
To handle the rotation of bone particles, a rotation matrix relative to the center of the bone the particle belongs to is computed for each particle in each frame.
To handle the translation of the particles, the velocity of the torso bone or hand bone a particle belongs to is multiplied by a bone-specific translation coefficient: bones closer to the distal end use a larger coefficient, and bones closer to the proximal end a smaller one. The motion formula of each particle is

$$v_{t+1} = v_t \cdot ratio + (p_t \cdot Rot - p_t),$$

where $v_t$ is the velocity of the particle at frame $t$, $ratio$ is the translation-velocity coefficient of the bone the particle belongs to, $p_t$ is the position of the particle at frame $t$, and $Rot$ is the rotation matrix of the particle.
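A minimal NumPy sketch of this per-particle update follows. The velocity update implements the formula above; rotating about the bone center and the explicit position integration are assumptions added to make the example complete.

```python
import numpy as np

def update_bone_particles(positions, velocities, bone_center, bone_rotation,
                          translation_ratio):
    """Sketch of the per-frame update v_{t+1} = v_t * ratio + (Rot p_t - p_t).

    positions/velocities: (N, 3) arrays of particles attached to one bone;
    bone_rotation: (3, 3) frame-to-frame rotation of that bone. The NumPy
    layout and the bone-local rotation frame are illustrative assumptions.
    """
    # Rotate each particle about the center of the bone it belongs to.
    local = positions - bone_center
    rotated = local @ bone_rotation.T + bone_center

    # Distal bones use a larger translation_ratio, proximal bones a smaller one.
    new_velocities = velocities * translation_ratio + (rotated - positions)

    # Explicit position integration (assumption; not specified in the text).
    new_positions = positions + new_velocities
    return new_positions, new_velocities
```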
Step 105, controlling the sensor to measure the real scene to obtain real-environment color information, and establishing several virtual light sources in the twin fluid scene to simulate the real light field, so as to display the digital twin fluid scene.
The sensor is controlled to measure the real scene and obtain real-environment color information; this color information is light-field information and is stored as an image.
The image is processed with an image-space global illumination algorithm, and regions whose values are saturated in all channels are located and segmented.
A VPL (virtual point light) algorithm uses contour tracing to find the outline of each radiant spot.
The spot areas are computed and the larger spots are selected as light sources.
The direction and position of the incoming light are estimated from the spot-center positions in the environment image, and several virtual light sources are established in the scene to simulate the real light field, so as to display the digital twin fluid scene.
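To make the spot-detection part of this step concrete, the sketch below segments near-saturated regions of the environment image with OpenCV contour tracing and keeps the larger spots as candidate virtual point lights. The threshold values, minimum area and output format are assumptions; converting a spot center into a 3D light direction additionally requires the camera model of the environment capture.

```python
import cv2
import numpy as np

def estimate_virtual_point_lights(environment_image, saturation_threshold=250,
                                  min_spot_area=50):
    """Sketch of the VPL-style step: segment saturated spots in an environment
    image and return their centers and areas as candidate virtual point lights.
    """
    # A pixel belongs to a radiant spot if every channel is near-saturated.
    saturated = np.all(environment_image >= saturation_threshold, axis=2)
    mask = saturated.astype(np.uint8) * 255

    # Contour tracing yields the outline of each radiant spot.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    lights = []
    for contour in contours:
        area = cv2.contourArea(contour)
        if area < min_spot_area:
            continue  # keep only the larger spots as light sources
        moments = cv2.moments(contour)
        cx = moments["m10"] / moments["m00"]
        cy = moments["m01"] / moments["m00"]
        lights.append({"pixel": (cx, cy), "area": area})
    return lights
```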
The foregoing description covers only preferred embodiments of the present disclosure and the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of technical features described above, but also covers other technical solutions formed by any combination of those features or their equivalents without departing from the spirit of the invention, for example solutions in which the above features are replaced by (but not limited to) features with similar functions disclosed in the embodiments of the present disclosure.

Claims (6)

1. A virtual-real fusion digital twin fluid phenomenon simulation method, comprising:
collecting an indoor scene depth map and an indoor scene RGB map, and completing the indoor scene depth map according to the indoor scene RGB map to generate a completed indoor scene depth map;
reconstructing a three-dimensional scene point cloud according to the indoor scene RGB map, and determining the point cloud semantics of the three-dimensional scene point cloud;
performing physics-aware inverse twin construction of the three-dimensional fluid scene to generate a twin fluid scene;
continuously computing and updating a tracking-frame data set according to the motion of the human torso and hand skeletons; and
controlling a sensor to measure the real scene, obtaining real-environment color information, and establishing several virtual light sources in the twin fluid scene to simulate the real light field, so as to display the digital twin fluid scene.
2. The method of claim 1, wherein collecting the indoor scene depth map and the indoor scene RGB map, and completing the indoor scene depth map according to the indoor scene RGB map to generate the completed indoor scene depth map, comprises:
continuously acquiring the depth map and RGB map of the indoor scene, taking the valid depth measurements in the depth map as a prior, using the geometric range constraints provided by the color information in the RGB map, and fusing these features through a convolutional neural network into the image-reconstruction process of an autoencoder to generate a complete, hole-filled indoor scene depth map, so as to improve the quality of the depth information.
3. The method of claim 1, wherein reconstructing the three-dimensional scene point cloud from the indoor scene RGB map and determining the point cloud semantics of the three-dimensional scene point cloud comprises:
extracting, using parallel computation, well-distributed ORB features and their neighborhood ranges on the indoor scene RGB image, and extracting high-dimensional features of multi-level RGB-D image pairs with a two-dimensional convolutional neural network;
obtaining the depth of the feature points and their neighborhoods from the completed indoor scene depth map, matching and associating adjacent-frame features through the features and their neighborhood geometric distribution, and then computing the camera pose for the 2D-3D and 3D-3D feature-matching cases with the N-point perspective method and the iterative closest point method, respectively, the optimization objectives taking the form

$$T^{*}_{\mathrm{PnP}} = \arg\min_{T}\ \frac{1}{2}\sum_{i=1}^{n}\left\| u_i - \frac{1}{s_i} K T p_i \right\|_2^2, \qquad T^{*}_{\mathrm{ICP}} = \arg\min_{T}\ \frac{1}{2}\sum_{i=1}^{n}\left\| p_i' - T p_i \right\|_2^2,$$

where $u_i$ is the two-dimensional coordinate of the three-dimensional feature point $p_i$ in the image coordinate system, $p_i'$ is the three-dimensional feature associated with $p_i$ under a different viewing angle, $s_i$ is the depth of $p_i$ in the camera frame, $K$ is the camera intrinsic matrix, $T^{*}$ is the optimized sensor pose, and $\|\cdot\|_2^2$ denotes the squared 2-norm;
extracting key image frames according to image sharpness and motion range, estimating their poses in the world coordinate system, and projecting the key-frame pixels into three-dimensional space in parallel using the recovered high-quality depth information to form a dense point cloud;
updating and fusing the discrete representations from different viewing angles in parallel by voxel filtering, yielding the initial representation of the reconstructed three-dimensional scene point cloud; and
encoding the geometric-semantic features of the initial point cloud with a graph neural network model, and predicting the complete geometric information and the point cloud semantic categories with a geometric completion decoder and a semantic classification decoder, respectively.
4. The method of claim 1, wherein performing physics-aware inverse twin construction of the three-dimensional fluid scene to generate the twin fluid scene comprises:
concatenating the time series $h$ of the fluid surface height field with the 2D label $l_s$ that distinguishes fluid from solid regions, feeding them into a stacked autoencoder with skip connections, and training the network to output the 2D velocity field $u_s$ of the fluid surface, with a loss function using the $L_2$ norm;
combining the height-field time series $h$ with the 2D label $l_s$, and using a convolutional neural network $G_{param}$ to estimate the fluid viscosity $\hat{\nu}$ from the generated multi-frame surface velocity fields $u_s^{\{t,t+1,t+2\}}$, i.e.

$$\hat{\nu} = G_{param}\!\left(h,\ l_s,\ u_s^{\{t,t+1,t+2\}}\right),$$

with a loss function using the $L_1$ norm, while a 3D convolutional network $G_v$ infers the internal 3D velocity field along the gravity axis from the surface 2D velocity field; and
multiplying the output of the network's final layer element-wise by the obstacle mask so that the velocity in non-fluid regions is zero, creating the twin fluid scene.
5. The method of claim 4, wherein continuously computing and updating the tracking-frame data set according to the motion of the human torso and hand skeletons comprises:
continuously updating the tracking-frame data set from the motion of the torso and hand skeletons, wherein each tracking frame in the tracking-frame data set contains a basic tracking data list recording the rotation matrix, scaling factor and displacement between consecutive tracking frames, together with per-frame skeleton position and velocity information and hand-tracking data such as palm orientation and palm-sphere radius, and recognizing specific actions and gestures from multi-frame data;
handling the rotation of bone particles by computing, for each particle in each frame, a rotation matrix relative to the center of the bone the particle belongs to; and
handling the translation of the particles by multiplying the velocity of the torso bone or hand bone a particle belongs to by a bone-specific translation coefficient, bones closer to the distal end using a larger coefficient and bones closer to the proximal end a smaller one, the motion formula of each particle being

$$v_{t+1} = v_t \cdot ratio + (p_t \cdot Rot - p_t),$$

where $v_t$ is the velocity of the particle at frame $t$, $ratio$ is the translation-velocity coefficient of the bone the particle belongs to, $p_t$ is the position of the particle at frame $t$, and $Rot$ is the rotation matrix of the particle.
6. The method of claim 1, wherein controlling the sensor to measure the real scene, obtaining real-environment color information, and establishing several virtual light sources in the twin fluid scene to simulate the real light field so as to display the digital twin fluid scene, comprises:
controlling the sensor to measure the real scene and obtain real-environment color information, wherein the real-environment color information is light-field information and is stored as an image;
processing the image with an image-space global illumination algorithm, and locating and segmenting regions whose values are saturated in all channels;
using a VPL algorithm with contour tracing to find the outline of each radiant spot;
computing the spot areas and selecting the larger spots as light sources; and
estimating the direction and position of the incoming light from the spot-center positions in the environment image, and establishing several virtual light sources in the scene to simulate the real light field, so as to display the digital twin fluid scene.
CN202311014639.0A 2023-08-11 2023-08-11 Virtual-real fusion digital twin fluid phenomenon simulation method Pending CN117115398A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311014639.0A CN117115398A (en) 2023-08-11 2023-08-11 Virtual-real fusion digital twin fluid phenomenon simulation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311014639.0A CN117115398A (en) 2023-08-11 2023-08-11 Virtual-real fusion digital twin fluid phenomenon simulation method

Publications (1)

Publication Number Publication Date
CN117115398A true CN117115398A (en) 2023-11-24

Family

ID=88803133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311014639.0A Pending CN117115398A (en) 2023-08-11 2023-08-11 Virtual-real fusion digital twin fluid phenomenon simulation method

Country Status (1)

Country Link
CN (1) CN117115398A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117809498A (en) * 2024-01-09 2024-04-02 北京千乘科技有限公司 Virtual-real interaction multidimensional twinning projection road network system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination