CN116778285A - Big data fusion method and system for constructing digital twin base - Google Patents

Info

Publication number: CN116778285A
Authority: CN (China)
Prior art keywords: scale, scene, digital twin, constructing, dimensional
Legal status: Pending (assumed; Google has not performed a legal analysis)
Application number: CN202310607507.2A
Other languages: Chinese (zh)
Inventors: 张子谦, 施康, 李盛盛, 佘运波, 王沈亮, 林峰
Current Assignee: Nari Information and Communication Technology Co
Original Assignee: Nari Information and Communication Technology Co
Application filed by Nari Information and Communication Technology Co
Priority to CN202310607507.2A
Publication of CN116778285A

Classifications

    • G06V 10/80 — Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06V 20/13 — Satellite images


Abstract

A multi-source heterogeneous big data fusion method and system for constructing a digital twin base, wherein the method comprises the following steps: step 1, constructing a three-dimensional digital twin scene based on a first scale, a second scale and a third scale to be acquired; step 2, realizing scene construction at the first scale by fusing map and satellite data, realizing scene construction at the second scale by fusing laser point clouds and oblique images, and realizing scene construction at the third scale by overlaying feature-point texture images through an objectified modeling method; and step 3, stacking the three-dimensional scenes at the first, second and third scales layer by layer to realize seamless fusion of the visual structure in the three-dimensional digital twin scene. The method builds a fusion technique for video textures and three-dimensional scenes based on feature-point reference images, and improves the visual simulation, interaction and service management level of the power grid.

Description

Big data fusion method and system for constructing digital twin base
Technical Field
The application relates to the field of power systems, in particular to a big data fusion method and a big data fusion system for constructing a digital twin base.
Background
At the hardware level, digital twin presentation, interaction and visualization can be divided into traditional screen-based modes (e.g., mobile phones, desktops, large-screen displays) and the rapidly developing VR/AR application modes. In terms of presentation service logic, they fall into two basic modes: scene-interactive and scene-embedded. Scene-embedded presentation embeds content into a virtual scene to provide diverse visual elements, and usually relies on data preprocessing, loading and offline rendering to improve user experience. Scene-interactive presentation requires scene rendering that conveys a sense of twinning; the scene is the container into which visual elements are embedded. The rendering engine must respond to user behavior, and its execution must reflect user interaction in real time. Of the two, the scene-interactive mode is the primary mode in digital twin applications. Scene-embedded applications are embedded into the twin scene as content elements, converting diverse information into animations, videos, diagrams and other forms, thereby achieving the fused presentation of multiple information elements.
The scene-interactive and scene-embedded data fusion models are the two common basic modes of data fusion. Scene-interactive visualization is characterized by back-end data distribution and front-end real-time rendering: the front end issues data requests based on the scene configuration, and the back end receives the requests and distributes data. The core problem is how to respond rapidly to a large number of client requests, so that untimely data scheduling does not delay front-end loading and degrade user experience. In this flow, the data fusion back end processes, aggregates and caches data from PMS and other distributed data sources according to the client request information. The data can be broadly divided into grid spatio-temporal elements, scene file data of grid equipment, and supporting data for other scene presentations. Through type identification, format conversion, data preprocessing and similar steps, the data are integrated or cut into the types supported by the rendering layer and cached by a cache manager according to data type. The data to be visualized are sent to the front end under a suitable distribution strategy; to support front-end requests, metadata and data indexes must be built.
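As a rough illustration of the cache-by-type step described above, the following Python sketch shows a minimal per-type LRU cache; the class and names are illustrative assumptions, not taken from the patent:

```python
from collections import OrderedDict

class TileCache:
    """Minimal per-data-type LRU cache, sketching the back-end cache
    manager that stores processed data separately by data type."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.stores = {}  # data type -> OrderedDict of tile_id -> payload

    def put(self, data_type, tile_id, payload):
        store = self.stores.setdefault(data_type, OrderedDict())
        store[tile_id] = payload
        store.move_to_end(tile_id)
        if len(store) > self.capacity:
            store.popitem(last=False)  # evict least recently used entry

    def get(self, data_type, tile_id):
        store = self.stores.get(data_type)
        if store is None or tile_id not in store:
            return None  # cache miss -> would fetch from PMS / data source
        store.move_to_end(tile_id)
        return store[tile_id]

cache = TileCache(capacity=2)
cache.put("terrain", (0, 0, 0), b"dem-bytes")
cache.put("terrain", (1, 0, 0), b"dem-bytes-2")
hit = cache.get("terrain", (0, 0, 0))
cache.put("terrain", (1, 1, 0), b"dem-bytes-3")  # evicts (1, 0, 0)
```

A real back end would key the stores by the data types named in the text (spatio-temporal elements, scene files, supporting data) and front the cache with the metadata/data indexes.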
Scene-embedded data presentation is characterized by back-end content generation and front-end display. In this flow, the request information contains the data content to be accessed, the style content to be associated, and user-customized visual elements. Based on this content, the back-end visualization configurator requests the corresponding data from PMS or other libraries. The configurator then parses or matches the data into corresponding visual data elements such as dimensional information, mathematical expressions and numerical sequences. The style library associated with the configurator drives a customized data fusion drawing engine to generate the scene-embedded presentation data for the front end. Back-end fusion rendering must consider the resolution of different clients, the environmental information of the scene, and so on, and generate corresponding fused data products. This visualization flow allows customized generation of monitoring and analysis data of power equipment and facilities into basic images such as tables and data line charts; statistical graphs such as histograms and error-bar charts; and various CAD drawings, wiring diagrams, and ledger diagrams of equipment and facilities within a substation. All generated fusion results may be distributed as png, svg, avi video streams, and the like.
However, grid equipment management has a cross-scale characteristic, and the prior art still lacks a digital twin base that supports integrated reuse of software and scenes. Facing multiple system platforms, multiple operating environments, multiple kinds of basic software and continuously growing field applications, the various visual data engines adopt independent rendering engines, data models and data interfaces, making it difficult to share and integrate specific application scenes. A digital twin base capable of reusing software modules and scenes is needed to provide flexible, integrated digital twin support.
In view of the foregoing, there is a need for a method and system for large data fusion for constructing a digital twin base.
Disclosure of Invention
To overcome the defects of the prior art, the application provides a big data fusion method and system for constructing a digital twin base. When constructing the digital twin base of main power transmission and transformation equipment, driven by the visualization and exchange of information, active and passive information fusion is carried out according to the visualization requirements of different types of data.
The application adopts the following technical scheme.
The first aspect of the application relates to a multi-source heterogeneous big data fusion method for constructing a digital twin base, comprising the following steps: step 1, constructing a three-dimensional digital twin scene based on a first scale, a second scale and a third scale to be acquired; step 2, realizing scene construction at the first scale by fusing map and satellite data, realizing scene construction at the second scale by fusing laser point clouds and oblique images, and realizing scene construction at the third scale by overlaying feature-point texture images through an objectified modeling method; and step 3, stacking the three-dimensional scenes at the first, second and third scales layer by layer to realize seamless fusion of the visual structure in the three-dimensional digital twin scene.
Preferably, the scene at the first scale carries topographic imagery, including terrain information, above-ground and underground integrated information, and sea-land integrated information; the scene at the second scale carries the urban built environment, including terrain slices, above-ground building information, and indoor-outdoor integrated information; the scene at the third scale carries object-level scenes of power transmission and transformation equipment, including the twin scene information of the equipment and its three-dimensional component information.
Preferably, the scene construction at the first scale includes: performing coordinate transformation and color balancing on the satellite data and then slicing it; edge-matching and fusing slices from different time periods to generate a global image; generating image tiles from the global image level by level using an image pyramid; adding the image tiles to the map after coordinate transformation and registration; and obtaining the scene at the first scale through rendering.
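The level-by-level pyramid and tiling step above can be sketched as follows; this is a minimal illustration assuming a square grayscale array, simple 2x2 averaging, and a fixed tile size, none of which the patent specifies:

```python
import numpy as np

def build_pyramid(image, min_size=64):
    """Generate coarse-resolution levels by 2x2 averaging, as in
    step-by-step image-pyramid tile generation."""
    levels = [image]
    while min(levels[-1].shape[:2]) // 2 >= min_size:
        img = levels[-1]
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        img = img[:h, :w]  # crop to even dimensions
        coarser = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        levels.append(coarser)
    return levels

def slice_tiles(level, tile=256):
    """Cut one pyramid level into fixed-size tiles keyed by (row, col)."""
    h, w = level.shape[:2]
    return {(r // tile, c // tile): level[r:r + tile, c:c + tile]
            for r in range(0, h, tile) for c in range(0, w, tile)}

img = np.ones((512, 512))
pyr = build_pyramid(img, min_size=64)   # levels: 512, 256, 128, 64
tiles = slice_tiles(pyr[0], tile=256)   # four 256x256 tiles
```

Production tiling engines additionally handle georeferencing, color balancing and compression per tile.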
Preferably, the scene construction at the second scale includes: rendering the three-dimensional scene at the second scale with the digital elevation model and the digital surface model as the three-dimensional substrate, the oblique images as the display background, and the three-dimensional model constructed from the laser point cloud as the bearing background.
Preferably, the scene construction at the third scale includes: acquiring real-time video with an embedded camera, implanting the real-time video into the three-dimensional model at the third scale as a texture map, and obtaining the virtual scene through real-time rendering. In the texture mapping process, the projection parameters of the embedded camera are calculated, the texture of the real-time video is coordinate-mapped based on the projection parameters, and the real-time video is then embedded into the three-dimensional model.
Preferably, the coordinate mapping of the real-time video texture further comprises: collecting an accurate image of the virtual scene as a camera positioning guide image, and extracting original feature points from the guide image using a feature point recognition algorithm; evaluating the original feature points, and designating as standby feature points those whose position is fixed in the current virtual scene and whose feature saliency meets a preset requirement; and binding the standby feature points to the corresponding position spaces of the virtual scene, realizing the coordinate mapping of the real-time video based on the association between the standby feature points and their position spaces.
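A minimal sketch of this screening-and-binding of standby feature points might look as follows; the saliency formula and the data layout are illustrative assumptions, not the patent's:

```python
import numpy as np

def saliency(patch):
    """Score a candidate point by how much its centre pixel differs from
    the surrounding pixels -- a simple proxy for 'feature saliency'."""
    centre = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    return float(np.abs(patch - centre).mean())

def select_guide_points(candidates, scene_coords, threshold=10.0):
    """Keep fixed-position candidates whose saliency clears the preset
    threshold, and bind each survivor to its 3D scene coordinate."""
    bound = {}
    for pid, (patch, is_fixed) in candidates.items():
        if is_fixed and saliency(patch) >= threshold:
            bound[pid] = scene_coords[pid]  # (x, y, z) in the virtual scene
    return bound

bright = np.zeros((5, 5)); bright[2, 2] = 255.0   # salient corner-like patch
flat = np.zeros((5, 5))                           # featureless patch
candidates = {"p1": (bright, True), "p2": (flat, True), "p3": (bright, False)}
coords = {"p1": (1.0, 2.0, 3.0), "p2": (0.0, 0.0, 0.0), "p3": (4.0, 5.0, 6.0)}
bound = select_guide_points(candidates, coords)   # only "p1" survives
```

In practice the saliency test would also account for illumination, brightness and acquisition resolution, as the next paragraph describes.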
Preferably, at any viewing angle in the virtual scene, the number of visible standby feature points is greater than a set threshold.
Preferably, the feature saliency is calculated based on the difference in image feature values between a standby feature point and the surrounding pixels; the difference in image feature values includes differences under the influence of one or more of the illumination, brightness and video acquisition resolution of the virtual scene.
Preferably, the positioning guide image is bound to the three-dimensional model of the virtual scene as a hidden texture map.
The second aspect of the application relates to a multi-source heterogeneous big data fusion system for constructing a digital twin base, used to implement the method of the first aspect. The system comprises a dividing module, a construction module and a fusion module. The dividing module is used to determine, based on the three-dimensional digital twin scene, the first, second and third scales to be acquired. The construction module is used to realize scene construction at the first scale by fusing map and satellite data, at the second scale by fusing laser point clouds and oblique images, and at the third scale by overlaying feature-point texture images through an objectified modeling method. The fusion module is used to stack the three-dimensional scenes at the three scales layer by layer to realize seamless fusion of the visual structure in the three-dimensional digital twin scene.
Compared with the prior art, the big data fusion method and system for constructing a digital twin base can be applied to the digital twin of a power grid. Driven by interaction and visualization, an on-demand fusion strategy for multi-source heterogeneous data is studied; the request models of the server side and the client side are considered from the two dimensions of application scenes and reuse capability, and an extensible container-based multi-source data fusion framework is constructed. For virtual scene interaction and embedded presentation, an efficient server-side cache and a back-end content generation module for multi-source heterogeneous data are studied. Considering scene presentation scale and granularity, data fusion components based on map data fusion, remote sensing image fusion and three-dimensional scene fusion are studied, chiefly solving the problem of integrating power transmission and transformation equipment and their auxiliary facilities with the surrounding environment in the virtual scene. Based on research results on free-viewpoint video from camera arrays, a fusion technique for video textures and three-dimensional scenes based on feature-point reference images is constructed.
The beneficial effects of the application also include:
1. Focusing on the data sharing service mode between the grid resource business middle platform and the digital twin base, the method enriches the digital twin data dimensions. Oriented to typical application scenarios such as operation and maintenance, overhaul management and emergency handling of main power transmission and transformation equipment, it realizes data interchange with the existing grid resource business middle platform, constructs the digital twin base of main power transmission and transformation equipment, supports the three-dimensional data sharing service capability of the middle platform, and develops fused applications with the new-generation lean equipment asset management system.
2. With the rapid development of power informatization, grid resource data are increasingly complex. The grid resource business middle platform can ensure the integrity, accuracy and timeliness of grid resource data and provide data support for the effective acquisition and management of field resource data.
3. Focusing on automatic modeling technology, digital twin models with millimeter-level precision can be generated in a short time, realizing rapid and accurate automatic modeling, display of massive high-precision three-dimensional data, and multi-service simulation interaction for the power grid, thus promoting the application of rapid accurate modeling in grid digital twin services. The technical results can be applied to grid planning simulation, three-dimensional visual design, operation and inspection decision support, visual operation monitoring, visual safety supervision and training, and other fields, greatly improving the visual simulation, interaction and service management level of the power grid.
Drawings
FIG. 1 is a schematic diagram of three-dimensional scene data fusion in a big data fusion method for constructing a digital twin base of the present application;
fig. 2 is a schematic diagram of video scene fusion rendering based on camera position oriented texture in the big data fusion method for constructing a digital twin base of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the technical solutions of the present application are described below clearly and completely with reference to the accompanying drawings of the embodiments. The described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those of ordinary skill in the art from the embodiments described herein without inventive effort fall within the scope of the application.
In digital twin applications, geographic space is the natural container for information aggregation. The cross-scale characteristic of grid equipment and facility management requires support both for large-scale geographic scene data and for small, fine-scale three-dimensional models and twin scene applications. The free-viewpoint video of the camera array studied in this application is also important fusion content: superimposing the optimal free-viewpoint video in the virtual scene forms a fused video scene with spatial depth, enhancing user experience and improving analysis capability.
The application relates to a big data fusion method for constructing a digital twin base, which comprises steps 1 to 3.
Step 1: realizing the fusion service of map and satellite data.
Currently, data rendering of mainstream maps is performed in tile mode. Using the tile data structure, a map visualization engine resolves the asymmetric mapping between non-homogeneous entities in geographic space and homogeneous pixels in screen space. After geographic data from different sources are coordinate-transformed and registered, the data tile production tool cuts the data into packets according to the tiling rules and fuses them into cache tiles according to the display range and zoom level of the front end. The front-end engine requests and schedules tiles based on the extent covered by the display area and the displayable geographic content. To reduce the computing load of the front end, the back end should construct and render data content with the lowest possible frame-level interaction cost, ensuring that vertex buffer and index buffer data directly usable for rendering are prepared. The various satellite image data likewise need uniform coordinate transformation and color balancing before slicing. Satellite images from different acquisition periods must also be fused and edge-matched into a logically seamless global image, from which coarse-resolution image tiles are generated level by level according to the image pyramid.
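The front-end request scheduling by display extent can be illustrated with the standard Web-Mercator (slippy-map) tiling scheme; this scheme is an assumption for illustration, since the patent does not fix a tile addressing convention:

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Web-Mercator tile index (x, y) for a longitude/latitude at a
    given zoom level, following the common z/x/y slippy-map scheme."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def tiles_for_extent(lon_min, lat_min, lon_max, lat_max, zoom):
    """All tile indices covering a display extent -- the set the front
    end requests and the back end serves from its cache tiles."""
    x0, y0 = lonlat_to_tile(lon_min, lat_max, zoom)  # top-left corner
    x1, y1 = lonlat_to_tile(lon_max, lat_min, zoom)  # bottom-right corner
    return [(x, y, zoom)
            for x in range(x0, x1 + 1) for y in range(y0, y1 + 1)]
```

A scheduler would diff this tile set against the tiles already on screen and request only the missing ones.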
Satellites capture images of the earth at different times, and the map information also includes elevation, topography and other attributes of different geographic positions.
Step 2: realizing seamless fusion of the three-dimensional scene data.
The digital twin scenes of power equipment involve different presentation levels: the scales range from the world, region and city down to the neighborhood and the substation room, and the surrounding natural environment includes plains, mountains, islands, and even unconventional spaces such as tunnels, bridges and culverts. Fusing fine twin scenes with such surroundings is very difficult.
At present, the three-dimensional data sources within the national grid system are varied and structurally complex, including oblique photography and lidar, with content in many forms such as oblique images, laser point clouds and high-precision manual models; the three-dimensional models of the buildings bearing power transmission and transformation facilities have complex nested structures and varied spatial arrangement rules. The difficulty of digital twin scene fusion therefore lies in organizing multi-source heterogeneous data and rendering complex structures seamlessly at the visual level. The workflow adopts a four-layer logical architecture with front-end/back-end separation, successively realizing back-end scheduling, data organization, visual rendering and dynamic interaction.
Fig. 1 is a schematic diagram of three-dimensional scene data fusion in the big data fusion method for constructing a digital twin base. As shown in fig. 1, the elements of three-dimensional digital twin scene construction are: natural-resource three-dimensional substrates such as the digital elevation model (DEM) and digital surface model (DSM); high-resolution remote sensing images as the display background; vector topography, building surface models and underground three-dimensional structure models as the bearing background; and high-precision three-dimensional component models of the main power transmission and transformation equipment as the core. The three-dimensional scene is built through data analysis and processing, three-dimensional modeling, data warehousing, service processing, index construction and subsequent rendering, combined with thematic information such as the surrounding environment, land surface coverage and supporting structural engineering, and turned into an application scene through geometric simplification, texture compression, style configuration and multi-level caching. The main concern is seamless fusion between terrain slices at different levels, above-ground buildings and supporting structural engineering, forming a fused scene expression with two- and three-dimensional integration, above-ground and underground integration, sea-land integration, indoor-outdoor integration, and free roaming switching.
Seamless fusion of the three-dimensional data is carried out according to the precision requirements of different targets: the large-scale environment is constructed from terrain and imagery, and urban built-environment information is constructed from laser point clouds and oblique images; for efficiency, the same tile structure as for vector data can be adopted. The fine twin scenes of power transmission and transformation equipment require high-precision objectified modeling, overlaid with feature-point texture images that can guide video positioning. Objectifying the modeling results makes full use of the three-dimensional component library and reduces modeling cost. The three-dimensional structures of different granularities are stacked layer by layer from low to high, segmented and fused by geometric algorithms, and their textures baked, completing the seamless fusion of the visual structure.
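The low-to-high, layer-by-layer stacking principle can be illustrated with a simple override rule; this is purely schematic (cells and layer contents are invented here), and the geometric segmentation and texture baking are not modeled:

```python
def fuse_layers(layers):
    """Stack scene layers from coarse to fine: each finer layer
    overrides the coarser content wherever it has data (None = no data)."""
    fused = dict(layers[0])
    for layer in layers[1:]:
        for cell, content in layer.items():
            if content is not None:
                fused[cell] = content
    return fused

layers = [
    {"a": "terrain", "b": "terrain"},   # first scale: terrain + imagery
    {"a": "building", "b": None},       # second scale: built environment
    {"a": "device"},                    # third scale: fine equipment model
]
fused = fuse_layers(layers)  # cell "a" shows the device, "b" the terrain
```

The real pipeline replaces this per-cell override with geometric cutting along layer boundaries so the transition between scales is visually seamless.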
Step 3: realizing adaptive image fusion based on free-viewpoint video scenes with embedded camera-positioning guide feature-point textures.
Currently, high-precision three-dimensional modeling can achieve extremely high accuracy, and the resulting virtual scene can vividly restore the geometric structure of the real scene. Combining real-time video as a texture map with the three-dimensional model for real-time rendering yields interactive virtual reality from any viewing angle of the three-dimensional structure. The key to correct video texture mapping is to calculate the projection parameters of the camera and, from the camera viewpoint, the coordinate mapping matrix of the video texture. The application aims to realize adaptive texture mapping for a camera at any position by embedding positioning guide feature-point textures into the high-precision three-dimensional model and using them to derive the position of the video camera.
Fig. 2 is a schematic diagram of video scene fusion rendering based on camera-position guide textures in the big data fusion method for constructing a digital twin base. As shown in fig. 2: First, thanks to continuous research in machine vision, image processing and deep learning, image feature point algorithms such as SIFT and ORB are widely applied to image stitching, rectification and SLAM; these methods extract feature points from an image in real time and are invariant to scale and rotation. Second, on the basis of the high-precision model of the real three-dimensional environment, high-precision image data of the virtual scene are collected as camera positioning guide images, and their feature points are identified and extracted with a feature point recognition algorithm; feature points with significant features, computational stability and fixed positions are preferred as positioning guide feature points, and their feature vectors are recorded. Third, the positioning guide feature points are bound to the corresponding spatial positions in the virtual three-dimensional scene, ensuring that enough guide feature points are visible from any viewing angle; one possible approach is to bind the positioning guide image to the three-dimensional model as a hidden texture map. Fourth, when a video image needs to be fused with the three-dimensional scene and the camera parameters have changed, the feature points of the real-time image are extracted with an image feature point algorithm and matched with the feature points in the positioning guide image; the three-dimensional space coordinates of the matched feature points are then obtained from the spatial mapping between the guide image and the three-dimensional scene.
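The fourth step's matching between real-time image feature points and guide-image feature points can be sketched with a Lowe-style ratio test over plain descriptor vectors; this is a stand-in for the SIFT/ORB matchers the text mentions, with toy two-dimensional descriptors:

```python
import numpy as np

def match_features(desc_live, desc_guide, ratio=0.75):
    """Ratio-test nearest-neighbour matching: a live-frame descriptor
    matches a guide descriptor only if its best match is clearly closer
    than the second best."""
    matches = []
    for i, d in enumerate(desc_live):
        dists = np.linalg.norm(desc_guide - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:  # best clearly beats second best
            matches.append((i, int(j)))
    return matches

guide = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # guide descriptors
live = np.array([[0.1, 0.0], [9.9, 0.2]])                 # live-frame descriptors
matches = match_features(live, guide)
```

Each match then inherits the 3D scene coordinate bound to the guide feature point, giving the 3D-2D correspondences the next step needs.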
Fifth, the camera parameters are calculated in real time from the mapping between the three-dimensional coordinates of the matched feature points and their positions in the video image. Sixth, the video texture mapping is completed using the calculated camera parameters.
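One classical way to realise the fifth step, recovering camera parameters from matched 3D scene coordinates and their 2D video positions, is the Direct Linear Transform. The patent does not name an algorithm, so this is only an illustrative sketch:

```python
import numpy as np

def dlt_projection(points_3d, points_2d):
    """Direct Linear Transform: recover the 3x4 camera projection matrix
    from >= 6 matched 3D scene points and their 2D image positions."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)  # right singular vector of smallest value

def project(P, point_3d):
    """Map a 3D scene point into the image with a projection matrix."""
    x = P @ np.append(point_3d, 1.0)
    return x[:2] / x[2]

# Synthetic check: project known points, then recover the matrix.
P_true = np.array([[800.0, 0, 320, 10], [0, 800, 240, 20], [0, 0, 1, 5]])
pts3d = [(0, 0, 2), (1, 0, 3), (0, 1, 4), (1, 1, 6),
         (2, 1, 3), (1, 2, 5), (2, 2, 4), (0, 2, 3)]
pts2d = [project(P_true, p) for p in pts3d]
P_est = dlt_projection(pts3d, pts2d)
```

With the recovered matrix, the sixth step projects each model vertex into the video frame to obtain its texture coordinate.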
By the method, seamless fusion of the visual structure can be realized.
The second aspect of the present application relates to a big data fusion system for constructing a digital twin base, which is used for implementing a big data fusion method for constructing a digital twin base in the first aspect of the present application.
It may be understood that, in order to implement each function in the method provided in the foregoing embodiment of the present application, the system includes a corresponding hardware structure and/or software module for executing each function. Those of skill in the art will readily appreciate that the various illustrative algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application can divide the functional modules of the system according to the method example, for example, each functional module can be divided corresponding to each function, and two or more functions can be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
The system may be implemented by one or more communicatively connected devices. Each device includes at least one processor, a bus system, and at least one communication interface. The processor may be a central processing unit, a field-programmable gate array, an application-specific integrated circuit, or other hardware. The memory may be a read-only memory, a random-access memory, or the like; the memory may be stand-alone and coupled to the processor via the bus, or may be integrated with the processor. The hard disk may be a mechanical disk, a solid-state disk, or the like. The embodiments of the present application are not limited in this respect. The above embodiments may be implemented in software or in hardware. When implemented using a software program, they may take the form of a computer program product, which includes one or more computer instructions.
When the computer program instructions are loaded and executed on a computer, the corresponding functions are implemented according to the procedures provided by the embodiments of the present application. The computer program instructions referred to herein may be assembly instructions, machine instructions, code written in a programming language, or the like.
Compared with the prior art, the big data fusion method and system for constructing a digital twin base can be applied to the digital twin of a power grid. Driven by interaction and visualization requirements, an on-demand fusion strategy for multi-source heterogeneous data is studied: the request models of the server side and the client side are considered comprehensively from the two dimensions of application scenario and reuse capability, and an extensible, container-based multi-source data fusion framework is constructed; oriented to virtual-scene interaction and embedded presentation, an efficient server-side cache and a back-end content generation module for multi-source heterogeneous data are studied; taking scene presentation scale and granularity into account, data fusion components based on map data fusion, remote-sensing image data fusion and three-dimensional scene fusion are studied, mainly solving the problem of fusing and integrating power transmission and transformation equipment, its auxiliary facilities and the surrounding environment into the virtual scene; and, building on research results on free-viewpoint video from camera arrays, a fusion technique for video textures and three-dimensional scenes based on feature-point reference images is constructed.
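For the map and remote-sensing image fusion mentioned above, a global image is commonly cut into a multi-level tile pyramid so that the client can request imagery at the presentation scale. The following is a minimal numpy sketch under stated assumptions (2x2 block-average downsampling, a fixed 256-pixel tile edge, illustrative function names); production systems use proper resampling and geo-referenced tile schemes:

```python
import numpy as np

TILE = 256  # tile edge length in pixels (a common web-map convention)

def downsample(img):
    """Halve resolution by 2x2 block averaging (one pyramid step)."""
    h, w = img.shape[:2]
    return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2, -1).mean(axis=(1, 3))

def build_tile_pyramid(image, levels):
    """Cut each pyramid level of a global image into TILE x TILE tiles,
    keyed by (level, row, col); level 0 is the full-resolution image."""
    tiles = {}
    img = np.asarray(image, dtype=float)
    if img.ndim == 2:          # promote grayscale to a 1-channel image
        img = img[..., None]
    for level in range(levels):
        rows = -(-img.shape[0] // TILE)   # ceiling division
        cols = -(-img.shape[1] // TILE)
        for r in range(rows):
            for c in range(cols):
                tiles[(level, r, c)] = img[r*TILE:(r+1)*TILE, c*TILE:(c+1)*TILE]
        img = downsample(img)
    return tiles
```

After coordinate transformation and registration, tiles keyed this way can be streamed into the base map on demand, which is the role the image pyramid plays in the first-scale scene construction.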
Finally, it should be noted that the above embodiments are only for illustrating the technical solution of the present application and not for limiting the same, and although the present application has been described in detail with reference to the above embodiments, it should be understood by those skilled in the art that: modifications and equivalents may be made to the specific embodiments of the application without departing from the spirit and scope of the application, which is intended to be covered by the claims.

Claims (10)

1. A multi-source heterogeneous big data fusion method for constructing a digital twin base, the method comprising the steps of:
step 1, determining a first scale, a second scale and a third scale to be acquired for constructing a three-dimensional digital twin scene;
step 2, realizing scene construction at the first scale by fusing map and satellite data, realizing scene construction at the second scale by fusing laser point clouds and oblique images, and realizing scene construction at the third scale by overlaying feature-point texture images through an objectified modeling method;
and step 3, stacking the three-dimensional scenes at the first scale, the second scale and the third scale layer by layer to realize seamless fusion of the visual structure in the three-dimensional digital twin scene.
2. A multi-source heterogeneous big data fusion method for constructing a digital twin base according to claim 1, wherein:
the scene at the first scale is used for bearing topographic imagery and comprises topographic information, integrated aboveground-underground information and integrated sea-land information;
the scene at the second scale is used for bearing the urban built environment and comprises terrain slices, aboveground building information and indoor-outdoor integrated information;
the scene at the third scale is used for bearing object scenes of power transmission and transformation equipment and comprises twin-scene information of the power transmission and transformation equipment and three-dimensional component information of the power transmission and transformation equipment.
3. A multi-source heterogeneous big data fusion method for constructing a digital twin base according to claim 2, wherein:
the scene construction at the first scale comprises:
carrying out coordinate transformation and color homogenization on the satellite data and then slicing it, carrying out edge-matching fusion on slices from different time periods to generate a global image, and generating image tiles level by level from the global image using an image pyramid;
and adding the image tiles into a map after coordinate transformation and registration, and obtaining a scene under a first scale through rendering.
4. A multi-source heterogeneous big data fusion method for constructing a digital twin base according to claim 2, wherein:
the scene construction at the second scale comprises:
and rendering the three-dimensional scene at the second scale with the digital elevation model and the digital surface model as the three-dimensional substrate, the oblique image as the display background, and the three-dimensional model constructed from the laser point cloud as the bearing background.
5. A multi-source heterogeneous big data fusion method for constructing a digital twin base according to claim 2, wherein:
the scene construction at the third scale includes:
acquiring real-time video by means of an embedded camera, implanting the real-time video as a texture map into the three-dimensional model at the third scale, and obtaining a virtual scene through real-time rendering;
in the texture mapping process, projection parameters of an embedded camera in the real-time video are calculated, and after coordinate mapping is carried out on textures of the real-time video based on the projection parameters, the real-time video is implanted into a three-dimensional model.
6. A multi-source heterogeneous big data fusion method for constructing a digital twin base according to claim 5, wherein:
the coordinate mapping of the texture of the real-time video further comprises:
collecting an accurate image in the virtual scene as a camera positioning guide image, and extracting original feature points in the camera positioning guide image by utilizing a feature point recognition algorithm;
screening the original feature points: an original feature point is judged to be a standby feature point if it has a fixed position in the current virtual scene and its feature significance meets a preset requirement;
binding the standby characteristic points to the corresponding position space of the virtual scene, and realizing coordinate mapping of the real-time video based on the association relation between the standby characteristic points and the corresponding position space.
7. A multi-source heterogeneous big data fusion method for constructing a digital twin base according to claim 6, wherein:
the number of the standby feature points is greater than a set threshold under any view angle in the virtual scene.
8. A multi-source heterogeneous big data fusion method for constructing a digital twin base according to claim 7, wherein:
the feature significance is calculated based on the difference in image feature values between the standby feature point and the surrounding pixel points;
and the image feature value difference comprises the difference when the virtual scene is affected by one or more of illumination, brightness and video acquisition resolution.
9. A multi-source heterogeneous big data fusion method for constructing a digital twin base according to claim 8, wherein:
and binding the positioning guide image serving as a hidden texture map with the three-dimensional model of the virtual scene.
10. A multi-source heterogeneous big data fusion system for constructing a digital twin base, characterized in that the system is used for realizing the multi-source heterogeneous big data fusion method for constructing a digital twin base as set forth in any one of claims 1 to 9; and
the system comprises a dividing module, a constructing module and a fusing module; wherein:
the dividing module is used for determining, based on the three-dimensional digital twin scene, a first scale, a second scale and a third scale to be acquired;
the construction module is used for realizing scene construction at the first scale by fusing map and satellite data, realizing scene construction at the second scale by fusing laser point clouds and oblique images, and realizing scene construction at the third scale by overlaying feature-point texture images through an objectified modeling method;
and the fusion module is used for stacking the three-dimensional scenes at the first scale, the second scale and the third scale layer by layer to realize seamless fusion of the visual structure in the three-dimensional digital twin scene.
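As an illustrative aside, the standby-feature-point screening described in claims 6 to 8 can be sketched as follows, using raw pixel intensity as the image feature value and a preset significance threshold. The function names and the specific significance measure are assumptions for illustration, not the patent's implementation:

```python
import numpy as np

def feature_significance(img, r, c, radius=1):
    """Significance of a candidate point: mean absolute difference
    between its intensity and the surrounding pixels in a small
    neighborhood (one simple instance of an image feature value
    difference)."""
    patch = img[max(r - radius, 0):r + radius + 1,
                max(c - radius, 0):c + radius + 1]
    return float(np.abs(patch - img[r, c]).sum() / (patch.size - 1))

def select_standby_points(img, candidates, threshold):
    """Keep candidate points whose feature significance exceeds the
    preset threshold; these become the standby feature points that are
    bound to the corresponding positions of the virtual scene."""
    return [(r, c) for r, c in candidates
            if feature_significance(img, r, c) > threshold]
```

In practice the candidates would come from a feature point recognition algorithm run on the camera positioning guide image, and the retained points would then be bound to the scene's coordinate space for video coordinate mapping.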
CN202310607507.2A 2023-05-26 2023-05-26 Big data fusion method and system for constructing digital twin base Pending CN116778285A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310607507.2A CN116778285A (en) 2023-05-26 2023-05-26 Big data fusion method and system for constructing digital twin base

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310607507.2A CN116778285A (en) 2023-05-26 2023-05-26 Big data fusion method and system for constructing digital twin base

Publications (1)

Publication Number Publication Date
CN116778285A true CN116778285A (en) 2023-09-19

Family

ID=87992162

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310607507.2A Pending CN116778285A (en) 2023-05-26 2023-05-26 Big data fusion method and system for constructing digital twin base

Country Status (1)

Country Link
CN (1) CN116778285A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117475002A (en) * 2023-12-27 2024-01-30 青岛亿联建设集团股份有限公司 Building inclination measuring method based on laser scanning technology
CN117893648A (en) * 2024-01-23 2024-04-16 北京当境科技有限责任公司 Method and system for setting up animation interaction based on three-dimensional scene


Similar Documents

Publication Publication Date Title
CN109829022B (en) Internet map service system fusing monitoring video information and construction method
US9024947B2 (en) Rendering and navigating photographic panoramas with depth information in a geographic information system
US9424373B2 (en) Site modeling using image data fusion
CN112053446A (en) Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS
KR102176837B1 (en) System and method for fast rendering and editing 3d images in web browser
CN116778285A (en) Big data fusion method and system for constructing digital twin base
CN104766366A (en) Method for establishing three-dimensional virtual reality demonstration
CN110660125B (en) Three-dimensional modeling device for power distribution network system
Shan et al. Auxiliary use and detail optimization of computer VR technology in landscape design
Lu et al. Design and implementation of virtual interactive scene based on unity 3D
Toschi et al. Geospatial data processing for 3D city model generation, management and visualization
KR20100040328A (en) Geospatial data system for selectively retrieving and displaying geospatial texture data in successive additive layers of resolution and related methods
CN116502317B (en) Water conservancy and hydropower engineering multisource data fusion method and terminal equipment
CN114756937A (en) Visualization system and method based on UE4 engine and Cesium framework
KR20100047889A (en) Geospatial data system for selectively retrieving and displaying geospatial texture data based upon user-selected point-of-view and related methods
Rechichi Chimera: a BIM+ GIS system for cultural heritage
Fu et al. [Retracted] 3D City Online Visualization and Cluster Architecture for Digital City
Shahabi et al. Geodec: Enabling geospatial decision making
Zhang et al. Research on the Construction Method and Key Technologies of Digital Twin Base for Transmission and Transformation Main Equipment Based on the Power Grid Resource Business Platform
Qing et al. Research on Application of 3D Laser Point Cloud Technology in 3D Geographic Location Information Modeling of Electric Power
Chio et al. The establishment of 3D LOD2 objectivization building models based on data fusion
Pan et al. Research on Key Technologies of Real Scene 3D Cloud Service Platform for Digital Twin Cities
CN116302579B (en) Space-time big data efficient loading rendering method and system for Web end
Wei Research on Smart City Platform Based on 3D Video Fusion
Wang et al. 3D Reconstruction and Rendering Models in Urban Architectural Design Using Kalman Filter Correction Algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination