CN110955978A - Method for realizing three-dimensional visual multi-service multi-terminal fusion - Google Patents

Method for realizing three-dimensional visual multi-service multi-terminal fusion

Info

Publication number
CN110955978A
CN110955978A
Authority
CN
China
Prior art keywords
service
data
scene
layer
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911218665.9A
Other languages
Chinese (zh)
Inventor
李�泳
王洪松
王海湖
李义琛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DMS Corp.
Original Assignee
Beijing Dameisheng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dameisheng Technology Co ltd filed Critical Beijing Dameisheng Technology Co ltd
Priority to CN201911218665.9A priority Critical patent/CN110955978A/en
Publication of CN110955978A publication Critical patent/CN110955978A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/24: Indexing scheme involving graphical user interfaces [GUIs]
    • G06T2210/00: Indexing scheme for image generation or computer graphics
    • G06T2210/61: Scene description

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a method for realizing three-dimensional visual multi-service multi-terminal fusion. A request layer defines at least one service scenario based on a user-input request containing search parameters corresponding to the desired information, and, based on that scenario, acquires at least one item of service data corresponding to it and stored in a first storage layer. The request layer transmits the service scenario and/or the service data to a processing layer, which acquires at least one 3D visualization model corresponding to the scenario from the first storage layer based on the scenario and the service data. The processing layer generates 3D visual reproduction scene information by matching the 3D visualization model with the service data, and transmits that information to a presentation layer, which creates a 3D visual entity scene based on it and can display that scene.

Description

Method for realizing three-dimensional visual multi-service multi-terminal fusion
Technical Field
The invention relates to the technical field of virtual simulation, in particular to a method for realizing three-dimensional visual multi-service multi-terminal fusion.
Background
Virtual simulation, also called virtual reality or simulation technology, uses one virtual system to imitate another, real system. Three-dimensional space modelling is the foundational work and core technology of three-dimensional visualization; its essence is to convert related data, structures, or functions into a three-dimensional model that can be displayed intuitively. As science and technology develop, visualized scenes are increasingly applied to specific settings that must be predictable, quantifiable, evaluable, and continuously optimized through interaction with data. In smart-city construction, for example, both traffic-scenario prediction and the building of geospatial public-information platforms require large amounts of heterogeneous data to interact with existing scenes. Most existing technologies, however, fail to enable interaction of multi-service data with multiple terminals.
For example, Chinese patent CN105335821A discloses a centralized method for merging multiple system flows, comprising: S1, designing a task form on a service-process management platform based on the Puyuan BPS process engine and FLEX technology; S2, designing a business process with a visual process designer; S3, examining and approving the task form according to the business process with a form engine consisting of an Excel-like visual form designer and an application interface; and S4, displaying system data and completed task-form results to the user on a mobile terminal through the report engine, dynamically checking reports, and processing process-approval transactions in time. The method addresses urgent needs of enterprise process management and resource integration, reducing operational complexity for users, transmitting information promptly, improving office efficiency, responding quickly to business requirements, and lowering enterprise operation and maintenance costs.
Chinese patent No. CN105740339A discloses a civil administration big data fusion management system, which includes: the civil administration data acquisition module captures civil administration data from a civil administration business system, a relevant unit business system and a third-party platform; the civil administration data storage module establishes a plurality of databases and a multi-level data resource catalog; the civil administration data mining module analyzes the civil administration data; the civil affair data analysis module forms a plurality of civil affair analysis themes according to preset service types and service flows, and fuses the civil affair data in the databases with the civil affair analysis themes and the multilevel data resource directories; and the civil administration data management and presentation module generates a civil administration data management interface according to the fusion result and performs visual presentation. According to the method, the civil administration data of each civil administration platform and the third-party website are collected, the civil administration data can be acquired adaptively and in a multi-channel manner, data fusion based on an information resource catalog is formed, unified analysis is established for various civil administration services with different properties, and expandability is maintained.
Chinese patent CN106295983A discloses a method and system for visual statistical analysis of power-marketing data, addressing the field's inability to flexibly visualize, analyze, and mine the mass data of multi-type data sources during statistical analysis. The method comprises: 1. analyzing multi-channel mass data using big-data technology; 2. realizing visual design of the data source and the data-analysis process; 3. integrating analysis results by combining data statistics with data mining; 4. separating the design and application systems through the release of modular functions. The method is no longer limited to particular data-source types, provides mass-data processing capacity, enhances the user-defined, flexibly extensible power-marketing statistical-analysis functions, offers data analysis, data-mining modelling, and prediction, shortens the software-development cycle, further improves the demand-response capability of power-marketing data statistical analysis, and strongly supports value-added application of service data.
In the prior art, combinations of service data and service scenarios are mutually independent, making fusion of multiple service data and service scenarios difficult to realize. During such fusion, the sheer abundance of service data reduces the precision of the reproduced model.
Disclosure of Invention
To address the defects of the prior art, the invention provides a method for realizing three-dimensional visual multi-service multi-terminal fusion, comprising the following steps: a request layer defines at least one service scenario based on a user-input request containing search parameters corresponding to the desired information; a processing layer acquires at least one item of service data corresponding to the scenario and, based on the scenario and the service data, acquires at least one 3D visualization model from a second storage layer; the processing layer matches the 3D visualization model, the service data, and the scenario and generates 3D visual reproduction scene information through data fusion/data integration; the processing layer transmits that information to a presentation layer, which creates a 3D application scene capable of displaying at least one item of service data. The service data comprises at least static data stored in a first storage layer and dynamic data acquired from third parties. When the processing layer connects to a third party, it judges the legality of the dynamic data in advance based on the client's desired information, filters the dynamic data, and acquires the filtered data according to a configuration request, so that the processing layer obtains service data associated with the user's desired information while preventing illegal data from intruding and degrading the reliability of the 3D application scene.
According to a preferred embodiment, the processing layer includes an API configuration module provided with several API interfaces. The API configuration module responds to requests for business data and/or 3D visualization models and generates a matching request as follows: within the subscription-duration window in which the API configuration module receives requests, each API interface counts the number of service-data requests and/or 3D-visualization-model requests sent by the corresponding terminal; when the window elapses, the API configuration module encapsulates the corresponding service data and 3D visualization models in a data structure inside an API request packet and generates the matching request.
According to a preferred embodiment, the processing layer comprises an API service module that responds to the user's request and verifies its validity; once the request passes validity verification, the API service module generates verification-passing information and transmits it to the request layer. The request layer defines at least one service scenario based on that information and, based on the scenario, acquires at least one item of service data corresponding to it and stored in the first storage layer. The API service module supports at least one of a direct-access service mode and a user-defined task mode.
According to a preferred embodiment, the 3D visualization model is stored in the first storage layer as structured data and/or unstructured data, wherein, in the case that the processing layer has acquired the service scenario and/or the service data, the processing layer is capable of loading at least one 3D visualization model corresponding to the service scenario and/or the service data via at least one network terminal based on the service scenario and/or the service data.
According to a preferred embodiment, the network terminal comprises at least one of a Windows desktop terminal, a web page terminal and a mobile terminal; the processing layer can load structured data and/or unstructured data of a 3D visualization model corresponding to the service scene and/or the service data through a Windows desktop terminal based on the service scene and/or the service data and can load the structured data and/or the unstructured data into desktop 3D visualization scene information; and/or the processing layer can load structured data and/or unstructured data of a 3D visualization model corresponding to the business scene and/or the business data through a webpage terminal based on the business scene and/or the business data and can load the structured data and/or the unstructured data into webpage 3D visualization scene information; and/or the processing layer can load structured data and/or unstructured data of a 3D visualization model corresponding to the business scene and/or the business data through a mobile terminal based on the business scene and/or the business data and can load the structured data and/or the unstructured data into 3D visualization scene information of the mobile terminal.
According to a preferred embodiment, the request layer includes a service-scenario encoder. Under the authorization of the API service module, the encoder acquires user-defined service scenarios through an external interface, scenarios of the user's environment through a scenario-identification module, and network virtual service scenarios through a network interface, and sends them to the first storage layer to form a service-scenario database.
According to a preferred embodiment, the presentation layer comprises a 3D rendering configuration module and a graphical user interface module, the 3D rendering configuration module rendering the 3D visual entity scene based on the 3D visual rendering scene information, the graphical user interface module rendering the 3D visual entity scene in the form of a 3D image or video for presentation to the user.
According to a preferred embodiment, the invention also discloses a system for realizing three-dimensional visual multi-service multi-terminal fusion, comprising a request layer, a processing layer, a presentation layer, a first storage layer, and/or a second storage layer. The request layer defines at least one service scenario based on a user-input request containing search parameters corresponding to the desired information. The processing layer acquires at least one item of service data corresponding to the scenario and, based on the scenario and the service data, acquires at least one 3D visualization model from the second storage layer; it then matches the model, the service data, and the scenario and generates 3D visual reproduction scene information through data fusion/data integration. The processing layer transmits that information to the presentation layer, which creates a 3D application scene capable of displaying at least one item of service data. The service data comprises at least static data stored in the first storage layer and dynamic data acquired from third parties; when the processing layer connects to a third party, it judges the legality of the dynamic data in advance based on the client's desired information, filters the dynamic data, and acquires the filtered data according to a configuration request, so that the processing layer obtains service data associated with the user's desired information while preventing illegal data from intruding and degrading the reliability of the 3D application scene.
According to a preferred embodiment, the processing layer includes an API configuration module provided with several API interfaces. The API configuration module responds to requests for business data and/or 3D visualization models and generates a matching request as follows: within the subscription-duration window in which the API configuration module receives requests, each API interface counts the number of service-data requests and/or 3D-visualization-model requests sent by the corresponding terminal; when the window elapses, the API configuration module encapsulates the corresponding service data and 3D visualization models in a data structure inside an API request packet and generates the matching request.
According to a preferred embodiment, the processing layer comprises an API service module that responds to the user's request and verifies its validity; once the request passes validity verification, the API service module generates verification-passing information and transmits it to the request layer. The request layer defines at least one service scenario based on that information and, based on the scenario, acquires at least one item of service data corresponding to it and stored in the first storage layer. The API service module supports at least one of a direct-access service mode and a user-defined task mode.
The invention provides a method for realizing three-dimensional visual multi-service multi-terminal fusion that transmits different data to different terminals. The invention can judge each terminal's data in advance and apply it to the construction of a 3D reproduction scene when it is associated with the client's request. Particular advantages include at least:
(1) Traditional visual display tools comprise chart tools such as reports, tables, bar charts, and column charts, with which the influence of data on a business scenario is difficult to observe intuitively, reducing decision accuracy, and data requiring mining is difficult to display. With the present method, information from multiple reports, tables, and charts can be superimposed onto the corresponding business scenario to form a three-dimensional visual entity scene, so that a decision maker can directly and easily perceive the influence of the business data on the business scenario and make more accurate decisions.
(2) Given the abundance of service data, the method aims to avoid loss of service data and improve the processing capacity of the processing layer. When acquiring service data and 3D visualization models, the API configuration module matches corresponding occupancy weight coefficients and, based on them, calculates occupancy weight values of the API requests on the corresponding API interfaces, resolving the data congestion caused by a large volume of data-request traffic and effectively improving the feedback rate of the whole processing layer and the configuration rate of service data and 3D visualization models.
(3) The API service module can prevent the intrusion of illegal requests. Among numerous data, illegal request data directly causes mismatching between the service data and the 3D visualization model, directly reducing the accuracy of the final 3D visual entity scene and the processing efficiency of the request layer 1.
Drawings
Fig. 1 is a schematic diagram of a method for implementing three-dimensional visualization multi-service multi-terminal convergence in a preferred embodiment of the present invention; and
fig. 2 is a schematic diagram of a system for implementing three-dimensional visualization multi-service multi-terminal convergence in a preferred embodiment of the present invention.
List of reference numerals
1: request layer 3: revealing layer
2: treatment layer 4: first storage layer
5: a second storage layer
Detailed Description
The invention is described in detail below with reference to Figures 1 and 2.
In the description of the present invention, the terms "first", "second", and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying the number of technical features indicated. Thus, features defined as "first", "second", or "third" may explicitly or implicitly include one or more such features. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the description of the present invention, the term "three-dimensional visualization" means an IT technology that expresses a visual scene in three-dimensional space, characterized as stereoscopic, intuitive, and realistic. Using this technology, scenes, objects, facilities, and equipment in production and daily life can be represented (reproduced) in a computer at 1:1 scale. Meanwhile, certain special physical devices can give people a real sense of immersion. Integrating the technology with other services visualizes and enables those services, making them easier to understand and process. For example, simulating body structures with three-dimensional visualization may make them easier for a physician to master.
In the description of the present invention, the term "multi-service" is a general term for the work of each link in the production process. In the process-plant industry, for example, there are simulation services, equipment-management services, security-management services, fire-management services, and the like.
In the description of the present invention, the term "multi-terminal" means the hardware devices currently supported for informatization (IT), which include, but are not limited to: computer clients, web terminals, mobile terminals, VR devices, and AR devices.
Example 1
This embodiment discloses a method for realizing three-dimensional visual multi-service multi-terminal fusion; where no conflict or contradiction arises, all or part of the preferred embodiments of other embodiments may supplement this embodiment. Preferably, the method may be implemented by the system of the present invention and/or other alternative modules; for example, by the various modules in the system of the present invention. In the present invention, the abbreviation 3D stands for 3-Dimensions, i.e. three-dimensional.
In the present invention, the service scenario includes, but is not limited to, at least one of an oil field, a chemical plant, a power plant, a nuclear power plant, a warehouse, an industrial pipeline, power supply, oil transportation, and the like. The service data includes dynamic and static data corresponding to the scenario. Dynamic data refers to data generated at a certain frequency (milliseconds, seconds, minutes, etc.); it generally comes from information acquired in real time by the Internet of Things, and its volume is very large over a short time, although a large part of it is not needed by the user. It is therefore necessary to filter at the server end according to the range of the scene and load only content related to the current scene (e.g., data for detecting water-pump vibration and noise), while other service data can be discarded or otherwise processed. The 3D visualization model can comprise static data of the corresponding scene and need only be obtained according to certain parameters in the current scene; triggers include clicks, camera movement, and so on. Examples include pipes, equipment, buildings, and structures in a factory, together with the associated design data, procurement data, and instructions for use.
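The server-side filtering described above can be sketched as follows. This is an illustrative sketch only; the record layout and tag names are assumptions, not part of the patent.

```python
# Hypothetical server-side filter that keeps only the dynamic data relevant
# to the currently loaded scene and lets other readings be discarded.
def filter_dynamic_data(readings, scene_tags):
    """Keep sensor readings whose tag belongs to the active scene."""
    return [r for r in readings if r["tag"] in scene_tags]

readings = [
    {"tag": "pump.vibration", "value": 0.8},
    {"tag": "pump.noise", "value": 62.0},
    {"tag": "warehouse.humidity", "value": 40.0},  # outside the scene
]
scene_tags = {"pump.vibration", "pump.noise"}
kept = filter_dynamic_data(readings, scene_tags)
# kept holds the two pump readings; the warehouse reading is dropped
```

In practice such a filter would run on the processing layer before any data reach a terminal, so a limited device never receives readings outside its displayed scene.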
According to an alternative embodiment, as shown in fig. 1, this embodiment provides a method for implementing three-dimensional visualization multi-service multi-terminal convergence. The method comprises the following steps:
s1: the request layer 1 defines at least one service scenario based on a request input by a user and including search parameters corresponding to desired information;
s2: acquiring at least one service data of the service scene by the processing layer 2 based on the service scene;
s3: the processing layer 2 acquires at least one 3D visualization model corresponding to the service scenario from the second storage layer 5 based on the scenario and the service data, and matches the model, the service data, and the scenario to generate 3D visual reproduction scene information through data fusion/data integration;
s4: the processing layer 2 transmits the 3D visually reproduced scene information to the presentation layer 3. The presentation layer 3 may create a 3D application scene capable of displaying at least one service data based on the 3D visualization reproduction scene information.
Preferably, the business data comprises at least static data stored in the first storage layer 4 and dynamic data obtained from third parties. For example, business data may be entered manually (static data), or input over wireless communication by external devices such as IoT devices, sensors, and monitors of various types (dynamic data). Preferably, in step S3, when the processing layer 2 establishes a connection with a third party, it judges the validity of the dynamic data in advance based on the client's desired information, filters the dynamic data, and acquires the filtered data according to the configuration request, so that the processing layer 2 obtains service data associated with the user's desired information while preventing intrusion of illegal data that would reduce the reliability of the 3D application scenario.
The method obtains the corresponding service data and service scenario based on the user's request, and can associate, match, and present them to form a three-dimensional visual multi-service multi-terminal 3D application scene, so that necessary decisions can be made effectively from that scene. For example, in factory operation the quality of a product is inseparable from the operation of the safety-patrol data (HSE) and training-service data (OTS) pipelines. At present, safety-patrol data (HSE) and training-service data (OTS) are independent of the pipeline scene: a field worker can only briefly analyze the HSE data but cannot correlate it with the pipelines of the plant operation, and can only briefly evaluate a worker's operation from the OTS data but cannot correlate the HSE data with the production line of the plant operation. When defining a service scenario from a client request or acquiring service data according to the scenario, one may (1) associate by keyword, the most common association method, e.g. by ID or name; or (2) associate by correlation, letting the system associate business data it considers related according to the keywords, as a search engine such as Baidu, Google, and/or Sogou does. The correlation algorithm may employ the Apriori or Eclat algorithm.
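Association method (1) above can be sketched minimally as follows. The record layout, key names, and sample values are hypothetical, chosen only to illustrate keyword association by ID or name.

```python
# Minimal sketch of keyword-based association: a record is associated with
# the scene if it shares any key value (here: id or name) with it.
def associate_by_keyword(scene, records, keys=("id", "name")):
    """Return records sharing any key value with the scene."""
    wanted = {scene.get(k) for k in keys if scene.get(k) is not None}
    return [r for r in records if any(r.get(k) in wanted for k in keys)]

scene = {"id": "P-101", "name": "feed pump"}
records = [
    {"id": "P-101", "kind": "HSE inspection"},      # matches by id
    {"id": "V-205", "kind": "valve log"},           # no shared key
    {"name": "feed pump", "kind": "OTS training"},  # matches by name
]
matches = associate_by_keyword(scene, records)
```

Correlation-based association (method (2)) would extend this by scoring co-occurrence with an algorithm such as Apriori or Eclat rather than requiring an exact key match.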
Moreover, the invention can use the various service data to optimize the service data according to the 3D application scene and replan the service scenario accordingly. Conventional visualization tools include chart tools such as reports, tables, bar charts, and column charts, with which data requiring mining is difficult to display. With the present method, information from multiple reports, tables, and charts can be superimposed onto the corresponding business scenario to form a three-dimensional visual entity scene, so that a decision maker can directly and easily perceive the influence of the business data on the business scenario and make more accurate decisions.
Service data in a factory is complicated and dynamic (even a small finished-goods warehouse may have more than ten cameras, temperature sensors, humidity sensors, flame detectors, and other detection devices); organizing these data effectively so that they can be displayed on a variety of terminals (such as a touch-screen mobile phone with a limited display area) constitutes a technical challenge. In the invention, the processing layer 2 performs relevance analysis on the current three-dimensional scene and the service data according to the user's three-dimensional display requirements, and generates 3D visual reproduction scene information by data fusion/data integration based on that analysis, so the information contains exactly the user's required information as established by the relevance analysis. Users can therefore load, on demand and on various terminals, the local 3D visualization model and the business data and/or business scenario associated with it and with the user's request, without loading all three-dimensional and business data of a factory or workshop. For example, on a touch-screen mobile phone or a Java virtual machine with limited hardware performance, the technical scheme lets a user complete a 3D visual inspection by displaying a limited-area scene. In the invention, construction of the 3D visual reproduction scene may employ spatial interpolation, inverse-distance weighting, Kriging interpolation, or radial-basis-function interpolation algorithms.
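Of the interpolation methods listed above, inverse-distance weighting is the simplest to sketch. The sample coordinates and values below are invented for illustration; the patent does not specify a particular formulation.

```python
# Sketch of inverse-distance weighting (IDW): estimate a value at an
# unsampled point as the distance-weighted average of nearby samples.
import math

def idw(points, target, power=2):
    """points: iterable of (x, y, value); returns the IDW estimate at target."""
    num = den = 0.0
    for x, y, v in points:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0:
            return v  # target coincides with a sample point
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

samples = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0)]
mid = idw(samples, (0.5, 0.0))  # equidistant samples, so the plain average
```

Kriging and radial-basis-function interpolation follow the same input/output shape but replace the weight formula with statistically fitted weights.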
Example 2
The matching request is generated as follows. ① Within the subscription duration range during which the API configuration module receives matching requests, each API interface counts the number of requests for service data and/or 3D visualization models sent by the corresponding terminal. ② When the end of the subscription duration range is reached, the API configuration module encapsulates the corresponding service data and 3D visualization models in a data structure within an API request packet and generates the matching request. The subscription duration may be, for example, 50 ms, 40 ms or even 10 ms, which is sufficient for the API configuration module to serve the data transmission of each terminal. When acquiring the service data and 3D visualization models, the API configuration module matches corresponding occupation weight coefficients and, based on these coefficients, calculates the occupation weight value of each API request on its API interface; allocating interface capacity according to these weight values avoids congestion of the API interfaces and keeps the data transmission of every terminal efficient.
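The counting-within-a-window and weight-per-interface steps above can be sketched as follows. Only the idea (tally requests per interface during one subscription window, then weight each interface's share by a per-request-type occupation coefficient) comes from the text; the class layout, coefficient values and names are illustrative assumptions:

```python
from collections import Counter

class ApiConfigModule:
    """Sketch of one subscription window of the API configuration module:
    tally service-data and model requests per API interface, then compute
    each interface's occupation weight from per-type coefficients."""

    def __init__(self, coefficients):
        # e.g. a 3D model request may be weighted heavier than a data request
        self.coefficients = coefficients
        self.counts = Counter()

    def record(self, interface, kind):
        """Called once per request arriving within the window."""
        self.counts[(interface, kind)] += 1

    def occupation_weights(self):
        """Normalized occupation weight value per API interface."""
        raw = {}
        for (interface, kind), n in self.counts.items():
            raw[interface] = raw.get(interface, 0.0) + n * self.coefficients[kind]
        total = sum(raw.values()) or 1.0
        return {iface: v / total for iface, v in raw.items()}

cfg = ApiConfigModule({"service_data": 1.0, "model": 4.0})
cfg.record("api_1", "service_data")
cfg.record("api_1", "service_data")
cfg.record("api_2", "model")
print(cfg.occupation_weights())  # api_2 dominates despite fewer requests
```

At the end of each window the weights could drive how the corresponding data and models are packed into the API request packet, before the counters are reset for the next window.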
Among numerous data, illegal request data directly causes a mismatch between the service data and the 3D visualization model, which directly reduces the accuracy of the final 3D visualization entity scene and also lowers the processing efficiency of the request layer 1. To solve this technical problem, the processing layer preferably includes an API service module. The API service module responds to the user's request and verifies its legality; after the request passes the legality verification, the API service module generates legality-verification-passed information and transmits it to the request layer 1. The request layer 1 defines at least one service scenario based on the verification-passed information and, based on that scenario, acquires at least one piece of service data corresponding to it and stored in the first storage layer 4. When the request layer 1 acquires the request, the API service module provides at least one of a direct-access service mode and a user-defined task mode.
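In outline, the legality verification could look like the sketch below. The two access modes are the ones named in the text; the token check, field names and return shape are assumptions, since the patent does not specify the verification mechanism:

```python
def verify_request(request, authorized_tokens):
    """Hypothetical legality check performed by an API service module
    before a request reaches the request layer: an unknown caller, an
    unsupported access mode or malformed search parameters are rejected
    so they cannot mismatch business data against the 3D model."""
    if request.get("token") not in authorized_tokens:
        return None  # illegal: unauthenticated caller
    mode = request.get("mode")
    if mode not in ("direct_access", "user_defined_task"):
        return None  # illegal: unsupported access mode
    if not isinstance(request.get("search_params"), dict):
        return None  # illegal: malformed search parameters
    # Legality-verification-passed information handed to the request layer.
    return {"verified": True, "mode": mode, "search_params": request["search_params"]}

ok = verify_request(
    {"token": "t1", "mode": "direct_access", "search_params": {"area": "warehouse"}},
    authorized_tokens={"t1"},
)
bad = verify_request({"token": "x"}, authorized_tokens={"t1"})
```

Rejecting illegal requests at this boundary spares the request layer from defining service scenarios for data that could never be matched.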
Preferably, the 3D visualization model is stored in the first storage layer 4 as structured data and/or unstructured data. When the processing layer 2 acquires the service scene and/or the service data, it can load, through at least one network terminal, at least one 3D visualization model corresponding to that scene and/or data. As described above, the processing layer 2 performs relevance analysis on the current three-dimensional scene and the service data according to the user's three-dimensional display requirement and generates the 3D visual reproduction scene information by data fusion/data integration, so that the reproduction scene information contains exactly the information the user effectively requires. The user can therefore load on demand, on a variety of terminals, only the local 3D visualization model and the business data and/or business scene associated with it and with the user request, instead of all the three-dimensional data and business data of a factory or workshop.
Preferably, the network terminal includes at least one of a Windows desktop terminal, a web page terminal and a mobile terminal. Based on the service scene and/or the service data, the processing layer 2 can load the structured data and/or unstructured data of the corresponding 3D visualization model through the Windows desktop terminal into desktop 3D visualization scene information; and/or through the webpage terminal into webpage 3D visualization scene information; and/or through the mobile terminal into mobile-terminal 3D visualization scene information. Preferably, since different terminals have different performance, the model granularity and fineness are processed according to the characteristics of each terminal so as to ensure high efficiency on every terminal: on a desktop program the model granularity may be larger and the fineness higher, while on the mobile terminal a relatively small granularity and a coarser fineness are required.
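The terminal-dependent granularity/fineness choice described above amounts to a per-terminal profile lookup. The sketch below illustrates the idea; the profile names and values are hypothetical, not taken from the patent:

```python
TERMINAL_PROFILES = {
    # Hypothetical granularity/fineness settings per terminal class:
    # desktop machines can afford larger models at higher detail,
    # mobile terminals get smaller granularity and coarser fineness.
    "windows_desktop": {"granularity": "large", "fineness": "high"},
    "web": {"granularity": "medium", "fineness": "medium"},
    "mobile": {"granularity": "small", "fineness": "coarse"},
}

def select_model_profile(terminal_type):
    """Pick the model granularity/fineness for the requesting terminal,
    falling back to the most conservative (mobile) profile when the
    terminal type is unknown."""
    return TERMINAL_PROFILES.get(terminal_type, TERMINAL_PROFILES["mobile"])

print(select_model_profile("mobile"))
```

In practice the profile would then drive which level-of-detail variant of the stored 3D visualization model the processing layer sends to the terminal.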
Preferably, the request layer 1 includes a service scene encoding processor. Under the authorization of the API service module, the service scene encoding processor acquires the user-defined service scene through an external interface, the service scene of the environment in which the user is located through a scene recognition module, and the network virtual service scene through a network interface, and can then send these service scenes to the first storage layer to form a service scene database.
Preferably, the presentation layer 3 includes a 3D rendition configuration module and a graphical user interface module. The presentation layer may run on multiple terminals, for example a smartphone, an iPad, a large display screen, or any other device with a graphical user interface. The 3D rendition configuration module reproduces the 3D visual entity scene based on the 3D visual reproduction scene information, and the graphical user interface module presents the 3D visual entity scene to the user as a 3D image or video. For example, the 3D rendition configuration module constructs at least one object attribute associated with the business scene based on the 3D visual reproduction scene information, and the graphical user interface module displays the attributes of each object for the user. Furthermore, the 3D rendition configuration module is configured such that, when the attribute of one object is modified, at least one further object related to that object within the 3D visualization entity scene is modified as well. In decision making, modifying one attribute of the 3D visualization model thus saves computation in the presentation layer and improves its efficiency. Preferably, the graphical user interface module can be configured to display the 3D visual entity scene corresponding to the business scene on a web browser or a display screen.
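The attribute-propagation behaviour just described — modifying one object's attribute also updates the objects related to it, so the whole scene need not be recomputed — can be sketched as a small scene graph. The object names and graph structure below are illustrative assumptions:

```python
class SceneGraph:
    """Sketch of the rendition-configuration behaviour: when one object's
    attribute is modified, objects registered as related to it are updated
    too, and only those objects are recomputed."""

    def __init__(self):
        self.attrs = {}    # object id -> {attribute: value}
        self.related = {}  # object id -> ids of dependent objects

    def add(self, obj_id, attrs, related=()):
        self.attrs[obj_id] = dict(attrs)
        self.related[obj_id] = list(related)

    def modify(self, obj_id, attr, value):
        """Set attr on obj_id and propagate to related objects."""
        touched = []
        stack = [obj_id]
        while stack:
            oid = stack.pop()
            if oid in touched:
                continue  # already updated; avoids cycles
            self.attrs[oid][attr] = value
            touched.append(oid)
            stack.extend(self.related.get(oid, []))
        return touched  # only these objects were recomputed

g = SceneGraph()
g.add("conveyor", {"status": "ok"}, related=["alarm_lamp"])
g.add("alarm_lamp", {"status": "ok"})
g.add("camera_7", {"status": "ok"})  # unrelated; untouched by the edit
print(g.modify("conveyor", "status", "fault"))  # -> ['conveyor', 'alarm_lamp']
```

Because the unrelated objects are never visited, the presentation layer pays only for the objects actually affected by the edit.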
Example 3
This embodiment discloses a system for realizing three-dimensional visual multi-service multi-terminal fusion, which is suitable for executing all the method steps recorded in the invention to achieve the expected technical effect. Preferred implementations described in the other embodiments can supplement this embodiment in whole and/or in part, provided no conflict or inconsistency results.
According to an alternative embodiment, as shown in fig. 1, this embodiment is a system for implementing three-dimensional visualization multi-service multi-terminal convergence. The system comprises a request layer 1, a processing layer 2, a presentation layer 3, a first storage layer 4 and/or a second storage layer 5. The request layer 1 defines at least one service scenario based on a request input by a user that includes search parameters corresponding to the desired information. The processing layer 2 acquires at least one piece of service data corresponding to the service scenario based on that scenario, and obtains at least one 3D visualization model corresponding to the business scenario from the second storage layer 5 based on the business scenario and the business data. By matching, the processing layer 2 can generate 3D visualization reproduction scene information from the 3D visualization model, the business data and the scene in a data fusion/data integration manner, and transmits this information to the presentation layer 3. The presentation layer 3 creates a 3D application scene capable of displaying at least one piece of service data based on the 3D visualization reproduction scene information.
Preferably, the business data comprise at least static data stored in the first storage layer 4 and dynamic data obtained from third parties. When the processing layer 2 establishes a connection with a third party, it judges the legality of the dynamic data in advance based on the client's expected information, filters the dynamic data, and obtains the filtered dynamic data according to a configuration request. The processing layer 2 thus obtains only the service data associated with the user's expected information and prevents illegal data from intruding and reducing the reliability of the 3D application scene.
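A minimal sketch of pre-filtering third-party dynamic data against the user's expected information follows; the record layout and field names are assumptions for illustration only:

```python
def filter_dynamic_data(records, expected_fields):
    """Hypothetical pre-filter for third-party dynamic data: keep only
    records that carry every field named in the user's expected
    information, dropping malformed or illegal records before they
    reach the 3D application scene."""
    return [r for r in records if expected_fields <= r.keys()]

raw = [
    {"sensor": "temp_1", "value": 21.5},
    {"sensor": "temp_2"},     # missing value -> filtered out
    {"bogus": "payload"},     # illegal record -> filtered out
]
print(filter_dynamic_data(raw, {"sensor", "value"}))
```

A production filter would additionally validate value types and ranges, but the principle — reject before fusion rather than after — is the same.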
Because of the great variety of service data, it is desirable to improve the processing capability of the processing layer 2 and to avoid loss of service data. Preferably, the processing layer 2 therefore includes an API configuration module provided with a plurality of API interfaces; the API configuration module responds to matching requests for business data and/or the 3D visualization model and generates the matching request. The matching request is carried out according to the following steps: within the subscription duration range during which the API configuration module receives the matching request, each API interface counts the number of requests for service data and/or 3D visualization models sent by the corresponding terminal; when the end of the subscription duration range is reached, the API configuration module encapsulates the corresponding service data and 3D visualization model in a data structure in the API request packet and generates the matching request.
Preferably, the processing layer includes an API service module that responds to the user's request and verifies its legality. After the user's request passes the legality verification, the API service module generates legality-verification-passed information and transmits it to the request layer 1. The request layer 1 defines at least one service scenario based on the verification-passed information and, based on that scenario, acquires at least one piece of service data corresponding to it and stored in the first storage layer 4. The API service module provides at least one of a direct-access service mode and a user-defined task mode.
The word "module" as used herein describes any type of hardware, software, or combination of hardware and software that is capable of performing the functions associated with the "module".
It should be noted that the above-mentioned embodiments are exemplary, and that those skilled in the art, having benefit of the present disclosure, may devise various arrangements that are within the scope of the present disclosure and that fall within the scope of the invention. It should be understood by those skilled in the art that the present specification and figures are illustrative only and are not limiting upon the claims. The scope of the invention is defined by the claims and their equivalents.

Claims (10)

1. A method for realizing three-dimensional visual multi-service multi-terminal fusion is characterized by comprising the following steps:
the request layer (1) defines at least one service scenario based on a request input by a user and including a search parameter corresponding to desired information;
acquiring, by a processing layer (2), at least one service data corresponding to the service scenario based on the service scenario, and acquiring, by the processing layer (2), at least one 3D visualization model corresponding to a current service scenario from a second storage layer (5) based on the service scenario and the service data, the processing layer (2) being capable of matching and generating 3D visualization recurrence scenario information based on the 3D visualization model, the service data, and the current scenario in a data fusion/data integration manner;
the processing layer (2) transmits the 3D visual reappearing scene information to the showing layer (3), and the showing layer (3) creates a 3D application scene capable of showing at least one service data based on the 3D visual reappearing scene information.
2. The method according to claim 1, characterized in that the processing layer (2) comprises an API configuration module provided with several API interfaces, said API configuration module responding to a request for matching the business data and/or the 3D visualization model,
wherein the matching request is performed according to the following steps:
within a subscription duration range in which the API configuration module receives the matching request, counting the number of requests of the service data and/or the number of requests of the 3D visualization model sent by the corresponding terminal by each API interface within the subscription duration range;
when the subscription duration range is reached, the API configuration module encapsulates the corresponding service data and the 3D visualization model in an API request packet in a data structure and generates the matching request;
the API configuration module matches corresponding occupation weight coefficients when acquiring the service data and the 3D visualization model, and calculates, based on the occupation weight coefficients, the occupation weight values of the API requests on the corresponding API interfaces.
3. The method according to claim 1 or 2, characterized in that the processing layer (2) comprises an API service module which responds to the user's request and verifies the legitimacy of the request,
after the request of the user passes the validity verification, the API service module generates validity verification passing information and transmits the validity verification passing information to the request layer (1);
the request layer (1) defines at least one service scene based on the verification passing information and acquires at least one service data which corresponds to the service scene and is stored in a first storage layer (4) based on the service scene;
wherein the API service module comprises at least one of a direct access service mode and/or a user-defined task mode.
4. The method according to one of the preceding claims, characterized in that the 3D visualization model is stored in the first storage layer (4) as structured data and/or unstructured data,
wherein, when the service scenario and/or the service data is acquired by the processing layer (2), the processing layer (2) can load at least one 3D visualization model corresponding to the service scenario and/or the service data through at least one network terminal based on the service scenario and/or the service data.
5. The method of any of the preceding claims, wherein the network terminal comprises at least one of a Windows desktop terminal, a web page terminal and a mobile terminal;
the processing layer (2) can load structured data and/or unstructured data of a 3D visualization model corresponding to the service scene and/or the service data through a Windows desktop terminal based on the service scene and/or the service data and can load the structured data and/or the unstructured data into desktop 3D visualization scene information; and/or
The processing layer (2) can load structured data and/or unstructured data of a 3D visualization model corresponding to the business scene and/or the business data through a webpage terminal based on the business scene and/or the business data and can load the structured data and/or the unstructured data into webpage 3D visualization scene information; and/or
The processing layer (2) can load structured data and/or unstructured data of a 3D visualization model corresponding to the business scene and/or the business data through a mobile terminal based on the business scene and/or the business data and can load the structured data and/or the unstructured data into 3D visualization scene information of the mobile terminal.
6. The method according to one of the preceding claims, characterized in that the request layer (1) comprises a service scene coding processor,
under the authorization of the API service module, the service scene encoding processor acquires the user-defined service scene through an external interface, the service scene of the environment in which the user is located through a scene identification module, and the network virtual service scene through a network interface, and can then send these service scenes to the first storage layer to form a service scene database.
7. The method according to one of the preceding claims, wherein the presentation layer (3) comprises a 3D rendition configuration module and a graphical user interface module,
the 3D rendering configuration module renders the 3D visual entity scene based on the 3D visual rendering scene information,
the graphical user interface module presents the 3D visual entity scene in the form of a 3D image or video to the user.
8. A system for realizing three-dimensional visual multi-service multi-terminal fusion is characterized by comprising a request layer (1), a processing layer (2), a presentation layer (3), a first storage layer (4) and/or a second storage layer (5);
the request layer (1) defines at least one service scenario based on a user-entered request comprising search parameters corresponding to desired information;
acquiring, by the processing layer (2), at least one service data corresponding to the service scenario based on the service scenario, and acquiring, by the processing layer (2), at least one 3D visualization model corresponding to a current service scenario from a second storage layer (5) based on the service scenario and the service data, the processing layer (2) being capable of matching and generating 3D visualization recurrence scenario information based on the 3D visualization model, the service data and the current scenario in a data fusion/data integration manner;
the processing layer (2) transmits the 3D visual reappearing scene information to the showing layer (3), and the showing layer (3) creates a 3D application scene capable of showing at least one service data based on the 3D visual reappearing scene information.
9. The system for realizing three-dimensional visual multi-service multi-terminal fusion according to claim 8, characterized in that the processing layer (2) comprises an API configuration module provided with several API interfaces, the API configuration module responds to the service data and/or the matching request of the 3D visual model and generates the matching request,
wherein the matching request is performed according to the following steps:
within a subscription duration range in which the API configuration module receives the matching request, counting the number of requests of the service data and/or the number of requests of the 3D visualization model sent by the corresponding terminal by each API interface within the subscription duration range;
and when the subscription duration range is reached, the API configuration module encapsulates the corresponding service data and the 3D visualization model in an API request packet in a data structure and generates the matching request.
10. The system for realizing three-dimensional visualization multi-service multi-terminal convergence according to claim 8 or 9, wherein the processing layer comprises an API service module, the API service module responds to the request of the user and verifies the validity of the request,
after the request of the user passes the validity verification, the API service module generates validity verification passing information and transmits the validity verification passing information to the request layer (1);
the request layer (1) defines at least one service scene based on the verification passing information and acquires at least one service data which corresponds to the service scene and is stored in a first storage layer (4) based on the service scene;
wherein the API service module comprises at least one of a direct access service mode and/or a user-defined task mode.
CN201911218665.9A 2019-12-02 2019-12-02 Method for realizing three-dimensional visual multi-service multi-terminal fusion Pending CN110955978A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911218665.9A CN110955978A (en) 2019-12-02 2019-12-02 Method for realizing three-dimensional visual multi-service multi-terminal fusion

Publications (1)

Publication Number Publication Date
CN110955978A true CN110955978A (en) 2020-04-03

Family

ID=69979530

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911218665.9A Pending CN110955978A (en) 2019-12-02 2019-12-02 Method for realizing three-dimensional visual multi-service multi-terminal fusion

Country Status (1)

Country Link
CN (1) CN110955978A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114081624A (en) * 2021-11-10 2022-02-25 武汉联影智融医疗科技有限公司 Virtual simulation system of surgical robot

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103106680A (en) * 2013-02-16 2013-05-15 赞奇科技发展有限公司 Implementation method for three-dimensional figure render based on cloud computing framework and cloud service system
CN106647586A (en) * 2017-01-20 2017-05-10 重庆邮电大学 Virtual machine room visualization monitoring management system based on B/S architecture and realization method
CN106898047A (en) * 2017-02-24 2017-06-27 朱庆 The adaptive network method for visualizing of oblique model and multivariate model dynamic fusion
CN108388475A (en) * 2018-02-27 2018-08-10 广州联智信息科技有限公司 A kind of method and system based on terminal type provisioning API resource



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200525

Address after: 100193 321, floor 3, building 8, Zhongguancun Software Park, No. 8, Dongbei Wangxi Road, Haidian District, Beijing

Applicant after: DMS Corp.

Address before: Room 203, 2nd Floor, Building 4, East Courtyard, No. 10 Wangdong Road, Northwest Haidian District, Beijing 100094

Applicant before: BEIJING DAMEISHENG TECHNOLOGY Co.,Ltd.