CN111476886B - Smart building three-dimensional model rendering method and building cloud server - Google Patents


Info

Publication number
CN111476886B
CN111476886B (application CN202010262008.0A)
Authority
CN
China
Prior art keywords
rendering
building
target
space
rendering unit
Prior art date
Legal status
Active
Application number
CN202010262008.0A
Other languages
Chinese (zh)
Other versions
CN111476886A (en)
Inventor
张志云
Current Assignee
Henan Yuntuo Intelligent Technology Co ltd
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN202011114192.0A (published as CN112288866A)
Priority to CN202010262008.0A (published as CN111476886B)
Priority to CN202011114200.1A (published as CN112288867A)
Publication of CN111476886A
Application granted
Publication of CN111476886B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/04Architectural design, interior design

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Remote Sensing (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the invention provide a smart building three-dimensional model rendering method and a building cloud server. Building object entities in each smart building simulation space are classified according to predetermined building functions, so that the differences between the building functions of the smart building simulation system are taken into account and rendering conflicts during the rendering process are reduced. In addition, by combining the rendering data type information and the simulation rendering stream information of the target building three-dimensional model, the rendering state sequences obtained for the two kinds of rendering unit spaces are compared, and each model resource in the target building three-dimensional model is then rendered in the corresponding rendering unit space of the smart building simulation space. This makes it convenient to rapidly render certain important rendering unit spaces based on the simulation rendering situation of a previous simulation, which improves rendering efficiency and reduces the user's waiting time.

Description

Smart building three-dimensional model rendering method and building cloud server
Technical Field
The invention relates to the technical field of smart buildings, and in particular to a smart building three-dimensional model rendering method and a building cloud server.
Background
With the rapid development of Internet of Things and 5G technology, the Internet of Things plays an increasingly important role. A smart building system built on Internet of Things technology can provide more humanized and intelligent terminal solution services. Currently, when a smart building is planned, three-dimensional model rendering of the smart building system is performed in advance; for example, the operating conditions of each smart component (e.g., a human-computer interaction terminal, a security terminal, and a mobile application terminal) in the smart building system are rendered in advance to facilitate subsequent service updating.
Generally, a smart building simulation space contains a number of different rendering unit spaces. Through creative research, the inventor found that conventional schemes do not consider the differences between the building functions of a smart building system, which easily leads to rendering conflicts during the rendering process. Moreover, during rendering a user may need to quickly render certain important rendering unit spaces based on the simulation rendering situation of a previous simulation, a requirement that conventional schemes cannot meet; as a result, the user's waiting time during actual simulation rendering may be long.
Disclosure of Invention
In order to overcome at least the above-mentioned deficiencies in the prior art, the present invention provides a smart building three-dimensional model rendering method and a building cloud server. Building object entities in each smart building simulation space are classified based on predetermined building functions, so that the differences between the building functions of the smart building system are taken into account and rendering conflicts during the rendering process are reduced. Further, by combining the rendering data type information and the simulation rendering stream information of the target building three-dimensional model, the rendering state sequences of the two kinds of rendering unit spaces are compared, and each model resource in the target building three-dimensional model is rendered in the corresponding rendering unit space of the smart building simulation space. This facilitates rapid rendering of certain important rendering unit spaces based on the simulation rendering situation of a previous simulation, improves rendering efficiency, and reduces the user's waiting time.
In a first aspect, the invention provides a method for rendering a three-dimensional model of a smart building, which is applied to a building cloud server, wherein the building cloud server is in communication connection with a plurality of building service terminals, and the method comprises the following steps:
acquiring, from each building service terminal, building object entities of the target building three-dimensional model in the smart building simulation space of each smart building object, classifying the building object entities in each smart building simulation space according to predetermined building functions, and respectively generating a building object entity set for each building function;
determining a target rendering unit space in each smart building simulation space according to the rendering data type information of the target building three-dimensional model, and, for the target rendering unit space in each smart building simulation space, respectively determining rendering component information of a first renderable component of the target rendering unit space in the building object entity set of the corresponding building function to obtain a first rendering state sequence of the target rendering unit space, wherein the target rendering unit space is a rendering unit space pre-matched with the rendering data type information of the target building three-dimensional model;
determining key response rendering unit spaces in each smart building simulation space according to the simulation rendering stream information of the target building three-dimensional model, and, for the key response rendering unit spaces in each smart building simulation space, respectively acquiring second renderable components of the key response rendering unit spaces and determining rendering component information of the second renderable components in the building object entity set of the corresponding building function to obtain a second rendering state sequence of the key response rendering unit spaces, wherein a key response rendering unit space is a rendering unit space whose rendering key response index in the simulation rendering stream information of the target building three-dimensional model is greater than a set key response index threshold, and the rendering key response index represents the degree of change of the rendering unit space per unit time;
and rendering each model resource in the target building three-dimensional model in the corresponding rendering unit space of the smart building simulation space according to the matching relationship between the first rendering state sequence and the second rendering state sequence.
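The four steps above can be sketched end to end. This is a hypothetical skeleton only: every function name, dictionary key, and data shape here (`entities`, `unit_spaces`, `function`, `data_type`, `response_index`) is an illustrative assumption, not the patent's API, and the final matching step is deliberately simplified.

```python
def render_target_model(entities, unit_spaces, data_type, response_threshold):
    # Step 1: classify building object entities by predetermined building function.
    entity_sets = {}
    for e in entities:
        entity_sets.setdefault(e["function"], []).append(e)

    # Step 2: target rendering unit spaces are those pre-matched to the
    # model's rendering data type (first rendering state sequence).
    first_sequence = [s for s in unit_spaces if s["data_type"] == data_type]

    # Step 3: key response unit spaces are those whose rendering key response
    # index (degree of change per unit time) exceeds the set threshold
    # (second rendering state sequence).
    second_sequence = [s for s in unit_spaces
                       if s["response_index"] > response_threshold]

    # Step 4: render resources where the two sequences match (the full
    # matching rule is elaborated in the later implementation steps).
    rendered = [s["name"] for s in first_sequence if s in second_sequence]
    return entity_sets, first_sequence, second_sequence, rendered
```

Used this way, a space such as a frequently changing lobby would appear in both sequences and be rendered first, which is the fast-path behavior the summary describes.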
In a possible implementation manner of the first aspect, the step of classifying building object entities under each intelligent building simulation space according to a predetermined building function and generating a building object entity set of each building function respectively includes:
acquiring a building object corresponding to each preset building function, forming a building object sequence of each preset building function, and acquiring related building object information of each target building object of each intelligent building simulation space and the building object of the building object sequence;
calculating the density of key building objects of each target building function according to the target building objects and the building object information related to the building objects of the building object sequence, and selecting the building objects from the building object sequence according to the density of the key building objects of each target building function to obtain an initial building object distribution space;
if the total building object distribution density of the initial building object distribution space is greater than the maximum total building object distribution density required by the total building object distribution density, dispersing first key building objects in the initial building object distribution space to a first distribution density, and gathering second key building objects in the initial building object distribution space to the first distribution density, wherein the second key building objects refer to key building objects of which the unit density of the building units in which the key building objects are located is less than a set degree, and the first key building objects refer to key building objects of which the unit density of the building units in which the key building objects are located is not less than the set degree;
calculating the total building object distribution density of the updated initial building object distribution space;
if the total building object distribution density of the initial building object distribution space after the updating is larger than the maximum total building object distribution density, executing the above processing on the initial building object distribution space after the updating again;
if the total building object distribution density of the initial building object distribution space after the updating is less than or equal to the maximum total building object distribution density, taking the initial building object distribution space before the updating as a first updating distribution space, and sequencing the target building functions according to the sequence of the building functions from low priority to high priority to obtain a target building function sequence;
and classifying the building object entities under the building simulation space of each intelligent building according to the target building function sequence, and respectively generating a building object entity set of each building function.
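The disperse/gather iteration above can be read as a loop that repeats until the total distribution density fits under the cap. This is a minimal sketch under one plausible reading: "dispersing" a first key object caps its density at the first distribution density, "gathering" a second key object sets it to that density, and the loop stops if a pass changes nothing. All names are assumptions.

```python
def adjust_distribution(objects, max_total_density, set_degree, first_density):
    """Repeat the disperse/gather step until the total building object
    distribution density no longer exceeds the maximum."""
    def total():
        return sum(o["density"] for o in objects)

    while total() > max_total_density:
        changed = False
        for o in objects:
            if o["unit_density"] >= set_degree:
                # first key objects (unit density not less than the set
                # degree): disperse down to the first distribution density
                new = min(o["density"], first_density)
            else:
                # second key objects (unit density less than the set
                # degree): gather to the first distribution density
                new = first_density
            if new != o["density"]:
                o["density"] = new
                changed = True
        if not changed:
            break  # cannot reduce further; stop rather than loop forever
    return objects
```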
In a possible implementation manner of the first aspect, the rendering data type information includes rendering scene type information, and the step of determining the target rendering unit space in each smart building simulation space according to the rendering data type information of the target building three-dimensional model includes:
and acquiring rendering scene type information of the target building three-dimensional model, and acquiring target rendering unit spaces in the building simulation spaces of the intelligent buildings according to the rendering scene type information and the corresponding relation between each preset rendering scene type information and the target rendering unit space in each building unit.
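The lookup described above amounts to a preset mapping from rendering scene type to the target rendering unit spaces of each building unit. A minimal sketch, with the mapping contents invented purely for illustration:

```python
# Hypothetical preset correspondence between rendering scene type information
# and target rendering unit spaces; the entries are illustrative only.
SCENE_TO_UNIT_SPACES = {
    "interior": ["lobby", "corridor"],
    "exterior": ["facade", "roof"],
}

def target_unit_spaces(scene_type, mapping=SCENE_TO_UNIT_SPACES):
    """Return the target rendering unit spaces for a given scene type."""
    return mapping.get(scene_type, [])
```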
In a possible implementation manner of the first aspect, the step of obtaining a first rendering state sequence of the target rendering unit space by determining, for the target rendering unit space in each smart building simulation space, rendering component information of a first renderable component of the target rendering unit space in a building object entity set of the corresponding building function, includes:
aiming at a target rendering unit space in each intelligent building simulation space, respectively obtaining a geometry shader matched with the target rendering unit space, and obtaining a model rendering unit corresponding to the geometry shader when the geometry shader continuously colors a model rendering component corresponding to one model rendering unit in the intelligent building simulation space in a preset time period as a target model rendering unit;
judging whether the rendering and coloring features of the target model rendering unit are matched with the rendering and coloring features of a preset decision node of a state decision unit or not, if the rendering and coloring features are not matched, adjusting the rendering and coloring features of the target model rendering unit to a model rendering unit matched with the rendering and coloring features of the decision node of the state decision unit, and inputting the rendering and coloring features to the state decision unit;
calculating an input model rendering unit by adopting the state decision unit, acquiring rendering component information corresponding to the input model rendering unit, tracking each rendering change control of the target rendering unit space in the target model rendering unit, and acquiring a control tracking special effect of each rendering change control in the target model rendering unit;
determining a rendering component with a rendering change control key response index larger than a preset response index in rendering component information corresponding to the input model rendering unit as a first renderable component, and converting a control special effect vector of each rendering change control in the input model rendering unit to obtain a control tracking special effect of each rendering change control in the input model rendering unit;
determining a first control tracking special effect set of the whole model rendering unit according to the control tracking special effect of each rendering change control in the target model rendering unit, and determining a second control tracking special effect set of the first renderable component according to the control tracking special effect of each rendering change control in the first renderable component;
determining a control tracking special effect set of the first renderable component according to the first control tracking special effect set, the second control tracking special effect set and a preset proportion, and determining rendering component information of the first renderable component of the target rendering unit space in a building object entity set of the building function corresponding to the first renderable component of the target rendering unit space according to the control tracking special effect of each rendering change control in the target model rendering unit and the control tracking special effect set to obtain a first rendering state sequence of the target rendering unit space.
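The final combination step above weighs the whole-unit tracking set against the component-level tracking set by a preset proportion. The weighting scheme below is an assumption (a simple per-control linear blend), sketched only to make the "preset proportion" step concrete:

```python
def combine_tracking_sets(first_set, second_set, proportion):
    """Blend the first (whole model rendering unit) and second (first
    renderable component) control tracking special effect sets, weighting
    the first set by the preset proportion. Controls absent from the
    second set contribute zero."""
    return {
        control: proportion * first_set[control]
                 + (1.0 - proportion) * second_set.get(control, 0.0)
        for control in first_set
    }
```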
In a possible implementation manner of the first aspect, the step of determining, according to the control tracking special effect of each rendering change control in the target model rendering unit and the control tracking special effect set, rendering component information of a first renderable component of the target rendering unit space in a building object entity set of a corresponding building function to obtain a first rendering state sequence of the target rendering unit space includes:
determining a control tracking special effect of each rendering change control in the target model rendering unit and a matching special effect of the control tracking special effect set, acquiring a first key control special effect of each rendering change control in the target model rendering unit according to the matching special effect, and acquiring a key control special effect of each rendering change control in the target model rendering unit according to the first key control special effect of each rendering change control in the target model rendering unit and the rendering component information;
or, calculating a matching special effect between the control tracking special effect of each rendering change control in the target model rendering unit and the control tracking special effect set to obtain a first key control special effect of each rendering change control in the target model rendering unit; calculating, according to a preset rendering interval, a second key control special effect of each rendering change control in the target model rendering unit from the first key control special effect, wherein the difference of the special effect rendering range between the second key control special effect and the first key control special effect is smaller than the preset rendering interval; and acquiring a key control special effect of each rendering change control in the target model rendering unit according to the second key control special effect of each rendering change control in the target model rendering unit and the rendering component information;
and determining rendering component information of the first renderable component in a building object entity set of the corresponding building function according to the key control special effect of each rendering change control in the target model rendering unit so as to obtain a first rendering state sequence of the target rendering unit space.
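The second branch's constraint — that the second key control special effect stay within the preset rendering interval of the first — can be read as a quantization step. This is only one plausible reading, with the snapping rule invented for illustration:

```python
def second_key_effect(first_effect, preset_interval):
    """Derive a second key control special effect whose rendering-range
    difference from the first key control special effect is smaller than
    the preset rendering interval (illustrative: snap to the nearest
    multiple of the interval, so the difference is at most half of it)."""
    return round(first_effect / preset_interval) * preset_interval
```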
In a possible implementation manner of the first aspect, the step of determining the key response rendering unit spaces in each smart building simulation space according to the simulation rendering stream information of the target building three-dimensional model includes:
acquiring simulation rendering stream information of the target building three-dimensional model, wherein the simulation rendering stream information comprises a plurality of simulation rendering dynamic information respectively corresponding to a plurality of rendering unit spaces;
when determining that a plurality of simulation rendering dynamic information corresponding to any one rendering unit space all meet a preset simulation rendering dynamic condition, determining an initial simulation rendering area of a first simulation rendering dynamic area matched with the preset simulation rendering dynamic condition according to the simulation rendering dynamic information of the rendering unit space and the range size of the simulation rendering dynamic area, wherein the preset simulation rendering dynamic condition comprises: the simulation rendering dynamic area is larger than the set range;
determining a plurality of simulated rendering dynamic areas matched with the preset simulated rendering dynamic conditions to correspond to the initial simulated rendering area of the rendering unit space according to the simulated rendering dynamic information of the rendering unit space, the range size of the simulated rendering dynamic area, the initial simulated rendering area of the first simulated rendering dynamic area and the density of the preset simulated rendering dynamic area;
if a rendering component in the rendering unit space matches the initial simulation rendering area of a function level change interval, and the rendering component is the first rendering component of the function level change interval, acquiring the rendering unit space matched with the previous simulation rendering dynamic area adjacent to the function level change interval as a screening rendering unit space, and identifying, among the rendering components with the screening rendering unit space removed, one rendering unit space as the target rendering unit space matched with the function level change interval;
if the rendering component is not the first rendering component of the function level change interval, acquiring a target rendering unit space matched with the function level change interval, identifying the target rendering unit space in the rendering component, and identifying at least one active simulation rendering object of the target rendering unit space, wherein the rendering unit space corresponds to a plurality of simulation rendering dynamic areas;
in the simulated rendering dynamic area, according to rendering scene information of at least one active simulated rendering object of the target rendering unit space in the plurality of rendering components, calculating a rendering dynamic distance between any two adjacent rendering components of the at least one active simulated rendering object of the target rendering unit space in the simulated rendering dynamic area, and a scene feature of the at least one active simulated rendering object of the target rendering unit space in the simulated rendering dynamic area;
counting the continuous rendering time of the simulated rendering dynamic region, determining an average rendering key response index and a rendering key response index variance of the target rendering unit space in the simulated rendering dynamic region according to the rendering dynamic distance and the scene characteristics, and calculating a key response characteristic parameter of the target rendering unit space in the simulated rendering dynamic region according to the average rendering key response index and the rendering key response index variance;
and calculating the key response scores of the rendering unit spaces according to the key response characteristic parameters of each rendering unit space in the matched simulated rendering dynamic region, and determining the rendering unit spaces with the key response scores larger than the set scores as the key response rendering unit spaces.
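The scoring step above combines the average rendering key response index and its variance into a characteristic parameter and keeps spaces whose score exceeds the set score. The particular formula below (mean discounted by variance) is an assumption made for illustration; the patent does not fix the combination:

```python
from statistics import mean, pvariance

def key_response_spaces(index_streams, score_threshold):
    """Score each rendering unit space from the stream of rendering key
    response indices observed in its matched simulated rendering dynamic
    region, and keep those whose score exceeds the set score."""
    selected = []
    for name, indices in index_streams.items():
        avg = mean(indices)          # average rendering key response index
        var = pvariance(indices)     # rendering key response index variance
        score = avg / (1.0 + var)    # illustrative characteristic parameter
        if score > score_threshold:
            selected.append(name)
    return selected
```

A steadily fast-changing space (high mean, low variance) scores highest, matching the intent that key response spaces are those changing most per unit time.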
In a possible implementation manner of the first aspect, the step of rendering each model resource in the target building three-dimensional model in each corresponding rendering unit space of the smart building simulation space according to a matching relationship between the first rendering state sequence and the second rendering state sequence includes:
matching the rendering state sequence of each target rendering unit space in the first rendering state sequence with the rendering state sequence of each matched key response rendering unit space in the second rendering state sequence to obtain a plurality of matching degrees, wherein each matched key response rendering unit space in the second rendering state sequence is matched with the arrangement sequence of the corresponding target rendering unit space in the respective rendering state sequence, and the matching degree is determined according to the coincidence degree between the rendering state sequence of the target rendering unit space and the rendering state sequence of the matched key response rendering unit space;
and rendering each model resource in the target building three-dimensional model under each corresponding rendering unit space of the intelligent building simulation space according to the matching degrees.
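The matching degree above is defined by the coincidence (degree of overlap) between the two rendering state sequences. A minimal sketch, where the Jaccard-style overlap measure is an assumed concretization of "coincidence degree":

```python
def matching_degree(first_states, second_states):
    """Coincidence between a target rendering unit space's state sequence
    and the matched key response rendering unit space's state sequence,
    as shared states over total distinct states."""
    a, b = set(first_states), set(second_states)
    return len(a & b) / len(a | b) if (a or b) else 1.0
```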
In a possible implementation manner of the first aspect, the step of rendering each model resource in the target building three-dimensional model in each corresponding rendering unit space of the smart building simulation space according to the plurality of matching degrees includes:
when the matching degree between the rendering state sequence of any one target rendering unit space and the rendering state sequence of the matched key response rendering unit space is greater than the set matching degree, taking the target rendering unit space and the key response rendering unit space as a rendering combination unit space;
when the matching degree between the rendering state sequence of any one target rendering unit space and the rendering state sequence of the matched key response rendering unit space is not greater than the set matching degree, independently taking the target rendering unit space and the key response rendering unit space as an independent rendering unit space;
in the process of rendering each model resource in the target building three-dimensional model, when the rendering unit space corresponding to the model resource exists in the rendering combination unit space, the rendering of the model resource is synchronously completed in the rendering combination unit space, and when the rendering unit space corresponding to the model resource exists in the independent rendering unit space, the rendering of the model resource is completed in the independent rendering unit space.
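The thresholding rule above partitions paired unit spaces into combined and independent rendering. A hedged sketch, with `degree_of` standing in for whatever matching-degree computation is used (names assumed):

```python
def plan_rendering(pairs, set_matching_degree, degree_of):
    """Group each (target, key response) unit-space pair: above the set
    matching degree they form a rendering combination unit space whose
    model resources render synchronously; otherwise each renders
    independently."""
    combined, independent = [], []
    for target_space, key_space in pairs:
        if degree_of(target_space, key_space) > set_matching_degree:
            combined.append((target_space, key_space))       # synchronous
        else:
            independent.extend([target_space, key_space])    # separate
    return combined, independent
```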
In a second aspect, an embodiment of the present invention further provides a smart building three-dimensional model rendering apparatus, which is applied to a building cloud server, where the building cloud server is in communication connection with a plurality of building service terminals, and the apparatus includes:
the classification module is used for acquiring building object entities of the target building three-dimensional model in the intelligent building simulation space of each intelligent building object from each building service terminal, classifying the building object entities in the intelligent building simulation space according to the preset building functions, and respectively generating a building object entity set of each building function;
a first determining module, configured to determine, according to rendering data type information of the target building three-dimensional model, a target rendering unit space in each smart building simulation space, and for the target rendering unit space in each smart building simulation space, respectively determine rendering component information of a first renderable component of the target rendering unit space in a building object entity set of a corresponding building function, to obtain a first rendering state sequence of the target rendering unit space, where the target rendering unit space is a rendering unit space that is pre-matched with the rendering data type information of the target building three-dimensional model;
a second determining module, configured to determine key response rendering unit spaces in each smart building simulation space according to the simulation rendering stream information of the target building three-dimensional model, and, for the key response rendering unit spaces in each smart building simulation space, respectively obtain second renderable components of the key response rendering unit spaces and determine rendering component information of the second renderable components in the building object entity set of the corresponding building function to obtain a second rendering state sequence of the key response rendering unit spaces, wherein a key response rendering unit space is a rendering unit space whose rendering key response index in the simulation rendering stream information of the target building three-dimensional model is greater than a set key response index threshold, and the rendering key response index represents the degree of change of the rendering unit space per unit time;
and the rendering module is used for rendering each model resource in the target building three-dimensional model under each corresponding rendering unit space of the intelligent building simulation space according to the matching relation between the first rendering state sequence and the second rendering state sequence.
In a third aspect, an embodiment of the present invention further provides a smart building system, where the smart building system includes a building cloud server and a plurality of building service terminals communicatively connected to the building cloud server, wherein:
the building service terminal is used for sending building object entities of the target building three-dimensional model in the intelligent building simulation space of each intelligent building object to the building cloud server;
the building cloud server is used for acquiring building object entities of the target building three-dimensional model in the intelligent building simulation space of each intelligent building object from each building service terminal, classifying the building object entities in the intelligent building simulation space according to the preset building functions, and respectively generating a building object entity set of each building function;
the building cloud server is used for determining a target rendering unit space in each intelligent building simulation space according to rendering data type information of the target building three-dimensional model, respectively determining rendering component information of a first renderable component of the target rendering unit space in a building object entity set of a building function corresponding to the first renderable component to obtain a first rendering state sequence of the target rendering unit space, and the target rendering unit space is a rendering unit space matched with rendering data type information of the target building three-dimensional model in advance;
the building cloud server is configured to determine a key response rendering unit space in each smart building simulation space according to the simulation rendering stream information of the target building three-dimensional model, obtain, for the key response rendering unit space in each smart building simulation space, second renderable components of that space, and determine rendering component information of the second renderable components in the building object entity set of the corresponding building function to obtain a second rendering state sequence of the key response rendering unit space; the key response rendering unit space is a rendering unit space whose rendering key response index in the simulation rendering stream information of the target building three-dimensional model is greater than a set key response index threshold, and the rendering key response index is used to represent the degree of change of the rendering unit space in unit time;
and the building cloud server is used for rendering each model resource in the target building three-dimensional model under each corresponding rendering unit space of the intelligent building simulation space according to the matching relation between the first rendering state sequence and the second rendering state sequence.
In a fourth aspect, an embodiment of the present invention further provides a building cloud server, where the building cloud server includes a processor, a machine-readable storage medium, and a network interface, where the machine-readable storage medium, the network interface, and the processor are connected through a bus system, the network interface is configured to be communicatively connected to at least one building service terminal, the machine-readable storage medium is configured to store a program, an instruction, or code, and the processor is configured to execute the program, the instruction, or the code in the machine-readable storage medium to perform the method for rendering a three-dimensional model of a smart building in any one of the possible designs in the first aspect or the first aspect.
In a fifth aspect, an embodiment of the present invention provides a computer-readable storage medium having instructions stored therein which, when executed, cause a computer to perform the smart building three-dimensional model rendering method in the first aspect or any one of the possible designs of the first aspect.
Based on any one of the above aspects, the building object entities under each smart building simulation space are classified according to the predetermined building functions, so that the differences between the building functions of the smart building system are taken into account and rendering conflicts during the rendering process are reduced. In addition, by combining the rendering data type information and the simulation rendering stream information of the target building three-dimensional model, and comparing the rendering state sequences of the two kinds of rendering unit spaces before rendering the model resources of the target building three-dimensional model under each corresponding rendering unit space of the smart building simulation space, important rendering unit spaces can be rendered quickly based on the simulation rendering conditions of the previous simulation, which improves rendering efficiency and reduces user waiting time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a schematic view of an application scenario of a smart building system according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a method for rendering a three-dimensional model of a building of an intelligent building according to an embodiment of the present invention;
fig. 3 is a schematic functional block diagram of a three-dimensional model rendering apparatus for a smart building according to an embodiment of the present invention;
fig. 4 is a block diagram illustrating a structure of a building cloud server for implementing the intelligent building three-dimensional model rendering method according to the embodiment of the present invention.
Detailed Description
The present invention is described in detail below with reference to the drawings, and the specific operation methods in the method embodiments can also be applied to the apparatus embodiments or the system embodiments.
Fig. 1 is an interactive schematic diagram of a smart building system 10 according to an embodiment of the present invention. The smart building system 10 may include a building cloud server 100 and a building service terminal 200 communicatively connected to the building cloud server 100. The smart building system 10 shown in fig. 1 is only one possible example; in other possible embodiments, the smart building system 10 may include only some of the components shown in fig. 1 or may further include other components.
In this embodiment, the building service terminal 200 may include a mobile device, a tablet computer, a laptop computer, or the like, or any combination thereof. In some embodiments, the mobile device may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home devices may include control devices of smart electrical appliances, smart monitoring devices, smart televisions, smart cameras, and the like, or any combination thereof. In some embodiments, the wearable device may include a smart bracelet, smart footwear, smart glasses, a smart helmet, a smart watch, a smart garment, a smart backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant, a gaming device, and the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality eye mask, an augmented reality helmet, augmented reality glasses, an augmented reality eye mask, or the like, or any combination thereof. For example, the virtual reality device and/or augmented reality device may include various virtual reality products and the like.
In this embodiment, the building cloud server 100 and the building service terminal 200 in the smart building system 10 may cooperatively perform the smart building three-dimensional model rendering method described in the following method embodiment; for the specific steps performed by the building cloud server 100 and the building service terminal 200, reference may be made to the following method embodiment.
In this embodiment, the smart building system 10 can be applied in various application scenarios, such as a blockchain application scenario, a smart home application scenario, and a smart control application scenario.
To solve the technical problem in the background art, fig. 2 is a schematic flow chart of a smart building three-dimensional model rendering method according to an embodiment of the present invention, which can be executed by the building cloud server 100 shown in fig. 1, and the smart building three-dimensional model rendering method is described in detail below.
Step S110, building object entities of the target building three-dimensional model in the intelligent building simulation space of each intelligent building object are obtained from each building service terminal, the building object entities in the intelligent building simulation space of each intelligent building object are classified according to the preset building functions, and building object entity sets of each building function are respectively generated.
Step S120, determining a target rendering unit space in each smart building simulation space according to the rendering data type information of the target building three-dimensional model, and determining, for the target rendering unit space in each smart building simulation space, rendering component information of a first renderable component of that space in the building object entity set of the corresponding building function, to obtain a first rendering state sequence of the target rendering unit space.
Step S130, determining a key response rendering unit space in each smart building simulation space according to the simulation rendering stream information of the target building three-dimensional model, obtaining, for the key response rendering unit space in each smart building simulation space, second renderable components of that space, and determining rendering component information of the second renderable components in the building object entity set of the corresponding building function, to obtain a second rendering state sequence of the key response rendering unit space.
Step S140, rendering each model resource in the target building three-dimensional model under each corresponding rendering unit space of the intelligent building simulation space according to the matching relation between the first rendering state sequence and the second rendering state sequence.
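The overall flow of steps S110 to S140 can be sketched as follows. This is a minimal illustrative skeleton, not the patent's implementation: all data shapes (lists of pairs, dicts of spaces) and function names are assumptions made for the example.

```python
from collections import defaultdict

def classify_entities(entities):
    """Step S110: group (building_function, entity) pairs into a building
    object entity set per predetermined building function."""
    entity_sets = defaultdict(list)
    for function, entity in entities:
        entity_sets[function].append(entity)
    return dict(entity_sets)

def select_unit_spaces(spaces, data_type, sim_stream, threshold):
    """Steps S120/S130: pick the target rendering unit spaces (pre-matched to
    the model's rendering data type) and the key response rendering unit
    spaces (simulated key response index above the set threshold)."""
    target = [s["id"] for s in spaces if s["data_type"] == data_type]
    key = [s["id"] for s in spaces if sim_stream.get(s["id"], 0.0) > threshold]
    return target, key
```

In step S140 the state sequences produced for these two sets of unit spaces would then be matched before the model resources are rendered.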
In this embodiment, the target rendering unit space is a rendering unit space that is pre-matched with the rendering data type information of the target building three-dimensional model. In detail, for different target building three-dimensional models (e.g., an office building three-dimensional model, a residential building three-dimensional model, etc.), different rendering unit spaces may be preset corresponding to the different internet-of-things service requirements of each model. The key response rendering unit space may be a rendering unit space whose rendering key response index in the simulation rendering stream information of the target building three-dimensional model is greater than a set key response index threshold, and the rendering key response index may be used to represent the degree of change of the rendering unit space in unit time. The internet-of-things service requirement may be determined according to actual needs, and may involve, for example, sensing devices for collecting physical quantities, chemical quantities, biomass, and the like (typically sensing facilities and signal processing devices for rainfall, illumination, traffic flow, carbon dioxide concentration, GPS, radio signal strength, blood oxygen data, heartbeat data, and so on), and is not particularly limited.
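Since the rendering key response index represents the degree of change of a rendering unit space in unit time, one concrete (assumed) reading is the average per-frame change of the space's simulated state, thresholded to select the key response spaces. The scalar frame states and function names below are illustrative only.

```python
def key_response_index(frames, unit_time=1.0):
    """Rendering key response index: average change of a rendering unit space
    per unit time over successive simulated frames (scalar frame states are
    used here for simplicity)."""
    if len(frames) < 2:
        return 0.0
    total_change = sum(abs(b - a) for a, b in zip(frames, frames[1:]))
    return total_change / ((len(frames) - 1) * unit_time)

def key_response_spaces(sim_stream, threshold, unit_time=1.0):
    """Rendering unit spaces whose index exceeds the set key response
    index threshold, per the definition above."""
    return [space_id for space_id, frames in sim_stream.items()
            if key_response_index(frames, unit_time) > threshold]
```

A space whose simulated state barely changes between frames thus falls below the threshold and is not treated as a key response rendering unit space.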
Based on the above steps, the present embodiment classifies building object entities under each smart building simulation space based on the predetermined building functions, thereby taking into account the differences between building functions of the smart building system and reducing rendering conflicts during rendering. In addition, by combining the rendering data type information and the simulation rendering stream information of the target building three-dimensional model, and comparing the rendering state sequences of the two kinds of rendering unit spaces before rendering each model resource of the target building three-dimensional model under each corresponding rendering unit space of the smart building simulation space, important rendering unit spaces can be rendered quickly based on the simulation rendering conditions of the previous simulation, which improves rendering efficiency and reduces user waiting time.
In one possible implementation manner, for step S110, in order to improve the accuracy of the division and reduce redundant information so as to improve the classification accuracy, the present embodiment may acquire the building objects corresponding to each predetermined building function to form a building object sequence of each predetermined building function, and acquire the building object information associated with each target building object of each smart building simulation space and with the building objects of the building object sequence.
On the basis, the density of the key building objects of each target building function can be calculated according to the target building objects and the building object information related to the building objects of the building object sequence, and the building objects are selected from the building object sequence according to the density of the key building objects of each target building function, so that the initial building object distribution space is obtained.
In one possible example, if the total building object distribution density of the initial building object distribution space is greater than the maximum total building object distribution density of the total building object distribution density requirement, the following first update processing is performed on the initial building object distribution space: a first key building object in the initial building object distribution space is dispersed by a first distribution density, and a second key building object in the initial building object distribution space is aggregated by the first distribution density.
It should be noted that the second key building object may refer to a key building object whose building unit has a unit intensity less than a set intensity, and the first key building object may refer to a key building object whose building unit has a unit intensity not less than the set intensity. The first distribution density may be set according to actual requirements, but should not differ too much from the maximum total building object distribution density of the total building object distribution density requirement.
Then, the total building object distribution density of the initial building object distribution space after the current update is calculated; if the total building object distribution density of the initial building object distribution space after the current update is still greater than the maximum total building object distribution density, the above processing is performed again on the updated initial building object distribution space.
For another example, if the total building object distribution density of the initial building object distribution space after the current update is less than or equal to the maximum total building object distribution density, the initial building object distribution space before the current update may be used as the first update distribution space, and the target building functions may be sorted in order of building function priority from low to high to obtain a target building function sequence.
On the basis, building object entities under the building simulation space of each intelligent building can be classified according to the building function sequence of the target building, and a building object entity set of each building function is generated respectively.
For example, the target building functions may be grouped according to the target building function sequence, each group including a first building function and a second building function that are adjacent in the function hierarchy of the target building function sequence and differ by one level of the function hierarchy, the first building function having a lower priority than the second building function.
Then, each group is taken in turn as a target group in order of function hierarchy priority from low to high, and the following second update processing is performed on the target group: the key building objects of the first building function of the target group in the first update distribution space are increased by a set number, and the key building objects of the second building function of the target group in the first update distribution space are decreased by the set number.
On this basis, it can be determined whether the total building object distribution density of the updated first update distribution space is greater than the total building object distribution density requirement. If it is, the updated first update distribution space is used as the final building object distribution space; if it is not, the next group is taken as a new target group, and the second update processing is performed on the new target group.
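The density-adjustment loop described above, in which updates are repeated until the total distribution density no longer exceeds the maximum and the space from before the final update is kept, can be sketched as follows. The exact amounts by which objects disperse and aggregate are not fixed by the text, so this sketch only disperses first key objects by a fixed step; all names and data shapes are assumptions.

```python
import copy

def first_update(space, set_intensity, step):
    """One pass of the update: key building objects whose building unit
    intensity is at least the set intensity (first key objects) are dispersed
    by `step`. `space` maps object id -> [unit_intensity, density]."""
    out = copy.deepcopy(space)
    for obj in out.values():
        if obj[0] >= set_intensity:
            obj[1] = max(obj[1] - step, 0.0)
    return out

def balance(space, max_total, set_intensity, step):
    """Repeat the update while the total distribution density exceeds the
    maximum; per the text, the distribution *before* the final successful
    update is taken as the first update distribution space."""
    prev = space
    while sum(o[1] for o in prev.values()) > max_total:
        nxt = first_update(prev, set_intensity, step)
        if nxt == prev:  # nothing left to disperse; stop to avoid looping
            return prev
        if sum(o[1] for o in nxt.values()) <= max_total:
            return prev  # keep the space before this update
        prev = nxt
    return prev
```

The third and fourth update processing described below follow the same loop shape with the direction of the adjustment reversed.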
For another example, if the total building object distribution density of the initial building object distribution space is less than the minimum total building object distribution density of the total building object distribution density requirement, the following third update processing is performed on the initial building object distribution space: the first key building object in the initial building object distribution space is increased by a first distribution density, and the second key building object in the initial building object distribution space is decreased by the first distribution density.
On this basis, the total building object distribution density of the initial building object distribution space after the current update is calculated; if it is still less than the minimum total building object distribution density, the third update processing is performed again on the updated initial building object distribution space. Alternatively, if the total building object distribution density of the initial building object distribution space after the current update is greater than or equal to the minimum total building object distribution density, the initial building object distribution space before the current update is used as a second update distribution space, and the target building functions are sorted in order of building function priority from low to high to obtain a target building function sequence.
Thus, the target building functions can be grouped according to the target building function sequence, each group including a first building function and a second building function that are adjacent in the function hierarchy of the target building function sequence and differ by one level of the function hierarchy, the first building function having a lower priority than the second building function.
Then, each group is taken in turn as a target group in order of function hierarchy priority from low to high, and the following fourth update processing is performed on the target group: the key building objects of the first building function of the target group in the second update distribution space are decreased by a set number, and the key building objects of the second building function of the target group in the second update distribution space are increased by the set number.
Further, this embodiment may determine whether the total building object distribution density of the updated second update distribution space is greater than the total building object distribution density requirement. If it is, the updated second update distribution space is used as the final building object distribution space; if it is not, the next group is taken as a new target group, and the fourth update processing is performed on the new target group.
In this way, the building object entities of each building object in the final building object distribution space of each target building function can be classified into the building object entity set of that building function.
In a possible implementation manner, the rendering data type information may include rendering scene type information. For step S120, the embodiment may acquire the rendering scene type information of the target building three-dimensional model, and obtain the target rendering unit space in each smart building simulation space according to the rendering scene type information and a preset correspondence between each kind of rendering scene type information and the target rendering unit space of each building unit.
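The preset correspondence between rendering scene type information and target rendering unit spaces amounts to a lookup table, which could be sketched as below. All scene types and space names here are hypothetical placeholders, not values from the patent.

```python
# Hypothetical preset correspondence between rendering scene type information
# and the target rendering unit spaces of each building unit.
SCENE_TO_TARGET_SPACES = {
    "office": ["lobby_unit_space", "floor_unit_space"],
    "residential": ["apartment_unit_space"],
}

def target_unit_spaces(scene_type):
    """Look up the preset target rendering unit spaces for a scene type;
    unknown scene types yield no target spaces."""
    return SCENE_TO_TARGET_SPACES.get(scene_type, [])
```

Because the correspondence is preset, determining the target rendering unit spaces for a model reduces to a single lookup on its scene type.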
In a possible implementation manner, still for step S120, in this embodiment, for the target rendering unit space in each smart building simulation space, a geometry shader matching the target rendering unit space is obtained, and the model rendering unit in the smart building simulation space that is continuously rendered by the geometry shader within a preset time period is obtained as the target model rendering unit.
On this basis, it can be determined whether the rendering and coloring features of the target model rendering unit match the rendering and coloring features of the preset decision node of the state decision unit. If they do not match, the rendering and coloring features of the target model rendering unit are adjusted to a model rendering unit matching the rendering and coloring features of the decision node of the state decision unit, which is then input to the state decision unit.
Then, the state decision unit is used to process the input model rendering unit to obtain the rendering component information corresponding to the input model rendering unit, and each rendering change control in the target rendering unit space of the target model rendering unit is tracked to obtain the control tracking special effect of each rendering change control in the target model rendering unit. In this way, a rendering component whose rendering change control key response index is greater than a preset response index in the rendering component information corresponding to the input model rendering unit can be determined as a first renderable component, and the control special effect vector of each rendering change control in the input model rendering unit can be converted to obtain the control tracking special effect of each rendering change control in the input model rendering unit.
Then, a first control tracking special effect set of the whole model rendering unit is determined according to the control tracking special effect of each rendering change control in the target model rendering unit, and a second control tracking special effect set of the first renderable component is determined according to the control tracking special effect of each rendering change control in the first renderable component. From the first control tracking special effect set, the second control tracking special effect set, and a preset scale, a control tracking special effect set of the first renderable component can be determined. The rendering component information of the first renderable component of the target rendering unit space in the building object entity set of the corresponding building function is then determined according to the control tracking special effect of each rendering change control in the target model rendering unit and the control tracking special effect set, so as to obtain the first rendering state sequence of the target rendering unit space.
For example, in one possible example, a control tracking special effect of each rendering change control in the target model rendering unit and a matching special effect of the control tracking special effect set may be determined, a first key control special effect of each rendering change control in the target model rendering unit may be obtained according to the matching special effect, and a key control special effect of each rendering change control in the target model rendering unit may be obtained according to the first key control special effect of each rendering change control in the target model rendering unit and the rendering component information.
For another example, in another possible example, a matching effect of the control tracking effect of each rendering change control in the target model rendering unit and the control tracking effect set may be calculated to obtain a first key control effect of each rendering change control in the target model rendering unit, and the first key control effect of each rendering change control in the target model rendering unit may be calculated according to a preset rendering interval to obtain a second key control effect of each rendering change control in the target model rendering unit.
It should be noted that a difference between a special effect rendering range of the second key control special effect and the first key control special effect is smaller than a preset rendering interval, so that the key control special effect of each rendering change control in the target model rendering unit is obtained according to the second key control special effect and the rendering component information of each rendering change control in the target model rendering unit.
Therefore, the rendering component information of the first renderable component in the building object entity set of the corresponding building function can be determined and obtained according to the key control special effect of each rendering change control in the target model rendering unit, so that the first rendering state sequence of the target rendering unit space can be obtained.
In a possible implementation manner, for step S130, the present embodiment may acquire the simulation rendering stream information of the target building three-dimensional model, where the simulation rendering stream information may specifically include a plurality of pieces of simulation rendering dynamic information respectively corresponding to a plurality of rendering unit spaces. Then, when it is determined that the plurality of pieces of simulation rendering dynamic information corresponding to any one rendering unit space all meet a preset simulation rendering dynamic condition, an initial simulation rendering area of the first simulation rendering dynamic area matching the preset simulation rendering dynamic condition is determined according to the simulation rendering dynamic information of the rendering unit space and the range size of the simulation rendering dynamic area. The preset simulation rendering dynamic condition may include: the simulation rendering dynamic area is larger than a set range.
Then, according to the simulation rendering dynamic information of the rendering unit space, the range size of the simulation rendering dynamic region, the initial simulation rendering region of the first simulation rendering dynamic region, and the preset density of the simulation rendering dynamic regions, the initial simulation rendering regions of the plurality of simulation rendering dynamic regions matching the preset simulation rendering dynamic condition are determined for the rendering unit space. If the rendering unit space position of the rendering component corresponding to the rendering unit space matches the initial simulation rendering area of the function level change interval, and the rendering component is the first rendering component of the function level change interval, the rendering unit space matching the previous simulation rendering dynamic area adjacent to the function level change interval is obtained as a screening rendering unit space, and one rendering unit space excluding the screening rendering unit space is identified in the rendering component as the target rendering unit space matching the function level change interval.
For another example, if the rendering component is not the first rendering component of the function-level change interval, a target rendering unit space matching the function-level change interval is obtained, the target rendering unit space is identified in the rendering component, and at least one active simulation rendering object of the target rendering unit space is identified, where each rendering unit space corresponds to a plurality of simulation rendering dynamic regions.
Therefore, in the simulated rendering dynamic area, according to the rendering scene information of the at least one active simulated rendering object of the target rendering unit space in the plurality of rendering components, the rendering dynamic distance between any two adjacent rendering components of the at least one active simulated rendering object of the target rendering unit space in the simulated rendering dynamic area and the scene characteristics of the at least one active simulated rendering object of the target rendering unit space in the simulated rendering dynamic area can be calculated.
Then, the continuous rendering time of the simulation rendering dynamic region may be counted, and the average rendering key response index and the rendering key response index variance of the target rendering unit space in the simulation rendering dynamic region may be determined according to the rendering dynamic distance and the scene characteristics (for example, the average rendering key response index may be obtained by multiplying the continuous rendering time by the rendering dynamic distance and the scene characteristics, and the corresponding rendering key response index variance may then be obtained from the average rendering key response index). The key response characteristic parameter of the target rendering unit space in the simulation rendering dynamic region is then calculated according to the average rendering key response index and the rendering key response index variance; for example, the key response characteristic parameter of the target rendering unit space in the simulation rendering dynamic region can be obtained by averaging the product of the rendering key response index and the rendering key response index variance.
In this way, the key response score of each rendering unit space can be calculated according to the key response characteristic parameters of the rendering unit space in its matched simulation rendering dynamic regions, and each rendering unit space whose key response score is greater than a set score is determined as a key response rendering unit space. The key rendering unit spaces are thereby located accurately, so that each model resource in the target building three-dimensional model can be rendered under each corresponding rendering unit space of the smart building simulation space after the rendering state sequences of the two kinds of rendering unit spaces are compared by combining the rendering data type information and the simulation rendering stream information of the target building three-dimensional model.
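One concrete reading of the scoring above can be sketched as follows: the characteristic parameter of a space in one dynamic region is taken as the average of each rendering key response index multiplied by the index variance, and the space's score as the mean of its characteristic parameters across matched regions. How the indices per region are obtained, and the exact averaging, are assumptions for this sketch.

```python
from statistics import mean, pvariance

def characteristic_parameter(indices):
    """Key response characteristic parameter of a space in one simulation
    rendering dynamic region: the average of the product of each rendering
    key response index and the index variance (one reading of the example
    given in the text)."""
    variance = pvariance(indices)
    return mean(i * variance for i in indices)

def select_key_spaces(regions_per_space, set_score):
    """Score each space as the mean characteristic parameter over its matched
    dynamic regions and keep the spaces scoring above the set score."""
    scores = {sid: mean(characteristic_parameter(r) for r in regions)
              for sid, regions in regions_per_space.items()}
    return [sid for sid, score in scores.items() if score > set_score]
```

A space whose indices barely vary across a region has near-zero variance, hence a near-zero characteristic parameter, and is unlikely to be selected as a key response rendering unit space.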
In a possible implementation manner, for step S140, the rendering state sequence of each target rendering unit space in the first rendering state sequence may be matched with the rendering state sequence of each matched key response rendering unit space in the second rendering state sequence, so as to obtain a plurality of matching degrees.
It should be noted that each matched key response rendering unit space in the second rendering state sequence matches the arrangement order of the corresponding target rendering unit space in the respective rendering state sequence. The matching degree can be determined according to the coincidence degree between the rendering state sequence of the target rendering unit space and the rendering state sequence of the matched key response rendering unit space.
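The coincidence-based matching degree can be made concrete as follows. Treating each rendering state sequence as a list of discrete states and the coincidence degree as the fraction of positions whose states agree is an illustrative assumption, since the specification does not fix a formula.

```python
def matching_degree(target_states, key_states):
    """Matching degree between the rendering state sequence of a target
    rendering unit space and that of its matched key response rendering
    unit space, computed as the fraction of coinciding positions (assumed)."""
    if not target_states or not key_states:
        return 0.0
    # count positions where the two aligned sequences carry the same state
    coincide = sum(1 for a, b in zip(target_states, key_states) if a == b)
    # normalize by the longer sequence so missing states count against the match
    return coincide / max(len(target_states), len(key_states))
```

For instance, sequences `['idle', 'active', 'active']` and `['idle', 'idle', 'active']` coincide at two of three positions, giving a matching degree of 2/3.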
And then respectively rendering each model resource in the target building three-dimensional model under each corresponding rendering unit space of the intelligent building simulation space according to the matching degrees.
For example, when the matching degree between the rendering state sequence of any one target rendering unit space and the rendering state sequence of the matched key response rendering unit space is greater than the set matching degree, the target rendering unit space and the key response rendering unit space are regarded as one rendering combination unit space.
For another example, when the matching degree between the rendering state sequence of any one target rendering unit space and the rendering state sequence of the matched key response rendering unit space is not greater than the set matching degree, the target rendering unit space and the key response rendering unit space are each used as an independent rendering unit space.
In the process of rendering each model resource in the three-dimensional model of the target building, when the rendering unit space corresponding to the model resource exists in the rendering combination unit space, the rendering of the model resource is synchronously completed in the rendering combination unit space, and when the rendering unit space corresponding to the model resource exists in the independent rendering unit space, the rendering of the model resource is completed in the independent rendering unit space.
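The two branches above (rendering combination unit spaces versus independent rendering unit spaces) can be sketched as a partition step. The pairing of spaces and the threshold comparison follow the text; the tuple representation of a unit space is a hypothetical simplification.

```python
def partition_unit_spaces(pairs, degrees, threshold):
    """Split matched (target, key response) rendering unit space pairs
    into rendering combination unit spaces (matching degree above the
    set threshold, rendered synchronously) and independent rendering
    unit spaces (each rendered separately)."""
    combined, independent = [], []
    for (target, key), degree in zip(pairs, degrees):
        if degree > threshold:
            combined.append((target, key))      # one rendering combination unit space
        else:
            independent.extend([target, key])   # each used as an independent space
    return combined, independent
```

A model resource whose rendering unit space falls in a combination unit space is then rendered synchronously with its partner, while one in an independent space is rendered on its own.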
Therefore, by combining the rendering data type information and the simulated rendering stream information of the target building three-dimensional model, each model resource in the target building three-dimensional model is rendered in its corresponding rendering unit space of the smart building simulation space after the rendering state sequences of the rendering unit spaces are compared. Rapid rendering can thus be performed on the important rendering unit spaces based on the simulation rendering conditions observed during previous simulations, which improves rendering efficiency and reduces the waiting time of users.
It should be particularly noted that after the key response rendering unit space in each smart building simulation space is determined, the second rendering state sequence of the key response rendering unit space may be further obtained according to a similar operation manner of obtaining the first rendering state sequence of the target rendering unit space in the foregoing embodiment, which is not described herein again.
Fig. 3 is a schematic diagram of the functional modules of a smart building three-dimensional model rendering apparatus 300 according to an embodiment of the present invention. The smart building three-dimensional model rendering apparatus 300 may be divided into functional modules according to the method embodiments executed by the building cloud server 100; that is, the following functional modules of the smart building three-dimensional model rendering apparatus 300 may be used to execute the various method embodiments executed by the building cloud server 100. The smart building three-dimensional model rendering apparatus 300 may include a classification module 310, a first determination module 320, a second determination module 330, and a rendering module 340, and the functions of these functional modules are described in detail below.
The classification module 310 is configured to obtain, from each building service terminal, building object entities of the target building three-dimensional model in the building simulation space of each smart building object, classify the building object entities in the building simulation space of each smart building according to a predetermined building function, and generate a building object entity set of each building function. The classifying module 310 may be configured to perform the step S110, and the detailed implementation of the classifying module 310 may refer to the detailed description of the step S110.
The first determining module 320 is configured to determine a target rendering unit space in each smart building simulation space according to rendering data type information of the target building three-dimensional model, and determine, for the target rendering unit space in each smart building simulation space, rendering component information of a first renderable component of the target rendering unit space in a building object entity set of a corresponding building function, respectively, to obtain a first rendering state sequence of the target rendering unit space, where the target rendering unit space is a rendering unit space that is pre-matched with the rendering data type information of the target building three-dimensional model. The first determining module 320 may be configured to perform the step S120, and for a detailed implementation of the first determining module 320, reference may be made to the detailed description of the step S120.
The second determination module 330 is configured to determine a key response rendering unit space in each smart building simulation space according to the simulated rendering stream information of the target building three-dimensional model. For the key response rendering unit space in each smart building simulation space, the module respectively obtains second renderable components of the key response rendering unit space, and determines rendering component information of the second renderable components in the building object entity set of the corresponding building function, so as to obtain a second rendering state sequence of the key response rendering unit space. The key response rendering unit space is a rendering unit space whose rendering key response index in the simulated rendering stream information of the target building three-dimensional model is greater than a set key response index threshold, where the rendering key response index is used to represent the change degree of the rendering unit space per unit time. The second determination module 330 may be configured to perform step S130, and for a detailed implementation of the second determination module 330, reference may be made to the detailed description of step S130.
And the rendering module 340 is configured to render, according to the matching relationship between the first rendering state sequence and the second rendering state sequence, each model resource in the target building three-dimensional model in each corresponding rendering unit space of the smart building simulation space. The rendering module 340 may be configured to perform the step S140, and the detailed implementation of the rendering module 340 may refer to the detailed description of the step S140.
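To show how the four modules hand data to one another, here is a hypothetical Python skeleton of the apparatus. The method signatures and the dictionary-based representation of entities and unit spaces are assumptions, since the specification defines the modules only at the level of steps S110 to S140.

```python
class SmartBuildingModelRenderingApparatus:
    """Skeleton of apparatus 300: classification module 310, first
    determination module 320, second determination module 330, and
    rendering module 340 (method bodies are illustrative placeholders)."""

    def classify(self, entities, building_functions):      # module 310 / step S110
        # group building object entities into one set per building function
        return {f: [e for e in entities if e["function"] == f]
                for f in building_functions}

    def first_sequence(self, target_spaces):               # module 320 / step S120
        # rendering state sequence of the target rendering unit spaces
        return [s["state"] for s in target_spaces]

    def second_sequence(self, spaces, index_threshold):    # module 330 / step S130
        # keep spaces whose rendering key response index exceeds the threshold
        return [s["state"] for s in spaces
                if s["key_response_index"] > index_threshold]

    def render(self, first_seq, second_seq):               # module 340 / step S140
        # pair up coinciding states as the spaces rendered together
        return [a for a, b in zip(first_seq, second_seq) if a == b]
```

The skeleton only mirrors the data flow between the modules; the real apparatus would replace each placeholder body with the corresponding step of the method embodiments.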
Further, fig. 4 is a schematic structural diagram of a building cloud server 100 for performing the method for rendering a three-dimensional model of a building of a smart building according to an embodiment of the present invention. As shown in fig. 4, the building cloud server 100 may include a network interface 110, a machine-readable storage medium 120, a processor 130, and a bus 140. The processor 130 may be one or more, and one processor 130 is illustrated in fig. 4 as an example. The network interface 110, the machine-readable storage medium 120, and the processor 130 may be connected by a bus 140 or otherwise, as exemplified by the connection by the bus 140 in fig. 4.
The machine-readable storage medium 120 is a computer-readable storage medium that can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the smart building three-dimensional model rendering method in the embodiments of the present invention (for example, the classification module 310, the first determination module 320, the second determination module 330, and the rendering module 340 of the smart building three-dimensional model rendering apparatus 300 shown in fig. 3). The processor 130 executes the software programs, instructions, and modules stored in the machine-readable storage medium 120, so as to perform the various functional applications and data processing of the terminal device, that is, to implement the aforementioned smart building three-dimensional model rendering method, which is not described herein again.
The machine-readable storage medium 120 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created according to the use of the terminal, and the like. Further, the machine-readable storage medium 120 may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of example, but not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DR RAM). It should be noted that the memories of the systems and methods described herein are intended to comprise, without being limited to, these and any other suitable types of memory. In some examples, the machine-readable storage medium 120 may further include memory located remotely from the processor 130, which may be connected to the building cloud server 100 over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 130 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 130. The processor 130 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor.
The building cloud server 100 may interact with other devices (e.g., the building service terminal 200) through the network interface 110. Network interface 110 may be a circuit, bus, transceiver, or any other device that may be used to exchange information. Processor 130 may send and receive information using network interface 110.
Finally, it should be noted that: as will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
For the above-mentioned apparatus embodiments, since they basically correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims. It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A three-dimensional building model rendering method for a smart building is applied to a building cloud server, wherein the building cloud server is in communication connection with a plurality of building service terminals, and the method comprises the following steps:
building object entities of the target building three-dimensional model in the intelligent building simulation space of each intelligent building object are obtained from each building service terminal, the building object entities in the intelligent building simulation space of each intelligent building object are classified according to the preset building functions, and a building object entity set of each building function is generated respectively;
determining a target rendering unit space in each intelligent building simulation space according to the rendering data type information of the target building three-dimensional model, and respectively determining rendering component information of a first renderable component of the target rendering unit space in a building object entity set of the corresponding building function aiming at the target rendering unit space in each intelligent building simulation space to obtain a first rendering state sequence of the target rendering unit space, wherein the target rendering unit space is a rendering unit space matched with the rendering data type information of the target building three-dimensional model in advance;
determining key response rendering unit spaces in the building simulation spaces of the intelligent buildings according to the simulation rendering stream information of the three-dimensional model of the target building, respectively acquiring second renderable components of the key response rendering unit spaces aiming at the key response rendering unit spaces in the building simulation spaces of the intelligent buildings, and determining rendering component information of the second renderable components in the building object entity set of the corresponding building function to obtain a second rendering state sequence of the key response rendering unit space, wherein the key response rendering unit space is the rendering unit space of which the rendering key response index in the simulation rendering stream information of the target building three-dimensional model is greater than a set key response index threshold value, and the rendering key response index is used for representing the change degree of the rendering unit space in unit time;
and according to the matching relation between the first rendering state sequence and the second rendering state sequence, rendering each model resource in the target building three-dimensional model under each corresponding rendering unit space of the intelligent building simulation space.
2. The intelligent building three-dimensional model rendering method according to claim 1, wherein the step of classifying the building object entities under each intelligent building simulation space according to the preset building functions and respectively generating the building object entity set of each building function comprises:
acquiring a building object corresponding to each preset building function, forming a building object sequence of each preset building function, and acquiring related building object information of each target building object of each intelligent building simulation space and the building object of the building object sequence;
calculating the density of key building objects of each target building function according to the target building objects and the building object information related to the building objects of the building object sequence, and selecting the building objects from the building object sequence according to the density of the key building objects of each target building function to obtain an initial building object distribution space;
if the total building object distribution density of the initial building object distribution space is greater than the maximum total building object distribution density required by the total building object distribution density, dispersing first key building objects in the initial building object distribution space to a first distribution density, and gathering second key building objects in the initial building object distribution space to the first distribution density, wherein the second key building objects refer to key building objects of which the unit density of the building units in which the key building objects are located is less than a set degree, and the first key building objects refer to key building objects of which the unit density of the building units in which the key building objects are located is not less than the set degree;
calculating the total building object distribution density of the updated initial building object distribution space;
if the total building object distribution density of the initial building object distribution space after the updating is larger than the maximum total building object distribution density, executing the above processing on the initial building object distribution space after the updating again;
if the total building object distribution density of the initial building object distribution space after the updating is less than or equal to the maximum total building object distribution density, taking the initial building object distribution space before the updating as a first updating distribution space, and sequencing the target building functions according to the sequence of the building functions from low priority to high priority to obtain a target building function sequence;
and classifying the building object entities under the building simulation space of each intelligent building according to the target building function sequence, and respectively generating a building object entity set of each building function.
3. The method for rendering the intelligent building three-dimensional model according to claim 1, wherein the rendering data type information includes rendering scene type information, and the step of determining the target rendering unit space in each intelligent building simulation space according to the rendering data type information of the target building three-dimensional model comprises:
and acquiring rendering scene type information of the target building three-dimensional model, and acquiring target rendering unit spaces in the building simulation spaces of the intelligent buildings according to the rendering scene type information and the corresponding relation between each preset rendering scene type information and the target rendering unit space in each building unit.
4. The method for rendering the intelligent building three-dimensional model according to claim 1, wherein the step of determining rendering component information of a first renderable component of the target rendering unit space in the building object entity set of the corresponding building function for the target rendering unit space in the intelligent building simulation space respectively to obtain a first rendering state sequence of the target rendering unit space comprises:
aiming at a target rendering unit space in each intelligent building simulation space, respectively obtaining a geometry shader matched with the target rendering unit space, and obtaining a model rendering unit corresponding to the geometry shader when the geometry shader continuously colors a model rendering component corresponding to one model rendering unit in the intelligent building simulation space in a preset time period as a target model rendering unit;
judging whether the rendering and coloring features of the target model rendering unit are matched with the rendering and coloring features of a preset decision node of a state decision unit or not, if the rendering and coloring features are not matched, adjusting the rendering and coloring features of the target model rendering unit to a model rendering unit matched with the rendering and coloring features of the decision node of the state decision unit, and inputting the rendering and coloring features to the state decision unit;
calculating an input model rendering unit by adopting the state decision unit, acquiring rendering component information corresponding to the input model rendering unit, tracking each rendering change control of the target rendering unit space in the target model rendering unit, and acquiring a control tracking special effect of each rendering change control in the target model rendering unit;
determining a rendering component with a rendering change control key response index larger than a preset response index in rendering component information corresponding to the input model rendering unit as a first renderable component, and converting a control special effect vector of each rendering change control in the input model rendering unit to obtain a control tracking special effect of each rendering change control in the input model rendering unit;
determining a first control tracking special effect set of the whole model rendering unit according to the control tracking special effect of each rendering change control in the target model rendering unit, and determining a second control tracking special effect set of the first renderable component according to the control tracking special effect of each rendering change control in the first renderable component;
determining a control tracking special effect set of the first renderable component according to the first control tracking special effect set, the second control tracking special effect set and a preset proportion, and determining rendering component information of the first renderable component of the target rendering unit space in a building object entity set of the building function corresponding to the first renderable component of the target rendering unit space according to the control tracking special effect of each rendering change control in the target model rendering unit and the control tracking special effect set to obtain a first rendering state sequence of the target rendering unit space.
5. The intelligent building three-dimensional model rendering method according to claim 4, wherein the step of determining rendering component information of a first renderable component of the target rendering unit space in the building object entity set of the corresponding building function according to the control tracking special effect of each rendering change control in the target model rendering unit and the control tracking special effect set to obtain a first rendering state sequence of the target rendering unit space comprises:
determining a control tracking special effect of each rendering change control in the target model rendering unit and a matching special effect of the control tracking special effect set, acquiring a first key control special effect of each rendering change control in the target model rendering unit according to the matching special effect, and acquiring a key control special effect of each rendering change control in the target model rendering unit according to the first key control special effect of each rendering change control in the target model rendering unit and the rendering component information;
or calculating a matching special effect of the control tracking special effect of each rendering change control in the target model rendering unit and the control tracking special effect set to obtain a first key control special effect of each rendering change control in the target model rendering unit, and calculating the first key control special effect of each rendering change control in the target model rendering unit according to a preset rendering interval to obtain the second key control special effect of each rendering change control in the target model rendering unit, wherein a difference of a special effect rendering range between the second key control special effect and the first key control special effect is smaller than the preset rendering interval, acquiring a key control special effect of each rendering change control in the target model rendering unit according to the second key control special effect of each rendering change control in the target model rendering unit and the rendering component information;
and determining rendering component information of the first renderable component in a building object entity set of the corresponding building function according to the key control special effect of each rendering change control in the target model rendering unit so as to obtain a first rendering state sequence of the target rendering unit space.
6. The method for rendering the intelligent building three-dimensional model as claimed in any one of claims 1-5, wherein the step of determining the key response rendering unit space in the simulation space of each intelligent building according to the simulation rendering stream information of the target building three-dimensional model comprises:
acquiring simulation rendering stream information of the target building three-dimensional model, wherein the simulation rendering stream information comprises a plurality of simulation rendering dynamic information respectively corresponding to a plurality of rendering unit spaces;
when determining that a plurality of simulation rendering dynamic information corresponding to any one rendering unit space all meet a preset simulation rendering dynamic condition, determining an initial simulation rendering area of a first simulation rendering dynamic area matched with the preset simulation rendering dynamic condition according to the simulation rendering dynamic information of the rendering unit space and the range size of the simulation rendering dynamic area, wherein the preset simulation rendering dynamic condition comprises: the simulation rendering dynamic area is larger than the set range;
determining a plurality of simulated rendering dynamic areas matched with the preset simulated rendering dynamic conditions to correspond to the initial simulated rendering area of the rendering unit space according to the simulated rendering dynamic information of the rendering unit space, the range size of the simulated rendering dynamic area, the initial simulated rendering area of the first simulated rendering dynamic area and the density of the preset simulated rendering dynamic area;
if the rendering component is matched with the initial simulation rendering area of the function level change interval and is the first rendering component of the function level change interval, acquiring the rendering unit space matched with the previous simulation rendering dynamic area adjacent to the function level change interval as a screening rendering unit space, and identifying, in the rendering component, one rendering unit space other than the screening rendering unit space as the target rendering unit space matched with the function level change interval;
if the rendering component is not the first rendering component of the function level change interval, acquiring the target rendering unit space matched with the function level change interval, identifying the target rendering unit space in the rendering component, and identifying at least one active simulated rendering object of the target rendering unit space, wherein the rendering unit space corresponds to a plurality of simulated rendering dynamic areas;
for each simulated rendering dynamic area, calculating, according to rendering scene information of the at least one active simulated rendering object of the target rendering unit space in the plurality of rendering components, a rendering dynamic distance between any two adjacent rendering components of the at least one active simulated rendering object of the target rendering unit space in the simulated rendering dynamic area, and a scene feature of the at least one active simulated rendering object of the target rendering unit space in the simulated rendering dynamic area;
counting the continuous rendering time of the simulated rendering dynamic area, determining an average rendering key response index and a rendering key response index variance of the target rendering unit space in the simulated rendering dynamic area according to the rendering dynamic distance and the scene feature, and calculating a key response characteristic parameter of the target rendering unit space in the simulated rendering dynamic area from the average rendering key response index and the rendering key response index variance;
and calculating a key response score for each rendering unit space according to the key response characteristic parameters of that rendering unit space in its matched simulated rendering dynamic areas, and determining each rendering unit space whose key response score is greater than a set score as a key response rendering unit space.
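The scoring steps recited above can be illustrated with a minimal sketch. The claim does not define how the key response index, the characteristic parameter, or the final score is computed, so every formula and name below (`key_response_score`, the product of distance and scene feature, the variance penalty, the `set_score` threshold) is a hypothetical illustration, not the patented method:

```python
import statistics

def key_response_score(distances, features, weight=0.5):
    """Hypothetical score for one rendering unit space in one simulated
    rendering dynamic area. A per-component index is assumed to be the
    product of rendering dynamic distance and scene feature (the claim
    does not specify); the score combines the mean and variance of the
    indices, penalizing high variance."""
    indices = [d * f for d, f in zip(distances, features)]
    mean = statistics.fmean(indices)           # average rendering key response index
    var = statistics.pvariance(indices)        # rendering key response index variance
    return mean / (1.0 + weight * var)         # key response characteristic parameter

def key_response_spaces(scores, set_score):
    """Keep only rendering unit spaces whose score exceeds the set score."""
    return [space for space, score in scores.items() if score > set_score]
```

For example, `key_response_spaces({"lobby": 0.9, "corridor": 0.3}, 0.5)` would select only `"lobby"` as a key response rendering unit space under this assumed scoring.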
7. The method according to claim 1, wherein the step of rendering each model resource of the target building three-dimensional model in each corresponding rendering unit space of the smart building simulation space according to the matching relationship between the first rendering state sequence and the second rendering state sequence comprises:
matching the rendering state sequence of each target rendering unit space in the first rendering state sequence with the rendering state sequence of each matched key response rendering unit space in the second rendering state sequence to obtain a plurality of matching degrees, wherein each matched key response rendering unit space in the second rendering state sequence matches the arrangement order of the corresponding target rendering unit space in the respective rendering state sequences, and each matching degree is determined according to the coincidence degree between the rendering state sequence of a target rendering unit space and the rendering state sequence of the matched key response rendering unit space;
and rendering each model resource in the target building three-dimensional model in each corresponding rendering unit space of the smart building simulation space according to the plurality of matching degrees.
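The matching step in claim 7 can be sketched as follows. The claim only says the matching degree is determined by the "coincidence degree" between two rendering state sequences; the fraction-of-equal-positions definition below, and all names (`matching_degree`, the example sequences), are hypothetical:

```python
def matching_degree(seq_a, seq_b):
    """Hypothetical coincidence degree between two rendering state
    sequences: the fraction of aligned positions holding the same
    rendering state, normalized by the longer sequence."""
    if not seq_a or not seq_b:
        return 0.0
    same = sum(1 for a, b in zip(seq_a, seq_b) if a == b)
    return same / max(len(seq_a), len(seq_b))

# One matching degree per (target, key response) space pair, paired in
# arrangement order as the claim requires.
first_sequence = {"s1": [1, 0, 1], "s2": [0, 0, 1]}   # target spaces
second_sequence = {"s1": [1, 0, 0], "s2": [0, 0, 1]}  # key response spaces
degrees = {k: matching_degree(first_sequence[k], second_sequence[k])
           for k in first_sequence}
```

Under this assumed definition, `"s2"` matches perfectly (degree 1.0) while `"s1"` coincides in two of three positions.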
8. The smart building three-dimensional model rendering method according to claim 7, wherein the step of rendering each model resource in the target building three-dimensional model in each corresponding rendering unit space of the smart building simulation space according to the plurality of matching degrees comprises:
when the matching degree between the rendering state sequence of any target rendering unit space and the rendering state sequence of the matched key response rendering unit space is greater than a set matching degree, taking the target rendering unit space and the key response rendering unit space together as a rendering combination unit space;
when the matching degree between the rendering state sequence of any target rendering unit space and the rendering state sequence of the matched key response rendering unit space is not greater than the set matching degree, taking the target rendering unit space and the key response rendering unit space each as an independent rendering unit space;
in the process of rendering each model resource in the target building three-dimensional model, when the rendering unit space corresponding to a model resource belongs to a rendering combination unit space, completing the rendering of the model resource synchronously in the rendering combination unit space, and when the rendering unit space corresponding to a model resource is an independent rendering unit space, completing the rendering of the model resource in the independent rendering unit space.
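The threshold partition in claim 8 reduces to a simple split of space pairs into combined versus independent rendering. The function name and the example threshold below are hypothetical; the claim fixes only the "greater than the set matching degree" comparison:

```python
def partition_unit_spaces(degrees, set_matching_degree):
    """Split (target, key response) space pairs per claim 8: pairs whose
    matching degree exceeds the set matching degree form rendering
    combination unit spaces (rendered synchronously); the rest become
    independent rendering unit spaces (rendered separately)."""
    combined, independent = [], []
    for pair, degree in degrees.items():
        if degree > set_matching_degree:
            combined.append(pair)      # synchronous rendering
        else:
            independent.append(pair)   # separate rendering
    return combined, independent
```

For instance, with matching degrees `{"s1": 0.9, "s2": 0.4}` and a set matching degree of 0.5, `"s1"` would be rendered as a combination unit space and `"s2"` independently.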
9. A building cloud server, characterized in that the building cloud server comprises a processor, a machine-readable storage medium, and a network interface, wherein the machine-readable storage medium, the network interface, and the processor are connected through a bus system, the network interface is configured to be communicatively connected with at least one building service terminal, the machine-readable storage medium is configured to store programs, instructions, or code, and the processor is configured to execute the programs, instructions, or code in the machine-readable storage medium so as to perform the smart building three-dimensional model rendering method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium is configured with programs, instructions, or code which, when executed, implement the smart building three-dimensional model rendering method according to any one of claims 1 to 8.
CN202010262008.0A 2020-04-06 2020-04-06 Smart building three-dimensional model rendering method and building cloud server Active CN111476886B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202011114192.0A CN112288866A (en) 2020-04-06 2020-04-06 Intelligent building three-dimensional model rendering method and building system
CN202010262008.0A CN111476886B (en) 2020-04-06 2020-04-06 Smart building three-dimensional model rendering method and building cloud server
CN202011114200.1A CN112288867A (en) 2020-04-06 2020-04-06 Smart building three-dimensional model rendering method and smart building system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010262008.0A CN111476886B (en) 2020-04-06 2020-04-06 Smart building three-dimensional model rendering method and building cloud server

Related Child Applications (2)

Application Number Title Priority Date Filing Date
CN202011114192.0A Division CN112288866A (en) 2020-04-06 2020-04-06 Intelligent building three-dimensional model rendering method and building system
CN202011114200.1A Division CN112288867A (en) 2020-04-06 2020-04-06 Smart building three-dimensional model rendering method and smart building system

Publications (2)

Publication Number Publication Date
CN111476886A CN111476886A (en) 2020-07-31
CN111476886B true CN111476886B (en) 2020-12-04

Family

ID=71750591

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202011114192.0A Withdrawn CN112288866A (en) 2020-04-06 2020-04-06 Intelligent building three-dimensional model rendering method and building system
CN202011114200.1A Withdrawn CN112288867A (en) 2020-04-06 2020-04-06 Smart building three-dimensional model rendering method and smart building system
CN202010262008.0A Active CN111476886B (en) 2020-04-06 2020-04-06 Smart building three-dimensional model rendering method and building cloud server

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN202011114192.0A Withdrawn CN112288866A (en) 2020-04-06 2020-04-06 Intelligent building three-dimensional model rendering method and building system
CN202011114200.1A Withdrawn CN112288867A (en) 2020-04-06 2020-04-06 Smart building three-dimensional model rendering method and smart building system

Country Status (1)

Country Link
CN (3) CN112288866A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132942B (en) * 2020-09-30 2022-03-18 深圳星寻科技有限公司 Three-dimensional scene roaming real-time rendering method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103065355A (en) * 2012-12-26 2013-04-24 安科智慧城市技术(中国)有限公司 Method and device of achieving three-dimensional modeling of wisdom building
CN104573231A (en) * 2015-01-06 2015-04-29 上海同筑信息科技有限公司 BIM based smart building system and method
CN105931168A (en) * 2016-04-15 2016-09-07 广州葵翼信息科技有限公司 Smart city service configuration based on information grid service
CN106203784A (en) * 2016-06-29 2016-12-07 江苏三棱智慧物联发展股份有限公司 A kind of wisdom building system based on BIM and management method thereof

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103049614B (en) * 2012-12-26 2016-02-17 安科智慧城市技术(中国)有限公司 A kind of method, device controlling acoustic wave movement track in wisdom building
CN103606184B (en) * 2013-11-21 2016-05-25 武大吉奥信息技术有限公司 A kind of device based on the integrated vector render engine of two and three dimensions
KR102479360B1 (en) * 2016-06-15 2022-12-20 삼성전자 주식회사 Method and apparatus for providing augmented reality service

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103065355A (en) * 2012-12-26 2013-04-24 安科智慧城市技术(中国)有限公司 Method and device of achieving three-dimensional modeling of wisdom building
CN104573231A (en) * 2015-01-06 2015-04-29 上海同筑信息科技有限公司 BIM based smart building system and method
CN105931168A (en) * 2016-04-15 2016-09-07 广州葵翼信息科技有限公司 Smart city service configuration based on information grid service
CN106203784A (en) * 2016-06-29 2016-12-07 江苏三棱智慧物联发展股份有限公司 A kind of wisdom building system based on BIM and management method thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Fog computing framework for location-based energy management in smart buildings;Abdelfettah Maatoug 等;《Multiagent and Grid Systems》;20190325;第15卷(第1期);39-56 *
Research on Smart Construction Strategy Based on BIM; Wang Lijia; China Master's Theses Full-text Database, Engineering Science and Technology II; 20140815 (No. 8); C038-98 *

Also Published As

Publication number Publication date
CN111476886A (en) 2020-07-31
CN112288866A (en) 2021-01-29
CN112288867A (en) 2021-01-29

Similar Documents

Publication Publication Date Title
CN111352670B (en) Virtual reality scene loading method and device, virtual reality system and equipment
CN111723226B (en) Information management method based on big data and Internet and artificial intelligence cloud server
CN104391879B (en) The method and device of hierarchical clustering
CN111476875B (en) Smart building Internet of things object simulation method and building cloud server
CN111310057B (en) Online learning mining method and device, online learning system and server
CN111723227B (en) Data analysis method based on artificial intelligence and Internet and cloud computing service platform
CN111970539B (en) Data coding method based on deep learning and cloud computing service and big data platform
CN111708931B (en) Big data acquisition method based on mobile internet and artificial intelligence cloud service platform
CN111476886B (en) Smart building three-dimensional model rendering method and building cloud server
CN115309985A (en) Fairness evaluation method and AI model selection method of recommendation algorithm
CN112069325B (en) Big data processing method based on block chain offline payment and cloud service pushing platform
CN113723607A (en) Training method, device and equipment of space-time data processing model and storage medium
CN112541556A (en) Model construction optimization method, device, medium, and computer program product
CN112465567B (en) Clothing style fashion prediction system and method
CN112911339A (en) Media data processing method and system based on remote interaction and cloud computing
CN112905792A (en) Text clustering method, device and equipment based on non-text scene and storage medium
CN112055076A (en) Multifunctional intelligent monitoring method and device based on Internet and server
CN112200170B (en) Image recognition method and device, electronic equipment and computer readable medium
CN113268646B (en) Abnormal user data determination method, device, server and storage medium
CN113115301B (en) Determination method, device and readable storage medium
CN115017166A (en) Vertical data construction method and device, electronic equipment and storage medium
CN115984604A (en) Target detection method and device, computer equipment and readable storage medium
CN116342214A (en) Service pushing and calibration parameter obtaining method and device
CN115439008A (en) Account behavior target category concentration degree evaluation method and system
CN117056663A (en) Data processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: School of information engineering and automation, Kunming University of technology, 253 Xuefu Road, Wuhua District, Kunming City, Yunnan Province

Applicant after: Zhang Zhiyun

Address before: School of information engineering, Huaqiao University, 269 Chenghua North Road, Fengze District, Quanzhou City, Fujian Province

Applicant before: Zhang Zhiyun

TA01 Transfer of patent application right

Effective date of registration: 20201116

Address after: No. 617, Lijiagou village, Heshun Town, Linzhou City, Anyang City, Henan Province

Applicant after: Wang Rui

Address before: School of information engineering and automation, Kunming University of technology, 253 Xuefu Road, Wuhua District, Kunming City, Yunnan Province

Applicant before: Zhang Zhiyun

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231102

Address after: Room 606, 6th floor, block a, Yonghe Longzihu Plaza, 197 Ping'an Avenue, Zhengdong New District, Zhengzhou City, Henan Province, 450000

Patentee after: HENAN YUNTUO INTELLIGENT TECHNOLOGY Co.,Ltd.

Address before: No. 617, Lijiagou village, Heshun Town, Linzhou City, Anyang City, Henan Province 456564

Patentee before: Wang Rui