CN114859744B - Intelligent application visual control method and system based on big data - Google Patents


Info

Publication number
CN114859744B
CN114859744B · CN202210489768.4A
Authority
CN
China
Prior art keywords
visual
data
external environment
control
modeling
Prior art date
Legal status
Active
Application number
CN202210489768.4A
Other languages
Chinese (zh)
Other versions
CN114859744A (en)
Inventor
景伟 (Jing Wei)
Current Assignee
Inner Mongolia Yunke Data Service Co ltd
Original Assignee
Inner Mongolia Yunke Data Service Co ltd
Priority date
Filing date
Publication date
Application filed by Inner Mongolia Yunke Data Service Co ltd filed Critical Inner Mongolia Yunke Data Service Co ltd
Priority to CN202210489768.4A priority Critical patent/CN114859744B/en
Publication of CN114859744A publication Critical patent/CN114859744A/en
Application granted granted Critical
Publication of CN114859744B publication Critical patent/CN114859744B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00Systems controlled by a computer
    • G05B15/02Systems controlled by a computer electric
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/418Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/20Pc systems
    • G05B2219/26Pc applications
    • G05B2219/2642Domotique, domestic, home control, automation, smart house
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Computer Graphics (AREA)
  • Theoretical Computer Science (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • Processing Or Creating Images (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an intelligent application visual control method and system based on big data, relating to the field of visual control. The method comprises the following steps: acquiring first modeling data of a first visual object through a data fitting device; obtaining a first relevance by performing environmental impact analysis on information of a first external environment of the first visual object; obtaining second modeling data if the first relevance is greater than a preset relevance; inputting the first modeling data and the second modeling data into a three-dimensional scene model to output a visual three-dimensional scene; obtaining visual controls from the visual three-dimensional scene; and building a visual control layer from the visual controls to perform visual control. This solves the technical problems that existing visual modeling is inaccurate and adaptive feedback according to the positioned environment is difficult, and achieves the technical effects of improving control accuracy and user comfort through environment positioning and association analysis of the intelligent home.

Description

Intelligent application visual control method and system based on big data
Technical Field
The invention relates to the field of visual control, in particular to an intelligent application visual control method and system based on big data.
Background
With the progress of science and technology, people's requirements for quality of life and living environment have continuously risen, driving the intelligent home into daily life and work. The safety and comfort modes of the intelligent home bring a better life experience, and demand for its diversified functions grows accordingly. As big data technology has gradually matured, visual control of the home has become a current research hotspot; however, visual control of the intelligent home is still imperfect and not highly accurate, which affects the quality of the intelligent home.
At present, visual modeling of the intelligent home in the prior art is not accurate enough, and adaptive feedback according to the positioned environment is difficult, which affects the visual accuracy and the user experience comfort of the intelligent home.
Disclosure of Invention
In view of the defects in the prior art, the purpose of the application is to provide an intelligent application visual control method and system based on big data, so as to solve the technical problems in the prior art that visual modeling of the intelligent home is inaccurate and adaptive feedback according to the positioned environment is difficult, which affects the visual accuracy and the user experience comfort of the intelligent home, and to achieve the technical effects of performing environment positioning and association analysis on the intelligent home, performing intelligent control through a visual control layer, and improving accuracy and user comfort.
In one aspect, the present application provides a big data based intelligent application visualization control method, where the method is applied to a big data based intelligent application visualization control system, the system is communicatively connected to a data fitting device, and the method includes: acquiring data of a first visualized object according to the data fitting device to obtain first modeling data; obtaining information of a first external environment of the first visual object; obtaining a first relevance through environmental impact analysis on the information of the first external environment, wherein the first relevance is the relevance impact degree of the first external environment on the first visual object; if the first relevance is greater than the preset relevance, second modeling data are obtained; inputting the first modeling data and the second modeling data into a three-dimensional scene model, performing three-dimensional modeling on the first visualized object, and outputting a visualized three-dimensional scene; the visual control is obtained by carrying out workflow analysis on the visual three-dimensional scene; and building a visual control layer according to the visual control, and performing visual control according to the visual control layer.
On the other hand, the application also provides an intelligent application visualization control system based on big data, which comprises the following steps: the first obtaining unit is used for collecting data of a first visualized object according to the data fitting device to obtain first modeling data; a second obtaining unit configured to obtain information of a first external environment of the first visualized object; the first analysis unit is used for obtaining a first relevance through environmental impact analysis on the information of the first external environment, wherein the first relevance is the relevance impact degree of the first external environment on the first visual object; the third obtaining unit is used for obtaining second modeling data if the first relevance is greater than a preset relevance; the first input unit is used for inputting the first modeling data and the second modeling data into a three-dimensional scene model, carrying out three-dimensional modeling on the first visualized object and outputting a visualized three-dimensional scene; the fourth obtaining unit is used for obtaining a visual control by carrying out workflow analysis on the visual three-dimensional scene; the first control unit is used for building a visual control layer according to the visual control, and performing visual control according to the visual control layer.
In a third aspect, the present application provides an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of any of the methods described above when executing the program.
In a fourth aspect, the present application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of any of the methods described above.
One or more technical solutions provided in the present application have at least the following technical effects or advantages:
the method comprises: acquiring data of a first visual object through a data fitting device to obtain first modeling data; collecting external environment information of the first visual object to generate information of a first external environment; performing modeling-association influence analysis of the first external environment on the visual object and outputting a first relevance; judging whether the first relevance is greater than a preset relevance, and if so, obtaining second modeling data according to the information of the first external environment; inputting the first modeling data and the second modeling data into a three-dimensional scene model, performing three-dimensional modeling on the first visual object, and outputting a visual three-dimensional scene; performing workflow analysis on the controllable devices in the generated visual three-dimensional scene to obtain visual controls; and building a visual control layer from the visual controls for visual control. In this way, intelligent control is performed on the visual control layer through environment positioning and association analysis of the intelligent home, improving control accuracy and user comfort.
The foregoing is only an overview of the technical solutions of the present application. In order to make the technical means of the present application clearer and implementable according to the content of the specification, and to make the above and other objects, features and advantages of the present application easier to understand, a detailed description of the present application is given below.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
fig. 1 is a schematic flow chart of an intelligent application visual control method based on big data in an embodiment of the application;
FIG. 2 is a schematic flow chart of environmental impact analysis of an intelligent application visual control method based on big data according to an embodiment of the present application;
fig. 3 is a schematic flow chart of external environment feedback control of an intelligent application visual control method based on big data in the embodiment of the application;
fig. 4 is a schematic structural diagram of an intelligent application visual control system based on big data according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an exemplary electronic device according to an embodiment of the present application.
Reference numerals illustrate: the device comprises a first obtaining unit 11, a second obtaining unit 12, a first analyzing unit 13, a third obtaining unit 14, a first input unit 15, a fourth obtaining unit 16, a first control unit 17, a bus 300, a receiver 301, a processor 302, a transmitter 303, a memory 304, and a bus interface 305.
Detailed Description
The intelligent application visual control method and system based on big data according to the present application solve the technical problems in the prior art that visual modeling of the intelligent home is inaccurate, adaptive feedback according to the positioned environment is difficult, and the visual accuracy and user experience comfort of the intelligent home are therefore affected, and achieve the technical effects of performing environment positioning and association analysis on the intelligent home, performing intelligent control through a visual control layer, and improving accuracy and user comfort.
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
The acquisition, storage, use and processing of data in this technical solution comply with the relevant provisions of national laws and regulations.
As an important implementation of family informatization, the intelligent home has become an important component of social informatization. The maturity of big data technology provides data acquisition and management capabilities for further realizing the intelligent home: by deeply analyzing intelligent home data, the visual control mode becomes more convenient, safe, and efficient, and visual management makes intelligent home control more intuitive.
However, the existing spatial model of the intelligent home is not perfect enough and cannot combine environment positioning with modeling; visual modeling of the intelligent home is therefore not accurate enough, and adaptive feedback according to the positioned environment is difficult, which affects the visual accuracy and the user experience comfort of the intelligent home.
Aiming at the technical problems, the technical scheme provided by the application has the following overall thought:
the application provides an intelligent application visual control method and system based on big data, which are used for acquiring data of a first visual object according to a data fitting device, further acquiring first modeling data, acquiring external environment information of the first visual object on the other hand, generating information of the first external environment, carrying out visual object modeling association influence analysis on the acquired first external environment, outputting first association, further judging whether the first association is larger than preset association, if the first association is larger than the preset association, acquiring second modeling data according to the information of the first external environment, inputting the first modeling data and the second modeling data into a three-dimensional scene model, carrying out three-dimensional modeling on the first visual object, outputting visual three-dimensional scene, carrying out workflow analysis on the generated visual equipment in the visual three-dimensional scene, further constructing a visual control layer according to the visual control, realizing the visual control layer, and carrying out intelligent home positioning and environment comfort analysis in an intelligent control layer, thereby achieving the accurate positioning effect and the intelligent home positioning and the intelligent control method.
In order to better understand the above technical solutions, the following detailed description will refer to the accompanying drawings and specific embodiments.
Example 1
As shown in fig. 1, an embodiment of the present application provides a big data based intelligent application visualization control method, where the method is applied to a big data based intelligent application visualization control system, where the system is communicatively connected to a data fitting device, and the method includes:
step S100: acquiring data of a first visualized object according to the data fitting device to obtain first modeling data;
specifically, the intelligent home is taken as an important implementation mode of family informatization, becomes an important component of social informatization development, and the maturity of big data technology provides a processing mode of data acquisition and management for further realizing the intelligent home, the intelligent home data is deeply analyzed, the visual control mode can be more convenient, safe and efficient, and the visual management can lead the intelligent home control to be more visual. However, the existing space model based on the intelligent home is not perfect enough, the combination of environment positioning and modeling cannot be performed, the visual modeling of the intelligent home is not accurate enough, and the adaptive feedback is difficult to perform according to the positioning environment, so that the technical problems of the visual accuracy and the user experience comfort of the intelligent home are affected.
Further, the provided big data based intelligent application visual control method achieves accurate visual control. A first visual object is obtained, where the first visual object is any home space under visual control. Data acquisition is performed on the first visual object through the data fitting device, which supports multiple acquisition modes, including geometric data acquisition, image acquisition and video acquisition, selected according to the acquisition requirements, thereby ensuring flexible data acquisition for the first visual object. The data acquired and output by the data fitting device are used as the first modeling data for modeling the home space scene.
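The multi-mode acquisition described above can be sketched as a small data structure plus a collection routine. This is an illustrative sketch only: the patent names geometric, image, and video acquisition modes but specifies no concrete interface, so the class name, fields, and device methods below are all assumptions.

```python
# Hypothetical sketch: bundling the data fitting device's acquisition modes
# into one "first modeling data" record. All names are assumptions.
from dataclasses import dataclass, field

@dataclass
class ModelingData:
    geometry: list = field(default_factory=list)   # geometric measurements
    images: list = field(default_factory=list)     # captured still images
    video: list = field(default_factory=list)      # captured video clips

def acquire_first_modeling_data(fitting_device) -> ModelingData:
    """Collect data for the first visual object via the data fitting device."""
    return ModelingData(
        geometry=fitting_device.scan_geometry(),
        images=fitting_device.capture_images(),
        video=fitting_device.capture_video(),
    )
```

Any object exposing the three assumed methods can stand in for the fitting device, which keeps the acquisition modes flexible as the text requires.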
Step S200: obtaining information of a first external environment of the first visual object;
step S300: obtaining a first relevance through environmental impact analysis on the information of the first external environment, wherein the first relevance is the relevance impact degree of the first external environment on the first visual object;
further, as shown in fig. 2, the step S300 of the embodiment of the present application further includes:
step S310: obtaining the spatial photosensitivity, the spatial temperature sensitivity and the air exchange property of the first visual object;
step S320: configuring a first weight, a second weight and a third weight based on the spatial photosensitivity, the spatial temperature sensitivity and the air exchange property;
step S330: according to the first weight, the second weight and the third weight, performing weight calculation on the space photosensitivity, the space temperature sensitivity and the air exchange performance, and outputting a first calculation result;
step S340: and obtaining the first relevance according to the first calculation result.
Specifically, information of a first external environment of the first visual object is obtained, where the first external environment is the external environment of the area in which the first visual object is located. When that area differs, the geographic environment changes accordingly; to ensure accurate modeling of the home environment, information is collected about the external environment of the first visual object to obtain the information of the first external environment. For example, the influence of the external environment differs between areas such as residential districts, suburbs and traffic loops. To analyze the information of the first external environment accurately, a first relevance representing the degree of influence of the first external environment on the first visual object is obtained through environmental impact analysis.
Further, the environmental impact analysis of the information of the first external environment proceeds as follows: the regional characteristics of the first visual object are first obtained, so that analysis can be performed according to the influence generated by those characteristics, achieving accurate positioning of the external environment, improving the modeling accuracy of the home scene, and further improving the accuracy of visual control.
Specifically, the spatial photosensitivity, spatial temperature sensitivity and air exchange property of the first visual object are analyzed, weights are assigned, and a weighted calculation is performed; the calculation result is taken as the first relevance. The spatial photosensitivity analyzes the light transmittance between the spatial structure and the external environment, such as window-glass size and floor plan; the spatial temperature sensitivity analyzes the thermal insulation between the spatial building materials and the external environment, for example different floor heights or the insulation performance of the materials; the air exchange property analyzes the ventilation between the spatial geographic position and the external environment, such as greening of the surroundings, prevailing wind direction, and whether chemical plants are present. Accordingly, a first weight, a second weight and a third weight corresponding one-to-one to the spatial photosensitivity, the spatial temperature sensitivity and the air exchange property are configured, and the first relevance is output, ensuring that the first relevance reflects comprehensive factors and satisfies the data analysis required for external environment positioning.
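The weighted calculation of steps S310 to S340 can be sketched as follows. This is a minimal sketch under stated assumptions: the patent fixes neither units nor a combination rule, so the 0-1 attribute scale and the normalization by the weight sum are illustrative choices, not the patented formula.

```python
# Hypothetical weighted-sum sketch of steps S330-S340. Attribute scales
# (0-1) and normalization by the weight total are assumptions.
def first_relevance(photosensitivity: float,
                    temperature_sensitivity: float,
                    air_exchange: float,
                    weights: tuple) -> float:
    """Weighted combination of the three environmental attributes."""
    w1, w2, w3 = weights  # first, second, third weight (one-to-one, step S320)
    total = w1 + w2 + w3
    if total == 0:
        raise ValueError("weights must not all be zero")
    # Normalize so the relevance stays on the same 0-1 scale as the inputs.
    return (w1 * photosensitivity
            + w2 * temperature_sensitivity
            + w3 * air_exchange) / total

relevance = first_relevance(0.8, 0.5, 0.3, weights=(0.5, 0.3, 0.2))
```

A higher weight on, say, air exchange then makes ventilation-related features of the surroundings dominate the first relevance, which matches the one-to-one weight assignment described above.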
Step S400: if the first relevance is greater than the preset relevance, second modeling data are obtained;
step S500: inputting the first modeling data and the second modeling data into a three-dimensional scene model, performing three-dimensional modeling on the first visualized object, and outputting a visualized three-dimensional scene;
specifically, since the first relevance is the degree of influence of the first external environment on the first visualized object between spatial photosensitivity, spatial temperature sensitivity and air exchange property, respectively, when the external environment of the first visualized object has no obvious influence characteristic, the modeling influence of the external environment on the first visualized object is smaller, and when the external environment of the first visualized object has a obvious influence characteristic, the modeling influence of the external environment on the first visualized object is larger, that is, if the first relevance is greater than a preset relevance, second modeling data is obtained, wherein the second modeling data is external environment scene modeling data generated based on sample data acquisition of information of the first external environment.
Further, the first modeling data and the second modeling data are input into a three-dimensional scene model, three-dimensional modeling is conducted on the first visual object, namely, the first visual object is modeled by analyzing home scene modeling data of the first visual object and external environment scene modeling data of the first external environment, so that the first visual object can be positioned in combination with the external environment, the first visual object is further accurately modeled in a mode of analyzing influence correlation of the external environment, the visual three-dimensional scene is output, and in order to guarantee accuracy of the visual three-dimensional scene, further inspection can be conducted through a scene calibration correction mode, and the effect of improving accuracy of visual modeling is achieved.
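The threshold logic of steps S400 and S500 can be sketched as a simple gate on the scene-model inputs. The preset value, the dict-shaped modeling data, and the callback for collecting second modeling data are all hypothetical stand-ins; the patent does not prescribe concrete values or containers.

```python
# Hedged sketch of steps S400-S500: include external-environment modeling
# data only when the first relevance exceeds the preset relevance.
# PRESET_RELEVANCE and the dict-based inputs are illustrative assumptions.
PRESET_RELEVANCE = 0.5

def build_scene_inputs(first_modeling_data: dict,
                       relevance: float,
                       collect_second_data) -> dict:
    """Assemble the inputs handed to the three-dimensional scene model."""
    inputs = {"interior": first_modeling_data}
    if relevance > PRESET_RELEVANCE:
        # External environment matters: also acquire its scene modeling data.
        inputs["exterior"] = collect_second_data()
    return inputs

scene = build_scene_inputs({"rooms": 3}, relevance=0.7,
                           collect_second_data=lambda: {"district": "suburb"})
```

Below the threshold, the scene model receives interior data only, which mirrors the text's point that a weak external influence need not be modeled.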
Step S600: the visual control is obtained by carrying out workflow analysis on the visual three-dimensional scene;
step S700: and building a visual control layer according to the visual control, and performing visual control according to the visual control layer.
Further, as shown in fig. 3, the system includes a visual control feedback module, and the method in the embodiment of the present application further includes step S800:
step S810: according to the data fitting device, acquiring real-time data of the first external environment to obtain external environment real-time data;
step S820: inputting the external environment real-time data into the visual control feedback module, and outputting first feedback control data according to the visual control feedback module, wherein the first feedback control data is data for controlling the first visual object according to the first external environment;
step S830: and inputting the first feedback control data into the visual control layer for control.
Specifically, after the visual three-dimensional scene has been modeled successfully, the corresponding visual control modules are refined. Data on the controllable electronic devices in the visual three-dimensional scene are entered to generate home application controls: the usage of all controllable devices in the home scene is analyzed, different logic controls are generated according to their control types, and the generated visual controls serve as the display interface for visual control, so that the devices connected to each control can be controlled intelligently through the visual controls.
Further, a visual control layer is built from the visual controls. The visual control layer resides in the display interface and is classified hierarchically according to the complexity of the control modes of the visual controls, achieving logical layering of visual control and further improving its accuracy.
When the external environment modeling data of the first visual object are modeled comprehensively, a visual control feedback module is connected, and real-time data of the first external environment are acquired through the data fitting device to obtain external environment real-time data. The visual control feedback module comprehensively analyzes the obtained real-time data to guarantee stable, high-quality home conditions according to user requirements. For example, when the external environment of the first visual object includes a chemical plant, its emission periods are collected and automatic feedback control of indoor air purification is performed accordingly. The external environment real-time data are thus input into the visual control feedback module, the first feedback control data are output and input into the visual control layer built for the visual space, and visual control is performed, achieving the technical effects of intelligent feedback control on the visual control layer through environment positioning and association analysis of the intelligent home, and improving accuracy and user comfort.
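The feedback loop of steps S810 to S830 can be sketched as a mapping from real-time readings to control commands pushed into the control layer. The pollution and noise thresholds, reading fields, and command names are hypothetical; the patent only specifies that real-time external data drive control through the visual control layer.

```python
# Sketch of steps S810-S830. Thresholds, field names, and commands are
# illustrative assumptions (e.g. purifying air during factory emission
# periods, as in the text's chemical-plant example).
def feedback_control(env_reading: dict) -> dict:
    """Map a real-time external-environment reading to control commands."""
    commands = {}
    if env_reading.get("air_quality_index", 0) > 100:
        commands["air_purifier"] = "on"    # e.g. factory emission period
    if env_reading.get("noise_db", 0) > 70:
        commands["windows"] = "closed"
    return commands

def apply_to_control_layer(control_layer: dict, commands: dict) -> dict:
    """Step S830: push first feedback control data into the control layer."""
    control_layer.update(commands)
    return control_layer

layer = apply_to_control_layer({"lights": "auto"},
                               feedback_control({"air_quality_index": 150}))
```

Running this loop on each new reading gives the automatic feedback behavior described above without manual intervention in the display interface.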
Further, step S320 in the embodiment of the present application further includes:
step S321: building a weight configuration model, wherein the weight configuration model comprises a data input layer, a data processing layer, a data judging layer and a data output layer, and the weight configuration model is embedded in a cloud processor;
step S322: the data input layer receives the spatial photosensitivity, the spatial temperature sensitivity and the air exchange property, and after data reception is finished, data standardization processing is performed in the data processing layer to obtain a first sorting result;
step S323: the data judgment layer compares the first sorting result with a preset database, and when the logical relation of the preset database is satisfied, a weight configuration result is obtained;
step S324: and outputting the weight configuration result according to the data output layer.
Specifically, the weight configuration model is a data model that performs comprehensive weight configuration based on the spatial photosensitivity, the spatial temperature sensitivity and the air exchange property of the first visual object. Through further data analysis, the weight configuration model configures a weight for each parameter so that the output weight configuration result is more accurate, where the weight configuration result comprises a first weight, a second weight and a third weight: the first weight corresponds to the spatial photosensitivity, the second weight corresponds to the spatial temperature sensitivity, and the third weight corresponds to the air exchange property.
Further, the weight configuration model includes a data input layer, a data processing layer, a data judgment layer and a data output layer. The data input layer receives the spatial photosensitivity, the spatial temperature sensitivity and the air exchange property; after reception is complete, the data processing layer performs data standardization, because the reference standards of the input data are not on the same scale, and standardization further improves the effectiveness of data processing. The standardized spatial photosensitivity, spatial temperature sensitivity and air exchange property are then sorted to obtain the first sorting result. Further, the data judgment layer judges according to the first sorting result and a preset database, where the preset database comprises a preset spatial photosensitivity, a preset spatial temperature sensitivity and a preset air exchange property; the magnitudes of the data are compared with the preset data in the order of the first sorting result, and when a value is larger than its counterpart in the preset database, its weight is increased, yielding the weight configuration result. The weight configuration result is output by the data output layer, and the weight adjustment of the weight configuration model effectively guarantees the reliability and logical consistency of the external environment correlation analysis.
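The four-layer flow of steps S321 to S324 can be sketched in a few lines. This is a minimal sketch under stated assumptions: the base weight, the increment applied when a value exceeds its preset, and the preset database values are illustrative numbers, since the patent describes the flow but not concrete parameters.

```python
# Hypothetical sketch of the weight configuration model (steps S321-S324).
# Base weight, bump increment, and preset thresholds are assumptions.
def configure_weights(photo: float, temp: float, air: float,
                      presets: tuple = (0.5, 0.5, 0.5)) -> tuple:
    # Data input layer: receive the three attributes.
    values = {"photo": photo, "temp": temp, "air": air}
    preset = dict(zip(("photo", "temp", "air"), presets))
    # Data processing layer: rank the attributes (the first sorting result).
    ranking = sorted(values, key=values.get, reverse=True)
    # Data judgment layer: in ranking order, compare each value with the
    # preset database and increase the weight of any value that exceeds it.
    base, bump = 1.0, 0.5
    raw = {}
    for name in ranking:
        raw[name] = base + (bump if values[name] > preset[name] else 0.0)
    total = sum(raw.values())
    # Data output layer: the normalized (first, second, third) weight result.
    return raw["photo"] / total, raw["temp"] / total, raw["air"] / total
```

Attributes that stand out against the preset database thus receive a larger share of the total weight, matching the judgment rule described above.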
Further, step S300 in the embodiment of the present application: after obtaining the first relevance by performing environmental impact analysis on the information of the first external environment, the method further comprises:
obtaining a first external environment characteristic according to the information of the first external environment, wherein the first external environment characteristic comprises a noise environment characteristic and an air environment characteristic;
obtaining a first external environment influence coefficient according to the noise environment characteristics and the air environment characteristics;
and according to the first external environment influence coefficient, gain is carried out on the first relevance, and a second relevance is output.
Specifically, after the environmental impact analysis is performed on the information of the first external environment to obtain the first relevance, an external environment characteristic analysis is further performed on that information, covering a noise environment characteristic and an air environment characteristic. The noise environment characteristic reflects traffic noise generated by the external environment; the air environment characteristic reflects the air quality impact generated by the external environment. A first external environment influence coefficient is obtained from the noise environment characteristic and the air environment characteristic. The larger this coefficient, the stronger the external environment characteristics and the greater their influence on control within the home environment, so a gain adjustment of the first relevance is required. Specifically, when the first external environment influence coefficient is larger than a preset external environment influence coefficient, the relevance between the first visual object and the first external environment is increased and the second relevance is output.
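A minimal sketch of this gain step follows. Averaging the two characteristics into the influence coefficient and applying a multiplicative gain are assumptions made here for illustration; the patent states only that the relevance is increased when the coefficient exceeds the preset value.

```python
def second_relevance(first_relevance, noise_level, air_quality_impact,
                     preset_coefficient=0.5):
    """Gain the first relevance by the external environment influence.

    Assumed: the first external environment influence coefficient is
    the mean of the two characteristics, and the gain scales the
    relevance by the excess over the preset coefficient."""
    coefficient = (noise_level + air_quality_impact) / 2.0
    if coefficient > preset_coefficient:
        return first_relevance * (1.0 + (coefficient - preset_coefficient))
    return first_relevance
```

For example, with a noise level of 0.8 and an air quality impact of 0.6 the coefficient is 0.7, so a first relevance of 0.6 is gained to 0.72; below the preset coefficient the relevance passes through unchanged.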
Further, the step S500 of inputting the first modeling data and the second modeling data into a three-dimensional scene model, performing three-dimensional modeling on the first visualized object, and outputting a visualized three-dimensional scene further includes:
step S510: obtaining a plurality of visual perspectives of the first visual object;
step S520: generating a plurality of external environment modeling viewing angles according to the plurality of visual viewing angles;
step S530: inputting the three-dimensional scene model according to the plurality of visual view angles and the plurality of external environment modeling view angles, wherein the three-dimensional scene model comprises a scene calibration layer;
step S540: performing three-dimensional space calibration according to the scene calibration layer in the three-dimensional scene model, and outputting a space calibration result;
step S550: and outputting the visual three-dimensional scene according to the space calibration result.
Specifically, home visual modeling is performed based on the first modeling data and the second modeling data, where the first modeling data is obtained by data acquisition on the first visual object and the second modeling data is obtained by data acquisition on the first external environment of the first visual object. To ensure that the first visual object and the first external environment are modeled consistently, their combination is checked and refined by analyzing different visual angles, so that an accurate visual three-dimensional scene is output.
Further, the process of analyzing and checking different visual angles to output an accurate visual three-dimensional scene is as follows: a plurality of visual angles of the first visual object are obtained, a plurality of external environment modeling view angles are generated from them, and three-dimensional scene simulation is then performed from these view angles, improving the accuracy of the spatial simulation.
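The view-angle generation and scene calibration of steps S510–S550 can be sketched as below. The opposite-direction mapping from visual view angle to modeling view angle and the common-offset calibration are assumptions for illustration; the patent does not specify either mapping.

```python
def modeling_view_angles(visual_angles_deg):
    """Generate one external environment modeling view angle per visual
    view angle of the first visual object.

    Assumption: the modeling view angle looks back from the environment
    toward the object, i.e. the opposite direction of the visual angle."""
    return [(a + 180.0) % 360.0 for a in visual_angles_deg]

def calibrate(points, offset):
    """Scene calibration layer sketch: shift every modeled point by a
    common spatial offset so that the object model and the external
    environment model share one coordinate frame (assumed behavior)."""
    dx, dy, dz = offset
    return [(x + dx, y + dy, z + dz) for (x, y, z) in points]

angles = modeling_view_angles([0.0, 90.0, 180.0, 270.0])
```

Under these assumptions, four visual angles at the compass points yield four modeling angles facing them, and `calibrate` outputs the space calibration result used for the final visual three-dimensional scene.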
Further, the step S600 of obtaining the visual control by performing workflow analysis on the visual three-dimensional scene further includes:
step S610: performing controllable equipment connection based on the visual three-dimensional scene to obtain a plurality of control terminals;
step S620: performing a primary screening on the plurality of control terminals, and outputting an identification control terminal;
step S630: generating a plurality of visual controls according to the identification control terminal;
step S640: obtaining the frequency and the order of use of the plurality of visual controls according to a first user;
step S650: and performing ladder-layer screening on the plurality of visual controls according to the frequency and order of use, and outputting the visual control.
Specifically, controllable equipment connection is performed based on the visual three-dimensional scene; that is, the devices that realize intelligent control within the first visual object are connected according to the current situation, obtaining a plurality of corresponding control terminals. A primary screening is performed on the plurality of control terminals and an identification control terminal is output, where the identification control terminal is a terminal that can be used for visual control after the primary screening. A plurality of visual controls are then generated from the identification control terminal, and the first user's frequency and order of use of the generated visual controls are recorded and analyzed. Ladder-layer screening is then performed on the plurality of visual controls according to the frequency and order of use: the controls are arranged according to the user's usage and displayed in the visual control layer in ladder layers, and intelligent control is performed through the visual control layer.
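The ladder-layer screening of steps S640–S650 can be sketched as follows. The three-layer split, the layer sizes, and breaking frequency ties by earlier first use are assumptions made here for illustration; the patent fixes none of them.

```python
def ladder_layer_screening(controls, usage_frequency, usage_order):
    """Screen the generated visual controls into ladder layers.

    Assumptions: controls used more frequently are placed in higher
    layers, ties are broken by earlier position in the usage order,
    and three layers of illustrative sizes are used."""
    ranked = sorted(
        controls,
        key=lambda c: (-usage_frequency.get(c, 0),
                       usage_order.index(c) if c in usage_order
                       else len(usage_order)))
    # Assumed split: the top layer is shown first in the control layer.
    return {"top": ranked[:2], "middle": ranked[2:4], "bottom": ranked[4:]}

layers = ladder_layer_screening(
    controls=["light", "curtain", "ac", "tv", "lock"],
    usage_frequency={"light": 30, "ac": 25, "tv": 10, "curtain": 8, "lock": 2},
    usage_order=["light", "ac", "curtain", "tv", "lock"],
)
```

With this hypothetical usage record, the most-used controls (light, air conditioner) land in the top ladder layer of the visual control layer and the rarely used lock in the bottom one.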
Compared with the prior art, the invention has the following beneficial effects:
1. Data acquisition is performed on a first visual object by a data fitting device to obtain first modeling data; external environment information of the first visual object is collected to generate information of a first external environment; a visual object modeling association influence analysis is performed on the first external environment and a first relevance is output; it is then judged whether the first relevance is larger than a preset relevance, and if so, second modeling data is obtained according to the information of the first external environment; the first modeling data and the second modeling data are input into a three-dimensional scene model, three-dimensional modeling is performed on the first visual object, and a visual three-dimensional scene is output; workflow analysis is performed on the controllable devices in the generated visual three-dimensional scene, and visual control is realized through the visual control layer. In this way, intelligent control is performed on the basis of environment positioning and association analysis, improving both the accuracy of the visualization and user comfort.
2. Because the weight configuration model is adopted, a weight configuration result is obtained, and the weight adjustment of the weight configuration model effectively ensures the reliability and data logicality of the external environment relevance analysis.
3. By analyzing the home scene modeling data of the first visual object and the external environment scene modeling data of the first external environment, modeling of the first visual object is combined with external environment positioning, and by further analyzing the external environment influence association, the first visual object is modeled accurately.
Example two
Based on the same inventive concept as the intelligent application visual control method based on big data in the foregoing embodiment, the present invention further provides an intelligent application visual control system based on big data, as shown in fig. 4, where the system includes:
the first obtaining unit 11 is configured to perform data acquisition on a first visualized object according to the data fitting device, so as to obtain first modeling data;
a second obtaining unit 12, wherein the second obtaining unit 12 is configured to obtain information of a first external environment of the first visualized object;
A first analysis unit 13, where the first analysis unit 13 is configured to obtain a first relevance by performing environmental impact analysis on information of the first external environment, where the first relevance is a degree of influence of the first external environment on the first visualized object;
a third obtaining unit 14, where the third obtaining unit 14 is configured to obtain second modeling data if the first correlation is greater than a preset correlation;
a first input unit 15, where the first input unit 15 is configured to input the first modeling data and the second modeling data into a three-dimensional scene model, perform three-dimensional modeling on the first visualized object, and output a visualized three-dimensional scene;
a fourth obtaining unit 16, where the fourth obtaining unit 16 is configured to obtain a visual control by performing workflow analysis on the visual three-dimensional scene;
the first control unit 17 is configured to build a visual control layer according to the visual control, and perform visual control according to the visual control layer.
Further, the system further comprises:
a fifth obtaining unit for obtaining spatial photosensitivity, spatial temperature sensitivity, and air exchange property of the first visualized object;
A first configuration unit configured to configure a first weight, a second weight, and a third weight based on the spatial photosensitivity, the spatial temperature sensitivity, and the air exchange performance;
the first output unit is used for calculating weights of the space photosensitivity, the space temperature sensitivity and the air interchangeability according to the first weight, the second weight and the third weight and outputting a first calculation result;
and a sixth obtaining unit, configured to obtain the first association according to the first calculation result.
Further, the system further comprises:
the first building unit is used for building a weight configuration model, wherein the weight configuration model comprises a data input layer, a data processing layer, a data judging layer and a data output layer, and the weight configuration model is embedded in the cloud processor;
a seventh obtaining unit, configured to receive the spatial photosensitivity, the spatial temperature sensitivity and the air interchangeability by the data input layer, and perform data standardization processing by the data processing layer after data reception is completed, so as to obtain a first sorting result;
An eighth obtaining unit, configured such that the data judging layer makes a judgment according to the first sorting result and a preset database, and obtains a weight configuration result when the logic relationship of the preset database is satisfied;
and the second output unit is used for outputting the weight configuration result according to the data output layer.
Further, the system further comprises:
a ninth obtaining unit, configured to obtain a first external environment feature according to the information of the first external environment, where the first external environment feature includes a noise environment feature and an air environment feature;
a tenth obtaining unit configured to obtain a first external environment influence coefficient from the noise environmental characteristic and the air environmental characteristic;
and the third output unit is used for performing gain on the first relevance according to the first external environment influence coefficient and outputting a second relevance.
Further, the system further comprises:
an eleventh obtaining unit configured to obtain a plurality of visual perspectives of the first visual object;
The first generation unit is used for generating a plurality of external environment modeling visual angles according to the plurality of visual angles;
the second input unit is used for inputting the three-dimensional scene model according to the plurality of visual angles and the plurality of external environment modeling angles, wherein the three-dimensional scene model comprises a scene calibration layer;
the fourth output unit is used for carrying out three-dimensional space calibration according to the scene calibration layer in the three-dimensional scene model and outputting a space calibration result;
and the fifth output unit is used for outputting the visual three-dimensional scene according to the space calibration result.
Further, the system further comprises:
a twelfth obtaining unit, configured to perform controllable device connection based on the visual three-dimensional scene, and obtain a plurality of control terminals;
the sixth output unit is used for performing a primary screening on the plurality of control terminals and outputting an identification control terminal;
the second generation unit is used for generating a plurality of visual controls according to the identification control terminal;
A thirteenth obtaining unit, configured to obtain, according to a first user, a frequency of use and an order of use of the plurality of visual controls;
and the seventh output unit is used for performing ladder-layer screening on the plurality of visual controls according to the frequency and order of use and outputting the visual control.
Further, the system further comprises:
a fourteenth obtaining unit, configured to collect real-time data of the first external environment according to the data fitting device, to obtain external environment real-time data;
the third input unit is used for inputting the external environment real-time data into the visual control feedback module and outputting first feedback control data according to the visual control feedback module, wherein the first feedback control data is data for controlling the first visual object according to the first external environment;
and the second control unit is used for inputting the first feedback control data into the visual control layer for control.
The foregoing various modifications and specific examples of the big data based intelligent application visual control method in the first embodiment of fig. 1 are equally applicable to the big data based intelligent application visual control system of this embodiment. Through the foregoing detailed description of that method, those skilled in the art can clearly know the implementation of the big data based intelligent application visual control system in this embodiment, so for brevity it is not described in detail here.
Example III
The electronic device of the present application is described below with reference to fig. 5.
Fig. 5 illustrates a schematic structural diagram of an electronic device according to the present application.
Based on the same inventive concept as the intelligent application visual control method based on big data in the foregoing embodiments, the present invention further provides an electronic device on which a computer program is stored, which, when executed by a processor, implements the steps of any one of the foregoing big data based intelligent application visual control methods.
In FIG. 5, a bus architecture is represented by bus 300. Bus 300 may comprise any number of interconnected buses and bridges, linking together various circuits, including one or more processors, represented by processor 302, and memory, represented by memory 304. Bus 300 may also link together various other circuits such as peripheral devices, voltage regulators, and power management circuits, which are well known in the art and therefore not described further herein. Bus interface 305 provides an interface between bus 300 and receiver 301 and transmitter 303. The receiver 301 and the transmitter 303 may be the same element, i.e. a transceiver, providing a means for communicating with various other systems over a transmission medium. The processor 302 is responsible for managing the bus 300 and general processing, while the memory 304 may be used to store data used by the processor 302 in performing operations.
The embodiment of the application provides an intelligent application visual control method based on big data, applied to an intelligent application visual control system based on big data, the system being in communication connection with a data fitting device. The method comprises the following steps: acquiring data of a first visualized object according to the data fitting device to obtain first modeling data; obtaining information of a first external environment of the first visualized object; obtaining a first relevance through environmental impact analysis on the information of the first external environment, wherein the first relevance is the degree of influence of the first external environment on the first visualized object; if the first relevance is greater than a preset relevance, obtaining second modeling data; inputting the first modeling data and the second modeling data into a three-dimensional scene model, performing three-dimensional modeling on the first visualized object, and outputting a visualized three-dimensional scene; obtaining a visual control by performing workflow analysis on the visual three-dimensional scene; and building a visual control layer according to the visual control and performing visual control according to the visual control layer. This solves the technical problems in the prior art that the visual modeling of the intelligent home is inaccurate and adaptive feedback according to the positioning environment is difficult, which affects the visual accuracy and user experience comfort of the intelligent home, and achieves the technical effects of performing environment positioning and association analysis on the intelligent home, performing intelligent control through a visual control layer, and improving accuracy and user comfort.
Those of ordinary skill in the art will appreciate that the various numbers of first, second, etc. referred to in this application are merely for convenience of description and are not intended to limit the scope of embodiments of the present application, nor to indicate a sequence. "And/or" describes an association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one" means one or more; "at least two" means two or more. "At least one", "any one", or the like refers to any combination of these items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or plural.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable system. The computer instructions may be stored in or transmitted from one computer-readable storage medium to another, for example, by wired (e.g., coaxial cable, optical fiber, digital Subscriber Line (DSL)), or wireless (e.g., infrared, wireless, microwave, etc.). The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device including one or more servers, data centers, etc. that can be integrated with the available medium. The usable medium may be a magnetic medium (e.g., a floppy Disk, a hard Disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
The various illustrative logical blocks and circuits described in the embodiments of the present application may be implemented or performed with a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic system, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the general purpose processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing systems, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
Although the present application has been described in connection with specific features and embodiments thereof, it will be apparent that various modifications and combinations can be made without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary illustrations of the application as defined in the appended claims and are to be construed as covering any and all modifications, variations, combinations, or equivalents that are within the scope of the application. It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the present application and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (10)

1. An intelligent application visual control method based on big data, characterized in that the method is applied to an intelligent application visual control system based on big data, the system is in communication connection with a data fitting device, and the method comprises the following steps:
acquiring data of a first visualized object according to the data fitting device to obtain first modeling data;
obtaining information of a first external environment of the first visual object;
obtaining a first relevance through environmental impact analysis on the information of the first external environment, wherein the first relevance is the relevance impact degree of the first external environment on the first visual object;
if the first relevance is greater than the preset relevance, second modeling data are obtained;
inputting the first modeling data and the second modeling data into a three-dimensional scene model, performing three-dimensional modeling on the first visualized object, and outputting a visualized three-dimensional scene;
the visual control is obtained by carrying out workflow analysis on the visual three-dimensional scene;
and building a visual control layer according to the visual control, and performing visual control according to the visual control layer.
2. The method of claim 1, wherein the first correlation is obtained by performing an environmental impact analysis on information of the first external environment, the method further comprising:
obtaining spatial photosensitivity, spatial temperature sensitivity and air exchange property of the first visualization object;
configuring a first weight, a second weight, and a third weight based on the spatial photosensitivity, the spatial temperature sensitivity, and the air exchange property;
according to the first weight, the second weight and the third weight, performing weight calculation on the space photosensitivity, the space temperature sensitivity and the air exchange performance, and outputting a first calculation result;
and obtaining the first relevance according to the first calculation result.
3. The method of claim 2, wherein the method further comprises:
building a weight configuration model, wherein the weight configuration model comprises a data input layer, a data processing layer, a data judging layer and a data output layer, and the weight configuration model is embedded in a cloud processor;
the data input layer receives the spatial photosensitivity, the spatial temperature sensitivity and the air interchangeability, and performs data standardization processing according to the data processing layer after the data reception is finished to obtain a first sorting result;
The data judging layer makes a judgment according to the first sorting result and a preset database, and when the logic relation of the preset database is satisfied, a weight configuration result is obtained;
and outputting the weight configuration result according to the data output layer.
4. The method of claim 1, wherein the method further comprises:
obtaining a first external environment characteristic according to the information of the first external environment, wherein the first external environment characteristic comprises a noise environment characteristic and an air environment characteristic;
obtaining a first external environment influence coefficient according to the noise environment characteristics and the air environment characteristics;
and according to the first external environment influence coefficient, gain is carried out on the first relevance, and a second relevance is output.
5. The method of claim 1, wherein the inputting the first modeling data and the second modeling data into a three-dimensional scene model three-dimensionally models the first visualized object, outputting a visualized three-dimensional scene, the method further comprising:
obtaining a plurality of visual perspectives of the first visual object;
generating a plurality of external environment modeling viewing angles according to the plurality of visual viewing angles;
Inputting the three-dimensional scene model according to the plurality of visual view angles and the plurality of external environment modeling view angles, wherein the three-dimensional scene model comprises a scene calibration layer;
performing three-dimensional space calibration according to the scene calibration layer in the three-dimensional scene model, and outputting a space calibration result;
and outputting the visual three-dimensional scene according to the space calibration result.
6. The method of claim 1, wherein the visualization control is obtained by workflow analysis of the visualized three-dimensional scene, the method further comprising:
performing controllable equipment connection based on the visual three-dimensional scene to obtain a plurality of control terminals;
performing a primary screening on the plurality of control terminals, and outputting an identification control terminal;
generating a plurality of visual controls according to the identification control terminal;
obtaining the frequency and the order of use of the plurality of visual controls according to a first user;
and performing ladder-layer screening on the plurality of visual controls according to the frequency and order of use, and outputting the visual control.
7. The method of claim 4, wherein the system includes a visual control feedback module, the method further comprising:
According to the data fitting device, acquiring real-time data of the first external environment to obtain external environment real-time data;
inputting the external environment real-time data into the visual control feedback module, and outputting first feedback control data according to the visual control feedback module, wherein the first feedback control data is data for controlling the first visual object according to the first external environment;
and inputting the first feedback control data into the visual control layer for control.
8. An intelligent application visualization control system based on big data, the system comprising:
the first obtaining unit is used for collecting data of a first visualized object according to the data fitting device to obtain first modeling data;
a second obtaining unit configured to obtain information of a first external environment of the first visualized object;
the first analysis unit is used for obtaining a first relevance through environmental impact analysis on the information of the first external environment, wherein the first relevance is the relevance impact degree of the first external environment on the first visual object;
The third obtaining unit is used for obtaining second modeling data if the first relevance is greater than a preset relevance;
the first input unit is used for inputting the first modeling data and the second modeling data into a three-dimensional scene model, carrying out three-dimensional modeling on the first visualized object and outputting a visualized three-dimensional scene;
the fourth obtaining unit is used for obtaining a visual control by carrying out workflow analysis on the visual three-dimensional scene;
the first control unit is used for building a visual control layer according to the visual control, and performing visual control according to the visual control layer.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any of claims 1-7.
CN202210489768.4A 2022-05-07 2022-05-07 Intelligent application visual control method and system based on big data Active CN114859744B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210489768.4A CN114859744B (en) 2022-05-07 2022-05-07 Intelligent application visual control method and system based on big data

Publications (2)

Publication Number Publication Date
CN114859744A CN114859744A (en) 2022-08-05
CN114859744B true CN114859744B (en) 2023-06-06

Family

ID=82634763

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210489768.4A Active CN114859744B (en) 2022-05-07 2022-05-07 Intelligent application visual control method and system based on big data

Country Status (1)

Country Link
CN (1) CN114859744B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105376125A (en) * 2015-12-08 2016-03-02 深圳众乐智府科技有限公司 Control method and device for intelligent home system
CN108388142A (en) * 2018-04-10 2018-08-10 百度在线网络技术(北京)有限公司 Methods, devices and systems for controlling home equipment
CN112465189A (en) * 2020-11-04 2021-03-09 上海交通大学 Method for predicting number of court settlement plans based on time-space correlation analysis

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5038307B2 (en) * 2005-08-09 2012-10-03 トタル イメルシオン Method, apparatus, and computer program for visualizing a digital model in a real environment
US8127239B2 (en) * 2007-06-08 2012-02-28 Apple Inc. Object transitions
US20100082678A1 (en) * 2008-09-30 2010-04-01 Rockwell Automation Technologies, Inc. Aggregation server with industrial automation control and information visualization placeshifting
US20140253553A1 (en) * 2012-06-17 2014-09-11 Spaceview, Inc. Visualization of three-dimensional models of objects in two-dimensional environment
US9704298B2 (en) * 2015-06-23 2017-07-11 Paofit Holdings Pte Ltd. Systems and methods for generating 360 degree mixed reality environments
CN105224605A (en) * 2015-09-02 2016-01-06 东北大学秦皇岛分校 Data under a kind of large data environment store and lookup method
CN109116812A (en) * 2017-06-22 2019-01-01 上海智建电子工程有限公司 Intelligent power distribution cabinet, energy conserving system and method based on SparkStreaming
CN109272155B (en) * 2018-09-11 2021-07-06 郑州向心力通信技术股份有限公司 Enterprise behavior analysis system based on big data
CN109976296A (en) * 2019-05-08 2019-07-05 西南交通大学 A kind of manufacture process visualization system and construction method based on virtual-sensor
US11314493B1 (en) * 2021-02-19 2022-04-26 Rockwell Automation Technologies, Inc. Industrial automation smart object inheritance
CN113434483B (en) * 2021-06-29 2022-02-15 无锡四维时空信息科技有限公司 Visual modeling method and system based on space-time big data
CN114266167A (en) * 2021-12-27 2022-04-01 卡斯柯信号有限公司 Visual modeling method, medium and electronic device for train operation basic environment
CN114237192B (en) * 2022-02-28 2022-05-06 广州力控元海信息科技有限公司 Digital factory intelligent control method and system based on Internet of things

Also Published As

Publication number Publication date
CN114859744A (en) 2022-08-05

Similar Documents

Publication Publication Date Title
US7430494B2 (en) Dynamic data stream histograms for no loss of information
CN109359170B (en) Method and apparatus for generating information
US20110161124A1 (en) Method and system for enterprise building automation
US8364742B2 (en) System for visualizing design and organization of wireless mesh networks in physical space
CN112733246A (en) Automatic building design method, device, terminal, storage medium and processor
CN108197203A (en) A kind of shop front head figure selection method, device, server and storage medium
US20120260180A1 (en) Complex System Function Status Diagnosis and Presentation
CN115174416B (en) Network planning system, method and device and electronic equipment
CN112039073A (en) Collaborative optimization method and system suitable for fault judgment of power distribution room equipment
CN115973144A (en) Method, device, electronic equipment and medium for identifying obstacle through automatic parking
CN114554503B (en) Networking information confirmation method and device and user equipment
CN114859744B (en) Intelligent application visual control method and system based on big data
CN111461487A (en) Indoor decoration engineering wisdom management system based on BIM
CN113094325B (en) Device deployment method, device, computer system and computer readable storage medium
CN111128357B (en) Hospital logistics energy consumption target object monitoring method and device and computer equipment
CN112419402A (en) Positioning method and system based on multispectral image and laser point cloud
CN113141570A (en) Underground scene positioning method and device, computing equipment and computer storage medium
US9568502B2 (en) Visual analytics of spatial time series data using a pixel calendar tree
EP3955587A1 (en) Machine learning device
CN115617633A (en) Icing monitoring terminal maintenance method and related equipment
CN114091133A (en) City information model modeling method and device, terminal equipment and storage medium
CN114091560A (en) Method, device and equipment for planning communication station address and readable storage medium
CN111562749A (en) AI-based general intelligent household scheme automatic design method and device
CN112311791B (en) Statistical method and system suitable for office business flow
CN117951800B (en) Method and system for establishing digital twin model of hydraulic structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant