CN111984709A - Visual big data middle station-resource calling and algorithm - Google Patents

Visual big data middle station-resource calling and algorithm

Info

Publication number
CN111984709A
CN111984709A (application CN201910306977.9A)
Authority
CN
China
Prior art keywords: data, scheduling, service, development, visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910306977.9A
Other languages
Chinese (zh)
Inventor
李晶磊
李燕芳
潘情
张云洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan Youth Academy Technology Co ltd
Original Assignee
Yunnan Youth Academy Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunnan Youth Academy Technology Co ltd filed Critical Yunnan Youth Academy Technology Co ltd
Priority to CN201910306977.9A priority Critical patent/CN111984709A/en
Publication of CN111984709A publication Critical patent/CN111984709A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25 Integrating or interfacing systems involving database management systems
    • G06F16/254 Extract, transform and load [ETL] procedures, e.g. ETL data flows in data warehouses
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/26 Visual data mining; Browsing structured data

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides resource calling and algorithms for a visual big data middle station, comprising an open system architecture, a data development IDE module, data management, an offline scheduling system, data integration, operational data visualization and an ETL system. Information that can be made public and can serve society and enterprises is collected effectively, resource sharing is realized, guidance on matters such as enterprise management and achievement transformation is provided to enterprises and society, the efficiency of social and economic operation is improved, and the industrialization rate of entrepreneurship and innovation is raised.

Description

Visual big data middle station-resource calling and algorithm
Technical Field
The invention relates to the technical field of electronic information, and in particular to resource calling and algorithms for a visual big data middle station (data middle platform).
Background
Generally speaking, the public information service platform of an entrepreneurship park can adopt a hierarchical structure: several regional park public information service platforms together form a unified national platform, and each regional platform is built cooperatively by the park organizations in that region. A regional entrepreneurship-park public information service platform is built and operated by a unified management organization, with each park assisting by providing shared data.
Existing entrepreneurship parks usually perform only simple resource integration, organizing and publishing enterprise information, without deeper planning.
Disclosure of Invention
The invention aims to provide resource calling and algorithms for a visual big data middle station that integrate information resources and promote resource sharing. Departments at all levels and entrepreneurship parks have each invested in entrepreneurship, innovation and industrialization systems, but this has produced "information islands" and "scattered pearls": each park builds and maintains its own systems, with much duplicated construction. Building a public information service platform effectively strings these scattered pearls together, collects information that can be made public and can serve society and enterprises, realizes resource sharing, provides guidance on matters such as enterprise management and achievement transformation, improves the efficiency of social and economic operation, and raises the industrialization rate of entrepreneurship and innovation.
The resource calling and algorithm of the visual big data middle station comprises an open system architecture, a data development IDE module, data management, an offline scheduling system, data integration, operational data visualization and an ETL system;
the open system architecture comprises a control layer, a service layer and an application layer. The control layer is the core of offline processing in the business analysis base platform: a workflow scheduling engine receives scheduling for the whole platform, including workflow instantiation and workflow scheduling, and coordinates and controls the execution of all tasks. The service layer provides services to the application layer and to other external applications. The application layer interacts directly with the user on top of the underlying services and provides a visual operation interface;
The data development IDE module provides a one-stop integrated development environment that supports rapid warehouse modeling, data query, ETL development, algorithm development and similar needs of a business analysis environment, together with multi-user online collaborative development and file version control;
data management provides, within a tenant's scope, functions for searching data tables, viewing table details, managing table permissions and bookmarking (favoriting) data tables;
the offline scheduling system provides offline scheduling of millions of tasks, with a visual operation and maintenance interface, online log query, and monitoring and alerting;
data integration provides rapid integration of many kinds of heterogeneous data sources and rapid data-integration capability across platforms;
the operational data visualization provides all the functionality needed to create interactive, visual analysis;
the ETL system comprises two main lines that should coexist when it is built: a planning and design main line (requirements and realization, architecture, system implementation, testing and release) and a data-flow main line (extraction, cleaning, conforming and delivery).
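As an illustration of the data-flow main line, the following minimal Python sketch wires the four stages (extract, clean, conform, deliver) into one pipeline; the stage functions, record fields and sample values are illustrative assumptions rather than part of the claimed system.

```python
# A minimal sketch of the data-flow main line (extract -> clean -> conform -> deliver).
from typing import Callable, Iterable

Record = dict

def extract() -> Iterable[Record]:
    # Stand-in for pulling rows from a source system.
    yield {"customer": "C001", "zip": "650000", "amount": "12.5"}
    yield {"customer": "C002", "zip": "bad", "amount": "7.0"}

def clean(rows: Iterable[Record]) -> Iterable[Record]:
    # Drop rows whose zip code is not a 6-digit string.
    return (r for r in rows if r["zip"].isdigit() and len(r["zip"]) == 6)

def conform(rows: Iterable[Record]) -> Iterable[Record]:
    # Normalize types and field names to the warehouse convention.
    return ({"customer_id": r["customer"], "zip_code": r["zip"],
             "amount": float(r["amount"])} for r in rows)

def deliver(rows: Iterable[Record]) -> list:
    # Stand-in for loading into the target (e.g. an ODS or fact table).
    return list(rows)

def run_pipeline(stages: list) -> list:
    data = stages[0]()
    for stage in stages[1:]:
        data = stage(data)
    return data

if __name__ == "__main__":
    print(run_pipeline([extract, clean, conform, deliver]))
```

The later sections on extraction, cleansing, conversion and loading expand each of these stages individually.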
Furthermore, the data development IDE module provides a visual workflow designer, operated through button-like controls, that lets a user design and edit a workflow and carry out the corresponding development work on each task node in the flow; it provides local data upload with quick upload of local text data to the cloud; it provides rapid integration of massive heterogeneous data sources; it supports cross-project release, quickly deploying tasks and code to the scheduling systems of other projects; it supports collaborative development, with code version management, code lock management and a conflict-detection mechanism for the multi-user collaborative mode; and it provides search of MaxCompute (formerly ODPS) tables, search and reference of resources and user-defined functions, and data query, so that users can easily locate data.
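The code-lock and conflict-detection mechanism for collaborative development can be pictured with the following minimal sketch; the in-memory repository, method names and version counter are assumptions made for illustration only, not the patented implementation.

```python
# A minimal sketch of code locks plus version-based conflict detection
# for multi-user collaborative development.

class ScriptRepository:
    def __init__(self):
        self._files = {}   # name -> (content, version)
        self._locks = {}   # name -> user holding the edit lock

    def acquire_lock(self, name: str, user: str) -> bool:
        holder = self._locks.get(name)
        if holder in (None, user):
            self._locks[name] = user
            return True
        return False                      # someone else is editing

    def release_lock(self, name: str, user: str) -> None:
        if self._locks.get(name) == user:
            del self._locks[name]

    def save(self, name: str, user: str, content: str, base_version: int) -> int:
        if self._locks.get(name) not in (None, user):
            raise PermissionError(f"{name} is locked by {self._locks[name]}")
        _, current = self._files.get(name, ("", 0))
        if base_version != current:       # conflict: file changed since it was opened
            raise RuntimeError(f"conflict on {name}: base v{base_version}, current v{current}")
        self._files[name] = (content, current + 1)
        return current + 1

repo = ScriptRepository()
repo.acquire_lock("etl_job.sql", "alice")
print(repo.save("etl_job.sql", "alice", "SELECT 1;", base_version=0))
repo.release_lock("etl_job.sql", "alice")
```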
Furthermore, data management can search global metadata, supports multiple search modes and ranks results intelligently; flexible, extensible data categories make it easy to build a dedicated navigation structure; the business attributes of data are clearly visible: table description, data developer, owning business line and storage information; field descriptions, security level, primary and foreign key identification; the reliability, usability and stability of data are evaluated comprehensively and scored quantitatively; data output is presented comprehensively and intuitively, with partition information including the number, size and production time of produced records; the production and consumption times of the data, the executed code, log information and the change history of the data structure are recorded; and data lineage ("blood relationship") information captures dependencies on upstream and downstream data tables.
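Table-level data lineage can be represented as a directed graph, as in the following sketch; the table names and the breadth-first traversal are illustrative assumptions, not the patented implementation.

```python
# A minimal sketch of table-level lineage: edges point from an upstream
# table to the table produced from it.
from collections import defaultdict, deque

edges = {                      # upstream -> downstream tables
    "ods_orders": ["dwd_orders"],
    "dwd_orders": ["dws_daily_sales", "dws_customer_stats"],
    "dws_daily_sales": ["ads_sales_report"],
}

downstream = defaultdict(list)
upstream = defaultdict(list)
for src, targets in edges.items():
    for dst in targets:
        downstream[src].append(dst)
        upstream[dst].append(src)

def reachable(table: str, graph: dict) -> set:
    """Breadth-first walk collecting every table reachable from `table`."""
    seen, queue = set(), deque([table])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print("downstream of dwd_orders:", reachable("dwd_orders", downstream))
print("upstream of ads_sales_report:", reachable("ads_sales_report", upstream))
```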
Furthermore, the scheduling system supports millions of jobs; the execution framework adopts a distributed architecture, so the number of concurrent jobs can be scaled linearly. Scheduling cycles of multiple time granularities are supported: minute, hour, day, week, month and year. Special node states such as dry run, pause and run-once are controllable. The DAG (directed acyclic graph) of a scheduling task is displayed visually, which greatly eases operation and maintenance of online tasks. Real-time monitoring and alerting of task run state is supported, with SMS and e-mail alerts. Online operation and maintenance actions are supported, including rerunning a single task, rerunning multiple tasks, killing a process, marking a task as successful and pausing. Supplementary data (backfill) is supported as serial execution of multi-cycle instances. A global task statistics summary interface is provided, covering the total number of scheduled tasks, the number of failed tasks, the number of running tasks, the Top 10 tasks by computing resource consumption, the Top 10 tasks by computing time, task type distribution and other information.
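The scheduling ideas above (a task DAG, retry of failed tasks, and supplementary data as serial execution of multi-cycle instances) can be sketched as follows; the task names, retry limit and daily cycle are assumptions for illustration, and the sketch does not reproduce the distributed execution framework.

```python
# A minimal sketch of DAG scheduling with per-task retry and data backfill.
from datetime import date, timedelta
from graphlib import TopologicalSorter

# task -> set of tasks it depends on
dag = {"extract": set(), "clean": {"extract"}, "load": {"clean"}, "report": {"load"}}

def run_task(name: str, biz_date: date) -> None:
    print(f"run {name} for {biz_date}")

def run_instance(biz_date: date, max_retry: int = 2) -> None:
    # Execute the DAG in topological order, retrying a failed task up to max_retry times.
    for task in TopologicalSorter(dag).static_order():
        for attempt in range(max_retry + 1):
            try:
                run_task(task, biz_date)
                break
            except Exception as exc:
                if attempt == max_retry:
                    raise RuntimeError(f"{task} failed on {biz_date}") from exc

def backfill(start: date, end: date) -> None:
    """Serially execute one instance per daily cycle in [start, end]."""
    day = start
    while day <= end:
        run_instance(day)
        day += timedelta(days=1)

backfill(date(2020, 11, 1), date(2020, 11, 3))
```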
Furthermore, data integration supports multiple data channels and can accurately identify dirty data, then filter, collect and display it, giving users reliable dirty-data handling and precise control over data quality; it reports traffic, data volume and detected dirty data across the full job link while a job runs; transmission is fast, with single-channel plug-in performance optimized so that a single process can saturate a single machine's network card (200 MB/s); a new distributed model allows throughput to scale out without limit, supporting GB-level and even TB-level data flows; flow control is accurate and robust, with three modes: channel, record stream and byte stream; fault tolerance is complete, with local or global retry at thread, process and job level; and a clear, easy-to-use plug-in interface lets plug-in developers focus on business development without attending to framework details.
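The flow-control and dirty-data ideas can be sketched as follows: records are throttled per one-second window by bytes and by record count, and invalid records are collected rather than silently dropped; the limits, validator and record format are illustrative assumptions.

```python
# A minimal sketch of byte/record flow control plus dirty-data collection.
import time

def validate(record: dict) -> bool:
    return isinstance(record.get("amount"), (int, float)) and record["amount"] >= 0

def transfer(records, max_bytes_per_s=1_000_000, max_records_per_s=10_000):
    clean, dirty = [], []
    window_start, sent_bytes, sent_records = time.monotonic(), 0, 0
    for rec in records:
        size = len(str(rec).encode("utf-8"))
        # If either limit for the current 1-second window is exceeded, sleep it out.
        if sent_bytes + size > max_bytes_per_s or sent_records + 1 > max_records_per_s:
            time.sleep(max(0.0, 1.0 - (time.monotonic() - window_start)))
            window_start, sent_bytes, sent_records = time.monotonic(), 0, 0
        (clean if validate(rec) else dirty).append(rec)
        sent_bytes += size
        sent_records += 1
    return clean, dirty

good, bad = transfer([{"amount": 3.5}, {"amount": "oops"}, {"amount": -1}])
print(len(good), "clean records,", len(bad), "dirty records")
```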
Further, the ETL system comprises data extraction, data cleansing, data conversion and data loading. Data extraction is implemented differently depending on the source of the data (a minimal sketch follows this list):
1) for data sources stored in the same database system as the DW, a direct link is established between the DW database server and the original business system, and a SELECT statement can then be written for direct access;
2) for data sources in a database system different from the DW, a database link can usually be established via ODBC; if a database link cannot be established, there are two alternatives: export the source data to a text or table file with a tool and import those files into the ODS (Operational Data Store), or use a program interface;
3) for file-type data sources, the data can be imported into a designated database with a database tool and then extracted from that database, or tools can be used, such as the Flat File Source and Flat File Destination components of SQL Server 2005 SSIS, to import the data into the ODS;
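A minimal sketch of the three extraction paths is given below, using sqlite3 and the csv module for illustration; the table names, the ODBC DSN (which would need the pyodbc package and a configured data source) and the file path are assumptions.

```python
# A minimal sketch of the three extraction paths described above.
import csv
import sqlite3

def extract_same_dbms(conn: sqlite3.Connection) -> list:
    # 1) Source lives in the same DBMS as the DW: query it directly with SELECT.
    return conn.execute("SELECT id, amount FROM source_orders").fetchall()

def extract_via_odbc(dsn: str) -> list:
    # 2) Different DBMS: an ODBC link (e.g. via the pyodbc package) would be used.
    #    Not executed here because it needs a configured DSN.
    import pyodbc                                  # optional dependency
    with pyodbc.connect(dsn) as conn:
        return conn.cursor().execute("SELECT id, amount FROM source_orders").fetchall()

def extract_flat_file(path: str, ods: sqlite3.Connection) -> None:
    # 3) File-type source: import the text file into an ODS staging table first.
    with open(path, newline="", encoding="utf-8") as fh:
        rows = [(r["id"], float(r["amount"])) for r in csv.DictReader(fh)]
    ods.execute("CREATE TABLE IF NOT EXISTS ods_orders (id TEXT, amount REAL)")
    ods.executemany("INSERT INTO ods_orders VALUES (?, ?)", rows)

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE source_orders (id TEXT, amount REAL)")
    db.execute("INSERT INTO source_orders VALUES ('A1', 9.9)")
    print(extract_same_dbms(db))
```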
data cleansing may include several independent steps: valid-value detection (e.g. whether an existing zip code falls within the valid range), consistency detection (e.g. whether the zip code is consistent with the city code), deletion of duplicate records (e.g. the same customer appearing twice with slightly different attribute values), and detection of complex business rules and processes that need to be enforced (e.g. whether a platinum customer has the corresponding credit status); the results of the cleansing steps are often stored semi-permanently, because the required transformations are often difficult and irreversible;
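The cleansing checks can be sketched as follows; the zip-code rule, the zip-to-city reference map and the deduplication key are illustrative assumptions.

```python
# A minimal sketch of valid-value detection, consistency detection and
# duplicate-record deletion.

zip_to_city = {"650000": "Kunming", "100000": "Beijing"}

def valid_zip(rec: dict) -> bool:
    z = rec.get("zip", "")
    return z.isdigit() and len(z) == 6            # valid-value detection

def consistent(rec: dict) -> bool:
    return zip_to_city.get(rec.get("zip")) == rec.get("city")   # consistency detection

def dedupe(records: list) -> list:
    """Keep the first occurrence per customer id (duplicate-record deletion)."""
    seen, unique = set(), []
    for rec in records:
        if rec["customer_id"] not in seen:
            seen.add(rec["customer_id"])
            unique.append(rec)
    return unique

raw = [
    {"customer_id": "C1", "zip": "650000", "city": "Kunming"},
    {"customer_id": "C1", "zip": "650000", "city": "Kunming"},   # duplicate record
    {"customer_id": "C2", "zip": "abc",    "city": "Beijing"},   # invalid zip code
]
cleaned = dedupe([r for r in raw if valid_zip(r) and consistent(r)])
print(cleaned)
```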
The data conversion task mainly performs conversion of inconsistent data, conversion of data granularity, and calculation of some business rules. 1) Conversion of inconsistent data: this is an integration process that unifies the same kind of data from different business systems; for example, the same supplier is coded XX0001 in the settlement system and YY0001 in the CRM, so after extraction the records are converted to a single unified code. 2) Conversion of data granularity: business systems generally store very detailed data, while data in the data warehouse is used for analysis and does not need that level of detail, so business system data is generally aggregated to the granularity of the data warehouse;
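Both conversions can be sketched as follows: a mapping table unifies supplier codes across systems, and detailed rows are rolled up to a monthly, per-supplier granularity; the mapping table and the chosen granularity are assumptions for illustration.

```python
# A minimal sketch of code unification plus granularity aggregation.
from collections import defaultdict

# Map (source system, local code) to one unified supplier code.
code_map = {("settlement", "XX0001"): "SUP0001", ("crm", "YY0001"): "SUP0001"}

detail_rows = [
    {"system": "settlement", "supplier": "XX0001", "day": "2020-11-02", "amount": 100.0},
    {"system": "crm",        "supplier": "YY0001", "day": "2020-11-15", "amount": 50.0},
]

def unify(rows):
    # Replace each system-local supplier code with the unified code.
    for r in rows:
        yield {**r, "supplier": code_map[(r["system"], r["supplier"])]}

def aggregate_monthly(rows):
    """Roll detailed rows up to (supplier, month), the warehouse granularity."""
    totals = defaultdict(float)
    for r in rows:
        totals[(r["supplier"], r["day"][:7])] += r["amount"]
    return dict(totals)

print(aggregate_monthly(unify(detail_rows)))   # {('SUP0001', '2020-11'): 150.0}
```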
data loading organizes the data into a simple, symmetric framework called a dimensional model; this framework greatly reduces query time and simplifies development, many query tools require a dimensional framework, and it is the necessary foundation for building OLAP cubes.
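Loading into a dimensional model can be sketched as a star schema with one dimension table (surrogate key) and one fact table referencing it; the table and column names are assumptions, and sqlite3 is used only to keep the example self-contained.

```python
# A minimal sketch of loading a star schema: dimension first, then the fact row.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dim_supplier (
    supplier_key INTEGER PRIMARY KEY AUTOINCREMENT,   -- surrogate key
    supplier_code TEXT UNIQUE,
    supplier_name TEXT
);
CREATE TABLE fact_settlement (
    supplier_key INTEGER REFERENCES dim_supplier(supplier_key),
    month TEXT,
    amount REAL
);
""")

def load_dimension(code: str, name: str) -> int:
    db.execute("INSERT OR IGNORE INTO dim_supplier (supplier_code, supplier_name) VALUES (?, ?)",
               (code, name))
    return db.execute("SELECT supplier_key FROM dim_supplier WHERE supplier_code = ?",
                      (code,)).fetchone()[0]

def load_fact(code: str, month: str, amount: float) -> None:
    db.execute("INSERT INTO fact_settlement VALUES (?, ?, ?)",
               (load_dimension(code, code), month, amount))

load_fact("SUP0001", "2020-11", 150.0)
print(db.execute("""SELECT s.supplier_code, f.month, f.amount
                    FROM fact_settlement f JOIN dim_supplier s USING (supplier_key)""").fetchall())
```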
Drawings
FIG. 1 is an open system architecture diagram of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The resource calling and algorithm of the visual big data middle station comprises an open system architecture, a data development IDE module, data management, an offline scheduling system, data integration, operational data visualization and an ETL system;
the open system architecture comprises a control layer, a service layer and an application layer. The control layer is the core of offline processing in the business analysis base platform: a workflow scheduling engine receives scheduling for the whole platform, including workflow instantiation and workflow scheduling, and the AlisaDriver coordinates and controls the execution of all tasks. The service layer provides services to the application layer and to other external applications. The application layer interacts directly with the user on top of the underlying services and provides a visual operation interface;
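The coordinating role of the control layer can be pictured with the following generic sketch, in which a driver runs the independent tasks of one workflow instance on a worker pool and reports their status; this is not the AlisaDriver itself, and the pool size, task names and status strings are assumptions made for illustration.

```python
# A minimal sketch of a control-layer driver coordinating task execution.
from concurrent.futures import ThreadPoolExecutor, as_completed

class Driver:
    def __init__(self, workers: int = 4):
        self._pool = ThreadPoolExecutor(max_workers=workers)

    def run_workflow(self, instance_id: str, tasks: dict) -> dict:
        """Execute the independent tasks of one workflow instance and report their status."""
        futures = {self._pool.submit(fn): name for name, fn in tasks.items()}
        status = {}
        for fut in as_completed(futures):
            name = futures[fut]
            try:
                fut.result()
                status[name] = "success"
            except Exception as exc:
                status[name] = f"failed: {exc}"
        print(f"workflow {instance_id}: {status}")
        return status

driver = Driver()
driver.run_workflow("wf-001", {"sync_orders": lambda: None,
                               "build_report": lambda: None})
```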
the data development IDE module provides a one-stop integrated development environment, can meet the requirements of rapid warehouse modeling, data query, ETL development, algorithm development and the like in a business analysis environment, and provides functions of multi-user online collaborative development and file version control;
the data management provides functions of searching a data table, checking details of the data table, managing authority of the data table and collecting the data table within a tenant range for a user;
The offline scheduling system provides offline scheduling service of million-level tasks for users, and provides functions of a visual operation and maintenance interface, online log query and monitoring alarm;
the data integration provides rapid integration service of various heterogeneous data sources, and provides rapid data integration capability for the heterogeneous data of the cross-platform;
the operational data visualization provides all the functionality needed to create interactive, visual analysis;
the ETL system comprises two main lines which should coexist when the ETL system is established: planning & designing main line and data flow main line planning & designing main line: requirements and implementation, architecture, system implementation, testing and release; data flow main line: extracting, cleaning, normalizing and submitting.
Furthermore, the data development IDE module provides a visual workflow designer, operated through button-like controls, that lets a user design and edit a workflow and carry out the corresponding development work on each task node in the flow; it provides local data upload with quick upload of local text data to the cloud; it provides rapid integration of massive heterogeneous data sources; it supports cross-project release, quickly deploying tasks and code to the scheduling systems of other projects; it supports collaborative development, with code version management, code lock management and a conflict-detection mechanism for the multi-user collaborative mode; and it provides search of MaxCompute (formerly ODPS) tables, search and reference of resources and user-defined functions, and data query, so that users can easily locate data.
Furthermore, data management can search global metadata, supports multiple search modes and ranks results intelligently; flexible, extensible data categories make it easy to build a dedicated navigation structure; the business attributes of data are clearly visible: table description, data developer, owning business line and storage information; field descriptions, security level, primary and foreign key identification; the reliability, usability and stability of data are evaluated comprehensively and scored quantitatively; data output is presented comprehensively and intuitively, with partition information including the number, size and production time of produced records; the production and consumption times of the data, the executed code, log information and the change history of the data structure are recorded; and data lineage ("blood relationship") information captures dependencies on upstream and downstream data tables.
Furthermore, the scheduling system supports millions of jobs; the execution framework adopts a distributed architecture, so the number of concurrent jobs can be scaled linearly. Scheduling cycles of multiple time granularities are supported: minute, hour, day, week, month and year. Special node states such as dry run, pause and run-once are controllable. The DAG (directed acyclic graph) of a scheduling task is displayed visually, which greatly eases operation and maintenance of online tasks. Real-time monitoring and alerting of task run state is supported, with SMS and e-mail alerts. Online operation and maintenance actions are supported, including rerunning a single task, rerunning multiple tasks, killing a process, marking a task as successful and pausing. Supplementary data (backfill) is supported as serial execution of multi-cycle instances. A global task statistics summary interface is provided, covering the total number of scheduled tasks, the number of failed tasks, the number of running tasks, the Top 10 tasks by computing resource consumption, the Top 10 tasks by computing time, task type distribution and other information.
Furthermore, data integration supports multiple data channels and can accurately identify dirty data, then filter, collect and display it, giving users reliable dirty-data handling and precise control over data quality; it reports traffic, data volume and detected dirty data across the full job link while a job runs; transmission is fast, with single-channel plug-in performance optimized so that a single process can saturate a single machine's network card (200 MB/s); a new distributed model allows throughput to scale out without limit, supporting GB-level and even TB-level data flows; flow control is accurate and robust, with three modes: channel, record stream and byte stream; fault tolerance is complete, with local or global retry at thread, process and job level; and a clear, easy-to-use plug-in interface lets plug-in developers focus on business development without attending to framework details.
Further, the ETL system comprises data extraction, data cleansing, data conversion and data loading. Data extraction is implemented differently depending on the source of the data:
1) for data sources stored in the same database system as the DW, a direct link is established between the DW database server and the original business system, and a SELECT statement can then be written for direct access;
2) for data sources in a database system different from the DW, a database link can usually be established via ODBC; if a database link cannot be established, there are two alternatives: export the source data to a text or table file with a tool and import those files into the ODS (Operational Data Store), or use a program interface;
3) for file-type data sources, the data can be imported into a designated database with a database tool and then extracted from that database, or tools can be used, such as the Flat File Source and Flat File Destination components of SQL Server 2005 SSIS, to import the data into the ODS;
data cleansing may include several independent steps: valid-value detection (e.g. whether an existing zip code falls within the valid range), consistency detection (e.g. whether the zip code is consistent with the city code), deletion of duplicate records (e.g. the same customer appearing twice with slightly different attribute values), and detection of complex business rules and processes that need to be enforced (e.g. whether a platinum customer has the corresponding credit status); the results of the cleansing steps are often stored semi-permanently, because the required transformations are often difficult and irreversible;
The data conversion task mainly performs conversion of inconsistent data, conversion of data granularity, and calculation of some business rules. 1) Conversion of inconsistent data: this is an integration process that unifies the same kind of data from different business systems; for example, the same supplier is coded XX0001 in the settlement system and YY0001 in the CRM, so after extraction the records are converted to a single unified code. 2) Conversion of data granularity: business systems generally store very detailed data, while data in the data warehouse is used for analysis and does not need that level of detail, so business system data is generally aggregated to the granularity of the data warehouse;
data loading organizes the data into a simple, symmetric framework called a dimensional model; this framework greatly reduces query time and simplifies development, many query tools require a dimensional framework, and it is the necessary foundation for building OLAP cubes.
The preferred embodiments of the present invention have been disclosed merely to aid in the explanation of the invention, and it is not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations will be apparent to those skilled in the art in light of the disclosure herein, and the embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.

Claims (6)

1. A resource calling and algorithm of a visual big data middle station, comprising an open system architecture, a data development IDE module, data management, an offline scheduling system, data integration, operational data visualization and an ETL system;
the open system architecture comprises a control layer, a service layer and an application layer, wherein the control layer is the core of offline processing in the business analysis base platform; a workflow scheduling engine receives scheduling for the whole platform, including workflow instantiation and workflow scheduling, and coordinates and controls the execution of all tasks; the service layer provides services to the application layer and to other external applications; the application layer interacts directly with the user on top of the underlying services and provides a visual operation interface;
the data development IDE module provides a one-stop integrated development environment that supports rapid warehouse modeling, data query, ETL development and algorithm development in a business analysis environment, together with multi-user online collaborative development and file version control;
data management provides, within a tenant's scope, functions for searching data tables, viewing table details, managing table permissions and bookmarking (favoriting) data tables;
the offline scheduling system provides offline scheduling of millions of tasks, with a visual operation and maintenance interface, online log query, and monitoring and alerting;
data integration provides rapid integration of many kinds of heterogeneous data sources and rapid data-integration capability across platforms;
operational data visualization provides all the functionality needed to create interactive visual analyses;
the ETL system comprises two main lines that should coexist when it is built: a planning and design main line (requirements and realization, architecture, system implementation, testing and release) and a data-flow main line (extraction, cleaning, conforming and delivery).
2. The visual big data middle station resource calling and algorithm of claim 1, wherein: the data development IDE module provides a visual workflow designer, operated through button-like controls, that lets a user design and edit a workflow and carry out the corresponding development work on each task node in the flow; provides local data upload with quick upload of local text data to the cloud; provides rapid integration of massive heterogeneous data sources; supports cross-project release, quickly deploying tasks and code to the scheduling systems of other projects; supports collaborative development, with code version management, code lock management and a conflict-detection mechanism for the multi-user collaborative mode; and provides search of MaxCompute (formerly ODPS) tables, search and reference of resources and user-defined functions, and data query, so that users can easily locate data.
3. The visual big data middle station resource calling and algorithm of claim 1, wherein: data management can search global metadata, supports multiple search modes and ranks results intelligently; flexible, extensible data categories make it easy to build a dedicated navigation structure; the business attributes of data are clearly visible: table description, data developer, owning business line and storage information; field descriptions, security level, primary and foreign key identification; the reliability, usability and stability of data are evaluated comprehensively and scored quantitatively; data output is presented comprehensively and intuitively, with partition information including the number, size and production time of produced records; the production and consumption times of the data, the executed code, log information and the change history of the data structure are recorded; and data lineage ("blood relationship") information captures dependencies on upstream and downstream data tables.
4. The visual big data middle station resource calling and algorithm of claim 1, wherein: the scheduling system supports millions of jobs; the execution framework adopts a distributed architecture, so the number of concurrent jobs can be scaled linearly; scheduling cycles of multiple time granularities are supported: minute, hour, day, week, month and year; special node states such as dry run, pause and run-once are controllable; the DAG (directed acyclic graph) of a scheduling task is displayed visually, which greatly eases operation and maintenance of online tasks; real-time monitoring and alerting of task run state is supported, with SMS and e-mail alerts; online operation and maintenance actions are supported, including rerunning a single task, rerunning multiple tasks, killing a process, marking a task as successful and pausing; supplementary data (backfill) is supported as serial execution of multi-cycle instances; and a global task statistics summary interface is provided, covering the total number of scheduled tasks, the number of failed tasks, the number of running tasks, the Top 10 tasks by computing resource consumption, the Top 10 tasks by computing time, task type distribution and other information.
5. The visual big data middle station resource calling and algorithm of claim 1, wherein: data integration supports multiple data channels and can accurately identify dirty data, then filter, collect and display it, giving users reliable dirty-data handling and precise control over data quality; it reports traffic, data volume and detected dirty data across the full job link while a job runs; transmission is fast, with single-channel plug-in performance optimized so that a single process can saturate a single machine's network card (200 MB/s); a new distributed model allows throughput to scale out without limit, supporting GB-level and even TB-level data flows; flow control is accurate and robust, with three modes: channel, record stream and byte stream; fault tolerance is complete, with local or global retry at thread, process and job level; and a clear, easy-to-use plug-in interface lets plug-in developers focus on business development without attending to framework details.
6. The visual big data middle station resource calling and algorithm of claim 1, wherein: the ETL system comprises data extraction, data cleansing, data conversion and data loading, and data extraction is implemented differently depending on the source of the data:
1) for data sources stored in the same database system as the DW, a direct link is established between the DW database server and the original business system, and a SELECT statement can then be written for direct access;
2) for data sources in a database system different from the DW, a database link can usually be established via ODBC; if a database link cannot be established, there are two alternatives: export the source data to a text or table file with a tool and import those files into the ODS (Operational Data Store), or use a program interface;
3) for file-type data sources, the data can be imported into a designated database with a database tool and then extracted from that database, or tools can be used, such as the Flat File Source and Flat File Destination components of SQL Server 2005 SSIS, to import the data into the ODS;
data cleansing may include several independent steps: valid-value detection (e.g. whether an existing zip code falls within the valid range), consistency detection (e.g. whether the zip code is consistent with the city code), deletion of duplicate records (e.g. the same customer appearing twice with slightly different attribute values), and detection of complex business rules and processes that need to be enforced (e.g. whether a platinum customer has the corresponding credit status); the results of the cleansing steps are often stored semi-permanently, because the required transformations are often difficult and irreversible;
data conversion mainly performs conversion of inconsistent data, conversion of data granularity, and calculation of some business rules: 1) conversion of inconsistent data is an integration process that unifies the same kind of data from different business systems; for example, the same supplier is coded XX0001 in the settlement system and YY0001 in the CRM, so after extraction the records are converted to a single unified code; 2) conversion of data granularity: business systems generally store very detailed data, while data in the data warehouse is used for analysis and does not need that level of detail, so business system data is generally aggregated to the granularity of the data warehouse;
data loading organizes the data into a simple, symmetric framework called a dimensional model; this framework greatly reduces query time and simplifies development, many query tools require a dimensional framework, and it is the necessary foundation for building OLAP cubes.
CN201910306977.9A 2019-05-23 2019-05-23 Visual big data middle station-resource calling and algorithm Withdrawn CN111984709A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910306977.9A CN111984709A (en) 2019-05-23 2019-05-23 Visual big data middle station-resource calling and algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910306977.9A CN111984709A (en) 2019-05-23 2019-05-23 Visual big data middle station-resource calling and algorithm

Publications (1)

Publication Number Publication Date
CN111984709A true CN111984709A (en) 2020-11-24

Family

ID=73435774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910306977.9A Withdrawn CN111984709A (en) 2019-05-23 2019-05-23 Visual big data middle station-resource calling and algorithm

Country Status (1)

Country Link
CN (1) CN111984709A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112540975A (en) * 2020-12-29 2021-03-23 中科院计算技术研究所大数据研究院 Multi-source heterogeneous data quality detection method based on petri net
CN112559280A (en) * 2020-12-04 2021-03-26 国网安徽省电力有限公司信息通信分公司 Data full link monitoring method based on data center station
CN112667728A (en) * 2021-01-06 2021-04-16 上海振华重工(集团)股份有限公司 Visual single-machine data acquisition method in wharf efficiency analysis
CN112910703A (en) * 2021-02-01 2021-06-04 中金云金融(北京)大数据科技股份有限公司 Offline task management platform
CN113157191A (en) * 2021-02-21 2021-07-23 上海帕科信息科技有限公司 Data visualization method based on OLAP system
CN113868306A (en) * 2021-08-31 2021-12-31 云南昆钢电子信息科技有限公司 Data modeling system and method based on OPC-UA specification
CN114036031A (en) * 2022-01-05 2022-02-11 阿里云计算有限公司 Scheduling system and method for resource service application in enterprise digital middleboxes
CN114626822A (en) * 2022-03-22 2022-06-14 山东省国土测绘院 Full-link data integration method and system
CN114860833A (en) * 2022-05-30 2022-08-05 江苏顺骁工程科技有限公司 Data center platform applied to digital twin hydraulic engineering and data processing method
CN113535837B (en) * 2021-07-16 2024-07-12 深圳银兴智能数据有限公司 Unified data development and distributed scheduling system

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112559280B (en) * 2020-12-04 2023-08-22 国网安徽省电力有限公司信息通信分公司 Data full-link monitoring method based on data center station
CN112559280A (en) * 2020-12-04 2021-03-26 国网安徽省电力有限公司信息通信分公司 Data full link monitoring method based on data center station
CN112540975B (en) * 2020-12-29 2021-08-31 中科大数据研究院 Multi-source heterogeneous data quality detection method and system based on petri net
CN112540975A (en) * 2020-12-29 2021-03-23 中科院计算技术研究所大数据研究院 Multi-source heterogeneous data quality detection method based on petri net
CN112667728A (en) * 2021-01-06 2021-04-16 上海振华重工(集团)股份有限公司 Visual single-machine data acquisition method in wharf efficiency analysis
CN112667728B (en) * 2021-01-06 2023-11-21 上海振华重工(集团)股份有限公司 Visual single machine data acquisition method in wharf efficiency analysis
CN112910703A (en) * 2021-02-01 2021-06-04 中金云金融(北京)大数据科技股份有限公司 Offline task management platform
CN113157191A (en) * 2021-02-21 2021-07-23 上海帕科信息科技有限公司 Data visualization method based on OLAP system
CN113535837B (en) * 2021-07-16 2024-07-12 深圳银兴智能数据有限公司 Unified data development and distributed scheduling system
CN113868306A (en) * 2021-08-31 2021-12-31 云南昆钢电子信息科技有限公司 Data modeling system and method based on OPC-UA specification
CN114036031B (en) * 2022-01-05 2022-06-24 阿里云计算有限公司 Scheduling system and method for resource service application in enterprise digital middleboxes
CN114036031A (en) * 2022-01-05 2022-02-11 阿里云计算有限公司 Scheduling system and method for resource service application in enterprise digital middleboxes
CN114626822A (en) * 2022-03-22 2022-06-14 山东省国土测绘院 Full-link data integration method and system
CN114860833A (en) * 2022-05-30 2022-08-05 江苏顺骁工程科技有限公司 Data center platform applied to digital twin hydraulic engineering and data processing method
CN114860833B (en) * 2022-05-30 2023-08-11 江苏顺骁工程科技有限公司 Data center station and data processing method applied to digital twin hydraulic engineering

Similar Documents

Publication Publication Date Title
CN111984709A (en) Visual big data middle station-resource calling and algorithm
US10853387B2 (en) Data retrieval apparatus, program and recording medium
US7574379B2 (en) Method and system of using artifacts to identify elements of a component business model
US8671084B2 (en) Updating a data warehouse schema based on changes in an observation model
Paim et al. DWARF: An approach for requirements definition and management of data warehouse systems
CN112199433A (en) Data management system for city-level data middling station
CN112396404A (en) Data center system
US20080189308A1 (en) Apparatus and Methods for Displaying and Determining Dependency Relationships Among Subsystems in a Computer Software System
CN103092631B (en) A kind of data base application system development platform and development approach
CN112527774A (en) Data center building method and system and storage medium
CN111125068A (en) Metadata management method and system
CN112579563B (en) Power grid big data-based warehouse visualization modeling system and method
WO2024108973A1 (en) Credit assessment method for construction enterprises
CN114880405A (en) Data lake-based data processing method and system
CN116662441A (en) Distributed data blood margin construction and display method
CN115640300A (en) Big data management method, system, electronic equipment and storage medium
JP2007133624A (en) Information management method and device using connection relation information
CN116701358B (en) Data processing method and system
US20140149186A1 (en) Method and system of using artifacts to identify elements of a component business model
CN112784129A (en) Pump station equipment operation and maintenance data supervision platform
El Beggar et al. Towards an MDA-oriented UML profiles for data warehouses design and development
Liu Integrating process mining with discrete-event simulation modeling
US11216486B2 (en) Data retrieval apparatus, program and recording medium
Mao Construction of Intelligent Vocational Management Information System with R Programming
Sonnleitner et al. Persistence of workflow control data in temporal databases

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20201124)