CN114201465A - Data integration method, system, computer device and storage medium - Google Patents


Info

Publication number
CN114201465A
CN114201465A
Authority
CN
China
Prior art keywords
data
data integration
request
integration
container cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111521772.6A
Other languages
Chinese (zh)
Inventor
张乐
赵杏
肖钢
胡华林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kingdee Software China Co Ltd
Original Assignee
Kingdee Software China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kingdee Software China Co Ltd filed Critical Kingdee Software China Co Ltd
Priority to CN202111521772.6A priority Critical patent/CN114201465A/en
Publication of CN114201465A publication Critical patent/CN114201465A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/176Support for shared access to files; File sharing support
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/17Details of further file system functions
    • G06F16/172Caching, prefetching or hoarding of files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45595Network integration; Enabling network access in virtual machine instances

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to a data integration method, system, computer device, and storage medium. The method comprises the following steps: receiving a data integration request sent by a shared management console and caching it in a data request pool, the data integration request comprising a data provider identifier and an integration end address; for each container cluster in a plurality of container clusters, screening at least one candidate data integration request out of the data request pool according to the integration end address and the cluster address of the current container cluster; determining a target data integration request among the candidate data integration requests according to the amount of free resources of the current container cluster and the amount of execution resources required to execute the candidate requests; and integrating, by the current container cluster, the data provided by the data provider corresponding to the data provider identifier in response to the target data integration request. The method improves the scalability of data integration.

Description

Data integration method, system, computer device and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data integration method, system, computer device, and storage medium.
Background
With the development of computer technology, enterprises increasingly need to run data integration services. Data integration is the logical or physical collection of data of different sources, formats, and structures into an organic whole, thereby providing an enterprise with comprehensive data sharing.
Currently, when an enterprise builds a data integration service, it constructs an integration executor and a management console associated with the enterprise, and a single-point deployment architecture packages the two into one service, so that data integration requests are generated by the management console and executed by the integration executor within the same service.
However, because the integration executor and the management console are packaged into one service, they are tightly coupled, which reduces the scalability of the data integration system.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a data integration method, system, apparatus, computer device, storage medium, and computer program product capable of improving the scalability of data integration.
In a first aspect, the present application provides a data integration method. The method comprises the following steps:
receiving a data integration request sent by a shared management console, and caching the data integration request to a data request pool; the data integration request comprises a data provider identification and an integration end address;
for each container cluster in a plurality of container clusters, screening out at least one candidate data integration request from the data request pool according to the integration end address and the cluster address of the current container cluster;
determining a target data integration request in the candidate data integration requests according to the idle resource amount of the current container cluster and the execution resource amount required by executing the candidate data integration requests;
integrating, by the current container cluster, data provided by a data provider corresponding to the data provider identification in response to the target data integration request.
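The four claimed steps can be sketched as a request pool that caches incoming requests and screens candidates per cluster. This is an illustrative sketch only, not code from the patent; all class and field names (`DataIntegrationRequest`, `DataRequestPool`, the sample addresses) are assumptions chosen for the example.

```python
from dataclasses import dataclass, field

@dataclass
class DataIntegrationRequest:
    provider_id: str       # data provider identifier
    integration_addr: str  # integration end address pointing at a cluster
    resource_amount: int   # execution resources the request needs

@dataclass
class DataRequestPool:
    requests: list = field(default_factory=list)

    def cache(self, req: DataIntegrationRequest) -> None:
        # step 1: cache requests sent by the shared management console
        self.requests.append(req)

    def candidates_for(self, cluster_addr: str) -> list:
        # step 2: screen candidates whose integration end address
        # matches the cluster address of the current container cluster
        return [r for r in self.requests if r.integration_addr == cluster_addr]

pool = DataRequestPool()
pool.cache(DataIntegrationRequest("oa-app", "10.0.0.1", 2))
pool.cache(DataIntegrationRequest("erp-app", "10.0.0.2", 4))
print([r.provider_id for r in pool.candidates_for("10.0.0.1")])  # ['oa-app']
```

Steps 3 and 4 (resource-based target selection and execution) would then run inside each container cluster against its candidate list.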
In one embodiment, before receiving a data integration request sent by a shared management console and caching the data integration request into a data request pool, the method further comprises: determining a shared management console associated with the data request pool; wherein the shared management console comprises a master console and at least one slave console; determining whether the main console is in a normal operation state; if so, connecting the data request pool with the main console; if not, determining the respective corresponding operation state of each slave console, and connecting the data request pool with the slave console in the normal operation state.
In one embodiment, determining the target data integration request in the candidate data integration requests according to the amount of idle resources of the current container cluster and the amount of execution resources required for executing the candidate data integration requests includes: determining the respective required execution resource amount for executing each candidate data integration request; comparing the idle resource amount of the current container cluster with the execution resource amount, and taking the execution resource amount smaller than or equal to the idle resource amount as a target execution resource amount; and screening out target data integration requests from the candidate data integration requests according to the target execution resource amount.
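The comparison step in this embodiment — keeping only the candidates whose required execution resource amount does not exceed the cluster's free resource amount — can be sketched as below. Names and the tuple representation are illustrative assumptions, not part of the patent.

```python
def target_requests(candidates: list, free_amount: int) -> list:
    # keep candidates whose execution resource amount is less than or
    # equal to the current container cluster's amount of free resources
    return [(req_id, need) for req_id, need in candidates if need <= free_amount]

candidates = [("req-a", 3), ("req-b", 8), ("req-c", 5)]
print(target_requests(candidates, 5))  # [('req-a', 3), ('req-c', 5)]
```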
In one embodiment, after integrating the data provided by the data provider corresponding to the data provider identification in response to the target data integration request, the method further includes: releasing execution resources occupied when the target data integration request is executed; and updating the idle resource amount of the current container cluster according to the released execution resources.
In one embodiment, integrating, by the current container cluster, data provided by a data provider corresponding to the data provider identification in response to the target data integration request comprises: determining an un-started target container in the current container cluster; allocating execution resources required to execute the target data integration request to the target container; and starting the target container, executing the target data integration request based on the allocated execution resources through the target container, and integrating the data provided by the data provider corresponding to the data provider identification.
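The container-selection flow in this embodiment — find an un-started container, allocate execution resources to it, start it, and execute the request — can be sketched as follows. This is a toy model under assumed names; real container clusters (e.g. Kubernetes) manage allocation and startup very differently.

```python
class Container:
    def __init__(self, name: str):
        self.name = name
        self.started = False
        self.allocated = 0

    def run(self, request_id: str, resources: int) -> str:
        # allocate execution resources, start, then execute the request
        self.allocated = resources
        self.started = True
        return f"{self.name} integrated {request_id}"

def dispatch(cluster: list, request_id: str, resources: int) -> str:
    # pick an un-started target container from the current container cluster
    target = next(c for c in cluster if not c.started)
    return target.run(request_id, resources)

cluster = [Container("c0"), Container("c1")]
print(dispatch(cluster, "req-a", 2))  # c0 integrated req-a
print(dispatch(cluster, "req-b", 4))  # c1 integrated req-b
```

After a request completes, the subsequent embodiment releases the container's resources and returns it to the un-started state, which here would mean resetting `started` and `allocated`.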
In one embodiment, after integrating the data provided by the data provider corresponding to the current data provider identifier, the method further includes: and releasing the execution resources occupied by the target container, and updating the target container in the starting state into the non-starting state.
In one embodiment, each container cluster of the plurality of container clusters is a cluster that is hosted in a different public cloud environment.
In a second aspect, the application further provides a data integration system. The system comprises: the system comprises a shared management console, a data request pool and a container cluster;
the sharing management console is used for acquiring a data provider identifier and an integration end address and generating a data integration request according to the data provider identifier and the integration end address;
the data request pool is used for receiving and caching the data integration request;
the container cluster is used for determining a local cluster address and screening out at least one candidate data integration request from the cached data integration requests according to the integration end address and the cluster address;
the container cluster is used for determining a target data integration request in the candidate data integration requests according to the idle resource amount and the execution resource amount required by executing the candidate data integration requests;
the container cluster is used for responding to the target data integration request and integrating the data provided by the data provider corresponding to the data provider identification.
In one embodiment, the shared management console is further configured to, upon receiving a data provider identifier, determine configuration information corresponding to the data provider identifier and extract the corresponding integration end address from the configuration information; the configuration information is generated according to the integration end address of the container cluster.
In a third aspect, the present application further provides a data integration apparatus. The device comprises:
the acquisition module is used for receiving a data integration request sent by a shared management console and caching the data integration request to a data request pool; the data integration request comprises a data provider identification and an integration end address;
the selection module is used for screening out at least one candidate data integration request from the data request pool for each container cluster in the plurality of container clusters according to the integration end address and the cluster address of the current container cluster; determining a target data integration request in the candidate data integration requests according to the idle resource amount of the current container cluster and the execution resource amount required by executing the candidate data integration requests;
and the integration module is used for responding to the target data integration request through the current container cluster and integrating the data provided by the data provider corresponding to the data provider identification.
In a fourth aspect, the present application further provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the following steps when executing the computer program:
receiving a data integration request sent by a shared management console, and caching the data integration request to a data request pool; the data integration request comprises a data provider identification and an integration end address;
for each container cluster in a plurality of container clusters, screening out at least one candidate data integration request from the data request pool according to the integration end address and the cluster address of the current container cluster;
determining a target data integration request in the candidate data integration requests according to the idle resource amount of the current container cluster and the execution resource amount required by executing the candidate data integration requests;
integrating, by the current container cluster, data provided by a data provider corresponding to the data provider identification in response to the target data integration request.
In a fifth aspect, the present application further provides a computer-readable storage medium. The computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
receiving a data integration request sent by a shared management console, and caching the data integration request to a data request pool; the data integration request comprises a data provider identification and an integration end address;
for each container cluster in a plurality of container clusters, screening out at least one candidate data integration request from the data request pool according to the integration end address and the cluster address of the current container cluster;
determining a target data integration request in the candidate data integration requests according to the idle resource amount of the current container cluster and the execution resource amount required by executing the candidate data integration requests;
integrating, by the current container cluster, data provided by a data provider corresponding to the data provider identification in response to the target data integration request.
In a sixth aspect, the present application further provides a computer program product. The computer program product comprising a computer program which when executed by a processor performs the steps of:
receiving a data integration request sent by a shared management console, and caching the data integration request to a data request pool; the data integration request comprises a data provider identification and an integration end address;
for each container cluster in a plurality of container clusters, screening out at least one candidate data integration request from the data request pool according to the integration end address and the cluster address of the current container cluster;
determining a target data integration request in the candidate data integration requests according to the idle resource amount of the current container cluster and the execution resource amount required by executing the candidate data integration requests;
integrating, by the current container cluster, data provided by a data provider corresponding to the data provider identification in response to the target data integration request.
According to the data integration method, system, apparatus, computer device, storage medium, and computer program product above, a data integration request sent by the shared management console is cached in the data request pool. Because requests are cached in the pool, each of the plurality of container clusters can screen at least one candidate data integration request out of the pool according to the integration end address of each request and its own cluster address, and then determine a target data integration request among the candidates according to its own amount of free resources and the amount of execution resources required to execute the candidates. Once the target data integration request for each container cluster is determined, it can be issued to that cluster, so that each cluster responds to its received target request and integrates the data provided by the data provider corresponding to the data provider identifier. Because the data request pool caches the requests sent by the shared management console and dispatches them to the matching container clusters according to each cluster's address and each request's integration end address, the pool acts as message middleware that manages data integration requests, decoupling the shared management console from the container clusters and thereby improving the scalability of data integration.
In addition, because the currently executable target data integration request is determined from the cluster's amount of free resources and the amount of execution resources required to execute the candidate requests, these two quantities bound the maximum number of data integration requests that a single container cluster executes concurrently. This keeps the cluster's load near a balanced operating point and reduces the probability that the cluster crashes under high concurrency.
Drawings
FIG. 1 is a diagram of an application environment of a data integration method in one embodiment;
FIG. 2 is a schematic flow chart diagram illustrating a data integration method in one embodiment;
FIG. 3 is a diagram of the overall architecture of the data integration system in one embodiment;
FIG. 4 is a block diagram showing the structure of a data integration apparatus according to an embodiment;
FIG. 5 is a block diagram showing the structure of a data integration apparatus according to another embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The data integration method provided by the application can be applied to the application environment shown in fig. 1, in which the terminal 102 communicates with the server 104 via a network. The terminal 102 hosts a shared management console, which can obtain the data provider identifier and the integration end address, generate a data integration request from them, and send the request to the server 104. A data request pool and at least one container cluster are deployed in the server 104. The server 104 may cache the received data integration request in the data request pool, so that the pool can screen out the target data integration request corresponding to each container cluster and send it to that cluster, and the cluster can integrate the data provided by the data provider according to the received target data integration request. The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smart phone, tablet computer, or portable wearable device; the server 104 may be implemented by an independent server or by a server cluster formed of multiple servers.
In one embodiment, as shown in fig. 2, a data integration method is provided, which is described by taking the method as an example applied to the server in fig. 1, and the data integration method includes the following steps:
step 202, receiving a data integration request sent by a shared management console, and caching the data integration request to a data request pool; the data integration request includes a data provider identification and an integration end address.
The shared management console is a platform that provides data providers with operations related to data integration; for example, a data provider can configure data integration tasks and grant data integration authorization through the shared management console. A data provider is a participant in data integration that supplies the data to be integrated. For example, when enterprise A wants to integrate the data of its OA application with that of its ERP application, the OA application and the ERP application of enterprise A are the data providers.
The data request pool receives the data integration requests sent by the shared management console and stores them. Each data integration request comprises a data provider identifier, which is information uniquely identifying one data provider; by resolving the identifier, the data provided by the corresponding data provider can be integrated. The data integration request further includes an integration end address, which is an address that points to a container cluster. Before the data provided by a data provider is integrated, the data provider can be configured, and the integration end address corresponding to that provider is determined through the configuration process. The cluster address is an address that points to the container cluster and may be, for example, the IP address of the cluster.
Specifically, when at least one data integration request sent by the shared management console is received, the server may cache all the received data integration requests in the data request pool. Each data integration request comprises a data provider identifier and the integration end address configured by the data provider. The integration end address may be identical to the cluster address of a container cluster, or it may be a transformation of the cluster address; for example, it may be an encrypted form of the cluster address.
Step 204, for each container cluster in the plurality of container clusters, screening out at least one candidate data integration request from the data request pool according to the integration end address and the cluster address of the current container cluster.
The container cluster logically or physically integrates data of different sources and formats to achieve unified management and sharing of the data. A container cluster can integrate data from the database systems of the departments within one enterprise or organization, or from the database systems of different enterprises or organizations. The container cluster may be deployed in a public cloud environment and may comprise a plurality of containers. Each container can respond to a different task: different execution resources can be allocated to different containers, so that different data integration requests are served by different execution resources. In this way the execution resources required by the data integration requests are isolated from one another, avoiding contention for execution resources between requests. An execution resource is a resource required to execute a data integration request; for example, it may be a CPU resource, a GPU resource, or a memory resource.
Specifically, each container cluster has a unique cluster address for identifying the current container cluster, at least one container is deployed under each container cluster, and the server can determine a candidate data integration request corresponding to each container cluster according to the cluster address of each container cluster and an integration end address corresponding to each data integration request in the data request pool.
In one embodiment, for each container cluster in the plurality of container clusters, the server may screen out candidate data integration requests matching the current container cluster from the data request pool according to the integration end address of the data integration request and the cluster address of the current container cluster. For example, the server may determine a matching integration end address that matches the cluster address of the current container cluster and may use the data integration request with the matching integration end address as a candidate data integration request. The integration end address which is the same as the cluster address of the current container cluster can be used as the matching integration end address, and the integration end address which is decoded and is the same as the cluster address of the current container cluster can be used as the matching integration end address.
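The address matching described here — an integration end address matches either directly or after decoding back to the cluster address — can be sketched as follows. The patent does not specify the transformation; hex encoding below is a stand-in assumption for an encrypted or otherwise transformed address, and all names are illustrative.

```python
import binascii

def encode_addr(cluster_addr: str) -> str:
    # stand-in for the transformed integration end address; the patent
    # only says the address may be an encrypted form of the cluster address
    return binascii.hexlify(cluster_addr.encode()).decode()

def matches(integration_addr: str, cluster_addr: str) -> bool:
    # a matching integration end address is either identical to the
    # cluster address or decodes back to it
    if integration_addr == cluster_addr:
        return True
    try:
        return binascii.unhexlify(integration_addr).decode() == cluster_addr
    except (binascii.Error, UnicodeDecodeError):
        return False

print(matches("10.0.0.1", "10.0.0.1"))              # True
print(matches(encode_addr("10.0.0.1"), "10.0.0.1"))  # True
```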
Step 206, determining a target data integration request in the candidate data integration requests according to the free resource amount of the current container cluster and the execution resource amount required for executing the candidate data integration requests.
The server updates and monitors the amount of free resources of the current container cluster in real time.
Specifically, when the candidate data integration requests corresponding to each container cluster have been determined, the server may determine the target data integration request of each cluster according to that cluster's amount of free resources and the amount of execution resources required to execute its candidate requests. For clarity, the following takes the current container cluster as an example, where the current container cluster is any one of the plurality of container clusters. Having determined the candidate data integration requests of the current container cluster, the server can determine the cluster's amount of free resources and the amount of execution resources needed by each candidate request, and screen the target data integration requests out of the candidates by comparing the two. The amount of execution resources is the amount of computer resources required to execute a candidate data integration request; the amount of free resources is the amount of computer resources that are currently unused.
In one embodiment, the server may compare the amount of free resources of the current container cluster with the amount of execution resources of the candidate data integration request, and regard the amount of execution resources less than or equal to the amount of free resources of the current container cluster as a target amount of execution resources, and regard the candidate data integration request with the target amount of execution resources as a target data integration request.
In one embodiment, for the current container cluster, the server may filter out one target data integration request or may filter out multiple target data integration requests. When it is determined that the execution resources required to execute the current candidate data integration request are the same as the amount of free resources of the current container cluster, the server may only determine that the current candidate data integration request is the target data integration request. When the execution resources corresponding to the multiple candidate data integration requests are summed to obtain the sum of the execution resources, and the sum of the execution resources is less than or equal to the amount of idle resources of the current container cluster, the multiple candidate data integration requests can be respectively used as target data integration requests.
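Selecting multiple target requests whose summed execution resources stay within the free resource amount, as this embodiment describes, can be sketched with a simple greedy accumulation. This is one possible reading offered as an illustration; the patent does not prescribe a selection order, and all names are assumptions.

```python
def select_targets(candidates: list, free_amount: int) -> tuple:
    """Greedily admit candidate requests while the running sum of their
    execution resource amounts stays within the amount of free resources."""
    selected, used = [], 0
    for req_id, need in candidates:
        if used + need <= free_amount:
            selected.append(req_id)
            used += need
    return selected, used

# r2 is skipped because 3 + 4 would exceed the free amount of 6
print(select_targets([("r1", 3), ("r2", 4), ("r3", 2)], 6))  # (['r1', 'r3'], 5)
```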
In one embodiment, the data integration request includes an execution resource amount. When the shared management console needs to generate a data integration request, it may predict the amount of execution resources required to integrate the data to be integrated from the volume of that data, and generate the request from the predicted execution resource amount, the data provider identifier, and the integration end address.
In one embodiment, the data integration request includes an execution resource amount, and when the shared management console generates the data integration request, a user may configure the execution resources through the console, so that the console generates the request from the configured execution resource amount, the data provider identifier, and the integration end address.
At step 208, data provided by the data provider corresponding to the data provider identification is integrated by the current container cluster in response to the target data integration request.
Specifically, when a target data integration request in the current container cluster is determined, the server may execute the target data integration request through the container in the current container cluster, and when the target data integration request is executed through the container, the container may determine data provided by the data provider according to the data provider identification in the target data integration request and integrate the data provided by the data provider.
In the data integration method, the data integration request sent by the shared management console is obtained and cached in the data request pool. Because the data integration requests are cached in the data request pool, each of the multiple container clusters can screen at least one candidate data integration request from the data request pool according to the integration end address of each data integration request and its own cluster address, and then determine a target data integration request among the candidates according to its own amount of free resources and the amount of execution resources required to execute each candidate. Once the target data integration request corresponding to each container cluster is determined, it can be issued to that cluster, so that each container cluster can respond to the received target data integration request and integrate the data provided by the data provider corresponding to the data provider identifier. Because the data request pool caches the requests sent by the shared management console and dispatches them to the corresponding container clusters according to each cluster's address and each request's integration end address, the data request pool can act as message middleware that manages the data integration requests, thereby decoupling the shared management console from the container clusters and improving the extensibility of data integration.
In addition, because the currently executable target data integration request is determined according to the amount of idle resources of the container cluster and the amount of execution resources required for executing the candidate data integration request, the maximum concurrency number of the data integration requests executed by a single container cluster can be controlled through the amount of idle resources and the amount of execution resources, so that the load of the container cluster is ensured to be stabilized at a balanced optimal point, and the probability of the container cluster crashing due to high concurrency is reduced.
In one embodiment, before receiving a data integration request sent by a shared management console and caching the data integration request into a data request pool, the method includes a shared management console association step, where the shared management console association step includes: determining a shared management console associated with the data request pool; the shared management console comprises a main console and at least one slave console; determining whether the main console is in a normal operation state; if so, connecting the data request pool with the main console; if not, determining the respective corresponding operation state of each slave console, and connecting the data request pool with the slave console in the normal operation state.
Specifically, before caching the data integration request generated by the shared management console into the data request pool, the shared management console needs to be connected with the data request pool. The server can obtain the IP address of the sharing management console and determine the sharing management console to be connected according to the IP address. The shared management console comprises a main console and at least one slave console. The server can judge the operation state of the main console and connect the main console with the data request pool when the main console is in a normal operation state, so that a data integration request can be generated by the main console subsequently, and the generated data integration request is sent to the data request pool. When the main console is in an abnormal operation state, the server can determine the slave console in a normal operation state and connect the slave console with the data request pool, so that a data integration request can be generated by the slave console subsequently, and the generated data integration request is sent to the data request pool. The slave console may specifically be a standby console of the master console, and specific information of the shared management console connected to the data request pool may be determined by the IP address of the shared management console, so that the probability of mistakenly connecting the shared management console may be reduced based on the specific information.
In one embodiment, the shared management console adopts a high-performance reverse proxy server such as Nginx. When Nginx is used for load balancing, it is combined with the Keepalived software to provide high availability across the Nginx servers: Keepalived checks the state of each Nginx server and, if one of them goes down or fails, removes the failed node from the system. Once the failure has been repaired manually, the recovered server automatically rejoins the server cluster.
In one embodiment, any slave console in the normal operation state can be selected as the new master console, while the remaining consoles continue to act as slave consoles. To better identify the master console within the shared management console, the master console and the at least one slave console included in the shared management console are marked separately.
In one embodiment, when the master console is switched from the abnormal operation state to the normal operation state, the connection between the data request pool and the slave console is disconnected, and the connection between the data request pool and the master console is established.
In this embodiment, when the master console is in the abnormal operation state, the data request pool is connected to the slave console in the normal operation state, so that a phenomenon that a data integration request cannot be generated due to invalidation of the master console can be avoided, and high availability of a system framework is ensured.
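The console-association step above can be sketched as a small failover routine: connect the data request pool to the master console if it is healthy, otherwise to the first healthy slave console. All names and the health-check callback below are illustrative assumptions:

```python
def choose_console(master, slaves, is_healthy):
    """Return the console the data request pool should connect to,
    or None if no console is in a normal operation state."""
    if is_healthy(master):
        return master
    for slave in slaves:          # fall back to the first healthy slave
        if is_healthy(slave):
            return slave
    return None

# Simulated operation states: the master and slave-1 have failed.
healthy = {"master": False, "slave-1": False, "slave-2": True}
print(choose_console("master", ["slave-1", "slave-2"], healthy.get))  # → slave-2
```

When the master later returns to a normal operation state, re-running the selection naturally switches the connection back to it, matching the behavior described above.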
In one embodiment, determining a target data integration request of the candidate data integration requests according to the amount of free resources of the current container cluster and the amount of execution resources required for executing the candidate data integration requests comprises: determining the respective required execution resource amount for executing each candidate data integration request; comparing the idle resource amount and the execution resource amount of the current container cluster, and taking the execution resource amount smaller than or equal to the idle resource amount as a target execution resource amount; and screening out target data integration requests from the candidate data integration requests according to the target execution resource amount.
Specifically, when at least one candidate data integration request is obtained, the server may compare the amount of idle resources of the current container cluster with the amount of execution resources of each candidate data integration request to obtain a comparison result corresponding to each candidate data integration request. The comparison result may specifically be that the execution resource amount is greater than the idle resource amount, or that the execution resource amount is less than or equal to the idle resource amount. And the server screens a target data integration request from the candidate data integration requests according to the respective corresponding comparison result of each candidate data integration request.
In one embodiment, the candidate data integration requests may include multiple requests whose execution resource amount is less than or equal to the amount of free resources, in which case the server randomly selects one of them as the target data integration request. For example, suppose the server obtains candidate data integration requests 1, 2, and 3 for the current container cluster and compares the cluster's amount of free resources with the execution resource amount of each candidate at the same time. If the comparison shows that the execution resource amounts of candidates 1 and 2 are smaller than the amount of free resources while that of candidate 3 is greater, the server may take either candidate 1 or candidate 2 as the target data integration request.
In this embodiment, the idle resource amount of the current container cluster is compared with the execution resource amount of each candidate data integration request, and the target data integration request is screened from the candidate data integration requests according to the comparison result, so that the acquisition efficiency of the target data integration request in the current container cluster can be improved.
In one embodiment, when at least one candidate data integration request corresponding to the current container cluster is acquired, the server may store each candidate data integration request into the storage queue, and traverse the candidate data integration requests in the storage queue according to the arrangement order of the candidate data integration requests in the storage queue. And for the current candidate data integration request traversed currently, the server compares the execution resource quantity of the current candidate data integration request with the idle resource quantity of the current container cluster, and takes the current candidate data integration request as a target data integration request and stops traversing when the execution resource quantity of the current data integration request is less than or equal to the idle resource quantity of the current container cluster.
For example, when the order of the candidate data integration requests in the storage queue is candidate data integration request 1, candidate data integration request 2, candidate data integration request 3, the server first compares the execution resource amount of candidate 1 with the free resource amount of the current container cluster. If candidate 1 requires more than the free amount, the server compares candidate 2; if candidate 2 requires less than or equal to the free amount, candidate 2 becomes the target data integration request, the traversal stops, and candidate 3 is never compared. It should be noted that the server may cache the candidate data integration requests into the storage queue by generation time, storing the earliest-generated candidates at the head of the queue and the later ones at the tail. The server may also cache them by priority, storing high-priority candidates at the head of the queue and low-priority candidates at the tail. The server preferentially extracts candidates from the head of the storage queue and compares each extracted candidate with the amount of free resources of the current container cluster.
In the embodiment, the traversing process is immediately stopped after the target data integration request is determined, so that unnecessary comparison processes can be reduced, and the calculation resources consumed by the unnecessary comparison processes are saved.
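The first-fit traversal with early stop can be sketched as follows; the queue contents and the use of `deque` are illustrative assumptions:

```python
from collections import deque

def first_fit(queue, free_resources):
    """Traverse the storage queue in order and return the ID of the
    first request whose execution resources fit, or None. Stopping at
    the first hit avoids comparing the remaining candidates."""
    for execution_resources, request_id in queue:
        if execution_resources <= free_resources:
            return request_id
    return None

# Head of the queue first: request 1 (40) does not fit, request 2 (16) does.
queue = deque([(40, "req-1"), (16, "req-2"), (8, "req-3")])
print(first_fit(queue, 32))  # → req-2 (req-3 is never compared)
```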
In one embodiment, after integrating the data provided by the data provider corresponding to the data provider identification in response to the target data integration request, the method further comprises: releasing execution resources occupied when the target data integration request is executed; and updating the idle resource amount of the current container cluster according to the released execution resources.
Specifically, after the current container cluster responds to the target data integration request and integrates data provided by the data provider corresponding to the data provider identifier, the execution resources occupied when the target data integration request is executed are released. At this time, the server updates the amount of the idle resources of the current container cluster according to the released execution resources and the amount of the idle resources of the current container cluster, so as to obtain the updated amount of the idle resources. The updated amount of free resources may be used to determine a next target data integration request.
In one embodiment, the amount of execution resources required to execute each candidate data integration request is determined; the free resource amount of the current container cluster is compared with each execution resource amount, and any candidate whose execution resource amount is greater than the free resource amount is stored in a waiting queue. When the execution resources occupied by an executing target data integration request are released and the free resource amount of the current container cluster has been updated, the candidates in the waiting queue are traversed in their queue order to obtain the next target data integration request.
For example, suppose the total amount of free resources of the computer device deployed in the current container cluster is 8 CPU cores and 32 GB of memory. The computer device responds to candidate data integration request 1, which requires 4 CPU cores and 20 GB to execute; the data provided by provider 1, corresponding to candidate data integration request 1, is now being integrated. The server then responds to candidate data integration request 2, which requires 4 CPU cores and 10 GB; the data provided by provider 2 is now also being integrated. When the server receives candidate data integration request 3, which requires 4 CPU cores and 8 GB, it finds that the remaining resources (0 cores and 2 GB) are insufficient, so the data provided by provider 3 cannot yet be integrated, and candidate data integration request 3 is stored in the waiting queue.
If the data provided by provider 1 reaches the integration-complete state, the execution resources occupied by candidate data integration request 1 are released, and the updated amount of free resources of the current container cluster becomes 4 CPU cores and 22 GB of memory. The computer device takes candidate data integration request 3 from the waiting queue and compares its execution resource amount (4 cores, 8 GB) with the cluster's free resources; since the required amount is now smaller than the free amount, the computer device responds to candidate data integration request 3 and integrates the data provided by provider 3.
In this embodiment, by releasing the execution resources occupied when executing the target data integration request, the unprocessed data integration request can be processed in time based on the released resources, so that not only the utilization rate of the execution resources is improved, but also the processing efficiency of the data integration request is improved.
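The release-and-retry cycle in the example above can be sketched with a single scalar resource amount (the class name, method names, and numbers are illustrative assumptions, not the patent's implementation):

```python
from collections import deque

class Cluster:
    def __init__(self, free):
        self.free = free            # current amount of free resources
        self.waiting = deque()      # requests parked for lack of resources

    def submit(self, request_id, needed):
        if needed <= self.free:
            self.free -= needed     # request starts executing
            return True
        self.waiting.append((request_id, needed))
        return False                # parked in the waiting queue

    def release(self, needed):
        """Return resources from a finished request, then retry the
        waiting queue in arrival order with the updated free amount."""
        self.free += needed
        still_waiting = deque()
        while self.waiting:
            request_id, amount = self.waiting.popleft()
            if amount <= self.free:
                self.free -= amount     # the parked request can now run
            else:
                still_waiting.append((request_id, amount))
        self.waiting = still_waiting

cluster = Cluster(free=32)
cluster.submit("req-1", 20)   # running, 12 left
cluster.submit("req-2", 10)   # running, 2 left
cluster.submit("req-3", 8)    # parked: 8 > 2
cluster.release(20)           # req-1 finishes; req-3 now starts
print(cluster.free)           # → 14
```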
In one embodiment, integrating, by the current container cluster, data provided by a data provider corresponding to the data provider identification in response to the target data integration request comprises: determining an un-started target container in the current container cluster; allocating execution resources required for executing the target data integration request to the target container; and starting the target container, executing the target data integration request based on the allocated execution resources through the target container, and integrating the data provided by the data provider corresponding to the data provider identification.
Specifically, a plurality of containers may be pre-deployed in the current container cluster, and when the current container cluster receives the target data integration request, the server may determine an un-started container in the current container cluster and use the un-started container as the target container. The server determines the execution resource amount required for executing the target data integration request, and after allocating the execution resource amount required for executing the target data integration request to the target container, the target container is started to execute the target data integration request through the target container. That is, the data provided by the data provider corresponding to the data provider identification is integrated through the target container and the database network channel.
In one embodiment, after integrating the data provided by the data provider corresponding to the data provider identifier through the target container and the database network channel, the method further includes: and releasing the execution resources occupied by the target container, and updating the target container in the starting state into the non-starting state.
In this embodiment, by deploying a plurality of containers in the container cluster and allocating different execution resources to different containers, a plurality of target data integration requests can be executed simultaneously by different containers, thereby improving the execution efficiency of the target data integration requests. In addition, because different execution resources are allocated to different containers, the execution resources can be isolated, so that the situation that a plurality of data integration requests compete for the same execution resource due to insufficient execution resources is avoided.
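One common way to pin execution resources to a container, consistent with the Kubernetes deployment described later, is to set equal resource requests and limits on the Pod. The sketch below only builds such a Pod specification as a plain dictionary; the image name and task ID are hypothetical, and this is not presented as the patent's actual implementation:

```python
def build_pod_spec(task_id, image, cpu_cores, memory):
    """Build a Kubernetes Pod spec whose container is allocated a fixed
    amount of execution resources (requests == limits), isolating it
    from other integration containers in the cluster."""
    quantity = {"cpu": str(cpu_cores), "memory": memory}
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": f"integration-{task_id}"},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "integrator",
                "image": image,  # hypothetical integration-executor image
                "resources": {"requests": quantity, "limits": quantity},
            }],
        },
    }

spec = build_pod_spec("req-3", "example/integrator:latest", 4, "8Gi")
print(spec["spec"]["containers"][0]["resources"]["limits"])  # → {'cpu': '4', 'memory': '8Gi'}
```

Because requests and limits are equal, the scheduler reserves exactly the allocated amount for the container, which prevents concurrent integration requests from competing for the same execution resources.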
In one embodiment, as shown in FIG. 3, there is provided a data integration system comprising: the system comprises a shared management console, a data request pool and a container cluster; the sharing management console is used for acquiring the data provider identifier and the integration end address and generating a data integration request according to the data provider identifier and the integration end address; the data request pool is used for receiving and caching the data integration request; the container cluster is used for determining a local cluster address and screening out at least one candidate data integration request from the cached data integration requests according to the integration end address and the cluster address; the container cluster is used for determining a target data integration request in the candidate data integration requests according to the idle resource amount and the execution resource amount required by executing the candidate data integration requests; and the container cluster is used for integrating the data provided by the data provider corresponding to the data provider identification in response to the target data integration request.
Specifically, a configuration interface for data integration requests can be displayed through the shared management console. The configuration interface is provided by a public cloud, and subscription authorization, data integration configuration, integration scheduling task configuration, and the like can be performed through it. A corresponding data integration request is generated from the completed configuration and sent to the data request pool, which stores the received request. The container cluster is used for screening out at least one candidate data integration request from the data request pool according to its local cluster address and the integration end address of each data integration request; the container cluster compares its own amount of free resources with the amount of execution resources required to execute each candidate, determines the target data integration request among the candidates, and pulls the target data integration request from the data request pool. The container cluster may then start a target container and, through it, respond to the target data integration request by integrating the data provided by the data provider corresponding to the data provider identifier. The target container is a not-yet-started container deployed in the container cluster.
The container cluster can be a Kubernetes (K8s) container cluster management system. Kubernetes is an open-source container orchestration engine that provides a distributed architecture based on containerization technology; it manages the Pods in which applications run, and can automatically deploy the container cluster, check the running state of applications, and so on. A Pod is similar to a lightweight virtual machine but with looser isolation: Pods can share an operating system kernel among application programs, while each Pod has its own file system, CPU, memory, process space, and the like. An integration task is run by starting a Pod in the container cluster; the Pod can access the data provider only through a database connection, and thereby completes the integration task.
The data request pool can be message middleware, for example a Kafka distributed messaging system. Kafka offers horizontal scalability and high throughput, decouples the shared management console from the integration executors deployed in the container clusters, and at the same time applies flow control (rate limiting) to the data integration requests entering a container cluster.
In one embodiment, multiple clusters of containers may communicate through a shared management console. For example, referring to fig. 3, when the provider 1 and the provider 2 perform a configuration operation through a configuration interface provided by a public cloud environment, the shared management console may generate a data integration request 1 corresponding to the provider 1 and a data integration request 2 corresponding to the provider 2 according to the configuration information, and send the data integration request 1 and the data integration request 2 to a data request pool (message middleware) for storage. If the address of the integration end in the data integration request 1 is the same as the address of the integration end of the container cluster a, and the address of the integration end in the data integration request 2 is the same as the address of the integration end of the container cluster B, the data integration request 1 in the data request pool can be used as a candidate data integration request of the container cluster a, and the data integration request 2 in the data request pool can be used as a candidate data integration request of the container cluster B. If the free resource amount of the container cluster A is larger than or equal to the execution resource amount required by executing the data integration request 1, responding to the data integration request 1 through the container cluster A; similarly, if the amount of free resources of the container cluster B is greater than or equal to the amount of execution resources required to execute the data integration request 2, the data integration request 2 is responded to by the container cluster B.
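The address-based routing in the example above amounts to each cluster filtering the shared pool for requests whose integration end address matches its own cluster address. The sketch below illustrates this filtering; the field names and addresses are made up for illustration:

```python
def candidates_for(cluster_address, request_pool):
    """Select the candidate data integration requests for one cluster:
    only requests whose integration end address matches the cluster's
    own address are taken from the shared request pool."""
    return [r for r in request_pool if r["integration_end"] == cluster_address]

pool = [
    {"id": "req-1", "provider": "provider-1", "integration_end": "cluster-a"},
    {"id": "req-2", "provider": "provider-2", "integration_end": "cluster-b"},
]
print([r["id"] for r in candidates_for("cluster-a", pool)])  # → ['req-1']
```

In the running example, container cluster A therefore sees only data integration request 1 and container cluster B only data integration request 2, after which each cluster applies its own free-resource check before executing.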
For example, container cluster a may be deployed in a public cloud a, and container cluster B may be deployed in a public cloud B.
In one embodiment, the shared management console is connected with each container cluster through the data request pool, so that the shared management console and the container clusters can be decoupled through the data request pool, the decoupled shared management console and the decoupled container clusters can be subsequently and respectively expanded, and the expansibility of the data integration system is improved.
In this embodiment, by obtaining the data provider identifier and the integration end address, a data integration request may be generated by the shared management console, and the generated data integration request is cached in the data request pool. By caching the data integration requests to a data request pool, each container cluster in the container clusters can screen at least one candidate data integration request from the data request pool according to an integration end address of the data integration request and a local cluster address, and then determine a target data integration request in the candidate data integration requests according to a local idle resource amount and an execution resource amount required by executing the candidate data integration requests. By determining the target data integration request corresponding to each container cluster, the target data integration request can be issued to the corresponding container cluster, so that each container cluster can respond to the received target data integration request and integrate the data provided by the data provider corresponding to the data provider identifier. Because the shared management console is connected with each container cluster through the data request pool, the shared management console and the container clusters can be decoupled through the data request pool, and therefore the expansibility of the data integration system is improved.
In addition, because the currently executable target data integration request is determined according to the amount of idle resources of the container cluster and the amount of execution resources required for executing the candidate data integration request, the maximum concurrency number of the data integration requests executed by a single container cluster can be controlled through the amount of idle resources and the amount of execution resources, so that the load of the container cluster is ensured to be stabilized at a balanced optimal point, and the probability of the container cluster crashing due to high concurrency is reduced.
In one embodiment, the shared management console is further configured to, upon receiving the data provider identifier, determine configuration information corresponding to the data provider identifier and extract the corresponding integration end address from the configuration information, where the configuration information is generated according to the integration end address of the container cluster.
Specifically, when the data provider needs to be configured through the shared management console, the shared management console may obtain the data provider identifier entered by the data provider and obtain the initial configuration information corresponding to that identifier. The initial configuration information includes a default integration end address, which is the same as the integration end address of one of the multiple container clusters. The shared management console may also obtain the remaining configuration information entered by the data provider, such as authorization information for data integration and data subscription information, merge it with the initial configuration information to obtain the final configuration information, and generate a data integration request based on the final configuration information. The initial configuration information may be generated automatically for the data provider, without the data provider configuring it manually.
In this embodiment, because the integration end address is generated automatically, the data provider no longer needs to know in advance the public cloud environment where the integration executor is located and configure the integration end address accordingly, as in the conventional method; instead, the automatically provided default integration end address can simply be used as the final integration end address, which greatly reduces the configuration difficulty for the data provider and improves configuration efficiency.
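The merging of auto-generated initial configuration with user-entered configuration can be sketched as below; the default address, field names, and subscription value are hypothetical:

```python
DEFAULT_INTEGRATION_END = "cluster-a"  # address of one deployed cluster (assumed)

def initial_config(provider_id):
    """Auto-generated initial configuration: the data provider does not
    need to know the public cloud environment, only the default address."""
    return {"provider_id": provider_id,
            "integration_end": DEFAULT_INTEGRATION_END}

def build_request(provider_id, extra_config):
    """Merge the remaining user-entered configuration (authorization,
    subscription, ...) with the initial configuration."""
    return {**initial_config(provider_id), **extra_config}

req = build_request("provider-1", {"subscription": "orders"})
print(req["integration_end"])  # → cluster-a
```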
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in the sequence indicated by the arrows, they are not necessarily executed in that sequence. Unless explicitly stated otherwise, there is no strict ordering restriction on these steps, and they may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present application further provides a data integration apparatus for implementing the above-mentioned data integration method. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the data integration device provided below can be referred to the limitations of the data integration method in the foregoing, and details are not described herein again.
In one embodiment, as shown in fig. 4, there is provided a data integration apparatus 400 comprising: an acquisition module 402, a selection module 404, and an integration module 406, wherein:
an obtaining module 402, configured to receive a data integration request sent by a shared management console, and cache the data integration request to a data request pool; the data integration request comprises a data provider identification and an integration end address;
a selecting module 404, configured to, for each container cluster in the multiple container clusters, screen out at least one candidate data integration request from the data request pool according to the integration end address and the cluster address of the current container cluster; determining a target data integration request in the candidate data integration requests according to the idle resource amount of the current container cluster and the execution resource amount required by executing the candidate data integration requests;
an integration module 406, configured to integrate, through the current container cluster, data provided by the data provider corresponding to the data provider identifier in response to the target data integration request.
The above data integration apparatus caches each data integration request sent by the shared management console to the data request pool. Each container cluster among the plurality of container clusters can then screen at least one candidate data integration request from the pool according to the integration end address of each request and its own cluster address, and determine a target data integration request among the candidates according to its own idle resource amount and the execution resource amount required to execute each candidate. Each target data integration request is issued to the corresponding container cluster, which responds to it by integrating the data provided by the data provider corresponding to the data provider identifier. Because requests are cached in the data request pool and dispatched according to each cluster's address and each request's integration end address, the pool acts as message middleware that manages the data integration requests, decoupling the shared management console from the container clusters and improving the extensibility of data integration.
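The pool-as-middleware idea can be illustrated with a minimal in-memory sketch. The classes and field names below are assumptions for illustration, not the disclosed implementation; a real system would likely use a message queue rather than a Python list.

```python
# Illustrative sketch of the data request pool decoupling the console
# from the container clusters; all names are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class IntegrationRequest:
    provider_id: str
    integration_end_address: str  # set by the shared management console
    required_resources: int       # execution resources needed to run it

@dataclass
class RequestPool:
    _cached: List[IntegrationRequest] = field(default_factory=list)

    def cache(self, request: IntegrationRequest) -> None:
        # The console only talks to the pool, never to a cluster directly.
        self._cached.append(request)

    def candidates_for(self, cluster_address: str) -> List[IntegrationRequest]:
        # Each cluster screens out the requests addressed to it by matching
        # the integration end address against its own cluster address.
        return [r for r in self._cached
                if r.integration_end_address == cluster_address]

pool = RequestPool()
pool.cache(IntegrationRequest("p1", "cluster-a", 4))
pool.cache(IntegrationRequest("p2", "cluster-b", 2))
```

With this shape, adding a new container cluster requires no change on the console side: the new cluster simply starts screening the pool with its own address.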
In one embodiment, as shown in FIG. 5, another data integration apparatus 500 is provided, comprising: an acquisition module 502, a selection module 504, an integration module 506, and a connection module 508, wherein:
an obtaining module 502, configured to receive a data integration request sent by a shared management console, and cache the data integration request to a data request pool; the data integration request comprises a data provider identification and an integration end address;
a selecting module 504, configured to, for each container cluster in the multiple container clusters, screen out at least one candidate data integration request from the data request pool according to the integration end address and the cluster address of the current container cluster; determining a target data integration request in the candidate data integration requests according to the idle resource amount of the current container cluster and the execution resource amount required by executing the candidate data integration requests;
an integration module 506, configured to integrate, through the current container cluster, data provided by the data provider corresponding to the data provider identifier in response to the target data integration request.
A connection module 508, configured to determine a shared management console associated with the data request pool, the shared management console comprising a master console and at least one slave console; determine whether the master console is in a normal operation state; if so, connect the data request pool with the master console; if not, determine the operation state of each slave console and connect the data request pool with a slave console that is in the normal operation state.
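The master/slave failover logic of the connection module can be sketched as below. The function name and the `is_healthy` health-check callback are illustrative assumptions.

```python
# Illustrative sketch of console failover; `is_healthy` is an assumed
# health-check callback, not part of the original disclosure.
from typing import Callable, List, Optional

def pick_console(master: str,
                 slaves: List[str],
                 is_healthy: Callable[[str], bool]) -> Optional[str]:
    """Connect to the master console if it operates normally; otherwise
    fall back to the first slave console in a normal operation state."""
    if is_healthy(master):
        return master
    for slave in slaves:
        if is_healthy(slave):
            return slave
    return None  # no console available; the caller must handle this case
```

A slave console thus only takes over when the master is detected as abnormal, which keeps the request pool reachable from at most one console at a time.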
In one embodiment, the selection module 504 further includes a comparison module 5041, configured to determine the execution resource amount required to execute each candidate data integration request; compare the idle resource amount of the current container cluster with each execution resource amount, and take each execution resource amount that is smaller than or equal to the idle resource amount as a target execution resource amount; and screen out the target data integration requests from the candidate data integration requests according to the target execution resource amounts.
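A minimal sketch of this screening step follows; representing a candidate as a `(request_id, required_resources)` pair is an assumption for illustration.

```python
# Illustrative sketch of resource-based screening; data shapes are assumed.
from typing import List, Tuple

def select_targets(idle_resources: int,
                   candidates: List[Tuple[str, int]]) -> List[str]:
    """Keep the candidate requests whose required execution resource
    amount fits within the cluster's current idle resource amount."""
    return [request_id
            for request_id, required in candidates
            if required <= idle_resources]
```

Requests that do not fit simply remain in the pool as candidates for a later round, once resources have been released.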
In one embodiment, the selection module 504 further includes an updating module 5042, configured to release the execution resources occupied by executing the target data integration request once the data provided by the data provider corresponding to the data provider identifier has been integrated in response to that request, and to update the idle resource amount of the current container cluster according to the released execution resources.
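The allocate/release bookkeeping of the updating module can be sketched as a small counter class; the class and method names are assumptions for illustration.

```python
# Illustrative sketch of tracking a cluster's idle resource amount.
class ClusterResources:
    def __init__(self, idle: int):
        self.idle = idle  # current idle resource amount of the cluster

    def allocate(self, amount: int) -> None:
        """Reserve execution resources when a target request starts."""
        if amount > self.idle:
            raise ValueError("not enough idle resources")
        self.idle -= amount

    def release(self, amount: int) -> None:
        """Return execution resources once the target request has finished,
        updating the idle resource amount accordingly."""
        self.idle += amount
```

Keeping the idle amount current is what lets the next screening round admit requests that previously did not fit.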
In one embodiment, the integration module 506 is further configured to determine target containers not started in the current container cluster; allocating execution resources required for executing the target data integration request to the target container; and starting the target container, executing the target data integration request based on the allocated execution resources through the target container, and integrating the data provided by the data provider corresponding to the data provider identification.
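The container-selection step of the integration module can be sketched as follows; the `Container` class and `run_request` helper are assumptions for illustration, not the disclosed implementation.

```python
# Illustrative sketch: pick an un-started container, allocate the
# execution resources the request needs, then start it.
from dataclasses import dataclass

@dataclass
class Container:
    name: str
    started: bool = False
    allocated: int = 0  # execution resources currently allocated

    def start(self) -> None:
        self.started = True

def run_request(containers, required_resources: int) -> Container:
    """Determine an un-started target container in the cluster, allocate
    to it the resources required by the target request, and start it."""
    target = next(c for c in containers if not c.started)
    target.allocated = required_resources
    target.start()
    return target

cluster = [Container("c-0", started=True), Container("c-1")]
chosen = run_request(cluster, 4)
```

Allocating resources before starting the container ensures the request can actually be executed once the container is up.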
The modules of the above data integration apparatus may be implemented wholly or partially in software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can invoke them and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal, and whose internal structure may be as shown in fig. 6. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program; the internal memory provides an environment for running them. The communication interface of the computer device performs wired or wireless communication with an external terminal; the wireless communication may be realized through Wi-Fi, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a data integration method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a key, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of some of the structures related to the disclosed solution and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The computer instructions are read by a processor of a computer device from a computer-readable storage medium, and the computer instructions are executed by the processor to cause the computer device to perform the steps in the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium and which, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase-change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and although their description is specific and detailed, they should not be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A method of data integration, the method comprising:
receiving a data integration request sent by a shared management console, and caching the data integration request to a data request pool; the data integration request comprises a data provider identification and an integration end address;
for each container cluster in a plurality of container clusters, screening out at least one candidate data integration request from the data request pool according to the integration end address and the cluster address of the current container cluster;
determining a target data integration request in the candidate data integration requests according to the idle resource amount of the current container cluster and the execution resource amount required by executing the candidate data integration requests;
integrating, by the current container cluster, data provided by a data provider corresponding to the data provider identification in response to the target data integration request.
2. The method of claim 1, wherein before the receiving a data integration request sent by a shared management console and caching the data integration request to a data request pool, the method further comprises:
determining a shared management console associated with the data request pool; wherein the shared management console comprises a master console and at least one slave console;
determining whether the main console is in a normal operation state;
if so, connecting the data request pool with the main console;
if not, determining the operation state of each slave console, and connecting the data request pool with a slave console in the normal operation state.
3. The method of claim 1, wherein determining the target data integration request of the candidate data integration requests according to the amount of free resources of the current container cluster and the amount of execution resources required for executing the candidate data integration requests comprises:
determining the respective required execution resource amount for executing each candidate data integration request;
comparing the idle resource amount of the current container cluster with the execution resource amount, and taking the execution resource amount smaller than or equal to the idle resource amount as a target execution resource amount;
and screening out target data integration requests from the candidate data integration requests according to the target execution resource amount.
4. The method of claim 3, wherein after integrating the data provided by the data provider corresponding to the data provider identification in response to the target data integration request, the method further comprises:
releasing execution resources occupied when the target data integration request is executed;
and updating the idle resource amount of the current container cluster according to the released execution resources.
5. The method of claim 1, wherein integrating, by the current container cluster, data provided by a data provider corresponding to the data provider identification in response to the target data integration request comprises:
determining an un-started target container in the current container cluster;
allocating execution resources required to execute the target data integration request to the target container;
and starting the target container, executing the target data integration request based on the allocated execution resources through the target container, and integrating the data provided by the data provider corresponding to the data provider identification.
6. A data integration system, the system comprising: the system comprises a shared management console, a data request pool and a container cluster;
the sharing management console is used for acquiring a data provider identifier and an integration end address and generating a data integration request according to the data provider identifier and the integration end address;
the data request pool is used for receiving and caching the data integration request;
the container cluster is used for determining a local cluster address and screening out at least one candidate data integration request from the cached data integration requests according to the integration end address and the cluster address;
the container cluster is used for determining a target data integration request in the candidate data integration requests according to the idle resource amount and the execution resource amount required by executing the candidate data integration requests;
the container cluster is used for responding to the target data integration request and integrating the data provided by the data provider corresponding to the data provider identification.
7. The system of claim 6, wherein the shared management console is further configured to, upon receiving a data provider identifier, determine configuration information corresponding to the data provider identifier and extract a corresponding integration end address from the configuration information; the configuration information is generated according to a cluster address of the container cluster.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 5.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 5 when executed by a processor.
CN202111521772.6A 2021-12-13 2021-12-13 Data integration method, system, computer device and storage medium Pending CN114201465A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111521772.6A CN114201465A (en) 2021-12-13 2021-12-13 Data integration method, system, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111521772.6A CN114201465A (en) 2021-12-13 2021-12-13 Data integration method, system, computer device and storage medium

Publications (1)

Publication Number Publication Date
CN114201465A true CN114201465A (en) 2022-03-18

Family

ID=80653244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111521772.6A Pending CN114201465A (en) 2021-12-13 2021-12-13 Data integration method, system, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN114201465A (en)

Similar Documents

Publication Publication Date Title
US9971823B2 (en) Dynamic replica failure detection and healing
US20210004258A1 (en) Method and Apparatus for Creating Virtual Machine
US7730488B2 (en) Computer resource management method in distributed processing system
CN110069346B (en) Method and device for sharing resources among multiple processes and electronic equipment
CA2177020A1 (en) Customer information control system and method in a loosely coupled parallel processing environment
JP5503678B2 (en) Host providing system and host providing method
CN107153643B (en) Data table connection method and device
US11336588B2 (en) Metadata driven static determination of controller availability
CN113204353B (en) Big data platform assembly deployment method and device
CN111078516A (en) Distributed performance test method and device and electronic equipment
CN113986539A (en) Method, device, electronic equipment and readable storage medium for realizing pod fixed IP
CN115686346A (en) Data storage method and device and computer readable storage medium
CN114816272B (en) Magnetic disk management system under Kubernetes environment
CN114201465A (en) Data integration method, system, computer device and storage medium
US8850440B2 (en) Managing the processing of processing requests in a data processing system comprising a plurality of processing environments
CN115470303A (en) Database access method, device, system, equipment and readable storage medium
CN109101367A (en) The management method and device of component in cloud computing system
CN114924888A (en) Resource allocation method, data processing method, device, equipment and storage medium
CN114356549A (en) Method, device and system for scheduling container resources in multi-container cluster
CN109257201B (en) License sending method and device
CN118018552B (en) Cluster service deployment method and device based on middleware and computer equipment
US11768704B2 (en) Increase assignment effectiveness of kubernetes pods by reducing repetitive pod mis-scheduling
US11763017B2 (en) Method and system for proactive data protection of virtual machines
US20230342200A1 (en) System and method for resource management in dynamic systems
CN118051344A (en) Method and device for distributing hardware resources and hardware resource management system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination