CN117931767A - Database migration platform and database migration method - Google Patents


Info

Publication number
CN117931767A
CN117931767A (application CN202311726897.1A)
Authority
CN
China
Prior art keywords
migration
data
card turning
database
migration task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311726897.1A
Other languages
Chinese (zh)
Inventor
干从勇
肖姝
李霄雨
张龙龙
崔念龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yusys Technologies Group Co ltd
Original Assignee
Beijing Yusys Technologies Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yusys Technologies Group Co ltd filed Critical Beijing Yusys Technologies Group Co ltd
Priority to CN202311726897.1A
Publication of CN117931767A
Legal status: Pending


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a database migration platform and a database migration method. The platform comprises the following components: a card turning (parameter rollover) library module, a management end, and an engine end. The card turning library module stores a table that includes a card turning parameter field. The management end comprises a configuration module and a query module. When configuring a migration task, the configuration module cuts the source table data into a plurality of blocks for serial transmission, stores a parameter record in the card turning library module with the current card turning parameter held in the card turning parameter field, and configures an incremental extraction statement in the migration task. Before each execution of the migration task, the query module queries the current card turning parameter and submits it to the engine end. The configuration module also modifies the card turning parameter of the migration task when the task executes successfully, and the query module then submits the modified card turning parameter to the engine end for the next migration task. The engine end executes the migration task according to the incremental extraction statement and the current card turning parameter. The invention realizes automatic task operation through a task batch flow based on the card turning library.

Description

Database migration platform and database migration method
Technical Field
The invention relates to the field of database mass data migration, in particular to a database migration platform and a database migration method.
Background
Data is widely applied in every field of modern society and has become an important resource for promoting innovation and development. With the rapid development of China's xinchuang (trusted IT innovation) industry, domestic enterprises and organizations have accumulated a large number of valuable data assets. However, due to technological developments and changes in business requirements, it is sometimes necessary to migrate such data from one database to another.
In this context, implementing data migration to domestic trusted (xinchuang) databases becomes critical. First, domestic databases offer better adaptability and customization, and can better meet the needs of domestic enterprises and organizations. Second, data migration can improve the efficiency and security of data management and ensure data integrity and reliability. In addition, domestic databases can better protect data privacy and avoid the risk of information leakage caused by vulnerabilities during data transmission.
Implementing data migration to domestic xinchuang databases not only improves data management and operational efficiency, but also helps support domestic innovation and development. Migrating data to domestic databases can promote the development and application of native technology and further advance China's xinchuang industry. Therefore, to maximize the utilization and protection of data assets, migrating data to domestic xinchuang databases is a very necessary measure.
The open source data migration tools currently common on the market are mainly the overseas Sqoop and the domestic DataX. The inventors found that Sqoop mainly has the following problems when migrating data to domestic xinchuang (trusted-innovation) databases:
1. Few supported database types: Sqoop is mainly used for data synchronization between traditional databases (such as mysql and oracle) and does not support domestic xinchuang databases, so users must develop the corresponding plug-ins themselves, which increases the difficulty of use.
2. Functional limitations: Sqoop's functionality is relatively basic and is mainly used for simple data import and export. Complex data migration requirements, such as data conversion, data cleansing, and incremental synchronization, may require additional development and customization.
DataX supports a larger variety of domestic databases than Sqoop, but it also has a number of problems in use, such as:
1. Compatibility issues: although DataX supports a variety of databases, the specific functions and syntax of a domestic database may not be fully compatible with DataX's default plug-ins. This may require additional customization and development effort to ensure that the data migration process proceeds smoothly.
2. No page-based configuration: DataX is a scripting tool and does not provide a rapid, page-based configuration flow; in use, configuration items such as source table and target table information must be configured entirely by hand, so task development efficiency is low.
3. Reliability problems: DataX is a standalone (single-instance) data synchronization tool and becomes unusable when its server goes down or hangs, making it difficult to guarantee reliability for an enterprise's daily batch data transfers.
Disclosure of Invention
In view of the above, an object of the embodiments of the present invention is to provide a database migration platform and a database migration method that solve the problems encountered by enterprises during domestic database data migration.
In order to achieve the above objective, in a first aspect, an embodiment of the present invention provides a database migration platform, where the platform includes a dynamic card flipping library module, a data migration platform management end, and a migration task engine end;
the dynamic card turning library module is used for storing a data reading parameter table, wherein the table comprises a card turning parameter field, and the card turning parameter field is used for storing current card turning parameters of a migration task;
The data migration platform management end comprises a migration task configuration module and a card turning library query module;
The migration task configuration module is used for performing field analysis on the source table data of an upstream system when a migration task is configured, and dividing the source table data into a plurality of blocks according to a preset dividing rule for serial transmission; storing a parameter record in the dynamic card turning library module, wherein the card turning parameter field stores the initial card turning parameters of the migration task; and configuring a database incremental extraction statement in the migration task, wherein the card turning parameters are dynamically replaced by placeholders;
The card turning library inquiry module is used for inquiring current card turning parameters of the current migration task by the dynamic card turning library module before each migration task is executed, and submitting the current card turning parameters to a migration task engine end;
the migration task configuration module is further used for modifying the card turning parameters of the migration task when the migration task is successfully executed, and obtaining modified card turning parameters;
The card turning library query module is further used for submitting the modified card turning parameters to the migration task engine end to execute the next migration task when the execution of the migration task is successful;
and the migration task engine end is used for executing a migration task according to the increment extraction statement and the current card turning parameter.
In some possible embodiments, the preset slicing rules include: a table containing date fields is sliced by day or by month; a table without a date field is sliced by primary key or by row number;
the table also contains a unique identification field, which is the primary key of the data reading parameter table and indicates the uniqueness of the record, and a task identification field for uniquely identifying the migration task that uses the parameter.
In some possible embodiments, the migration task engine is further configured to return a task success flag to the data migration platform management end when the execution of the current migration task is successful, so as to notify the data migration platform management end that the current migration task is completed; when the execution of the migration task fails, returning a task failure mark to the data migration platform management end;
The migration task configuration module is further configured to not modify the card turning parameter of the migration task when the execution of the migration task fails; and the card turning library query module is also used for re-submitting card turning parameters to the migration task engine end when the execution of the migration task fails, so as to restart the failed migration task.
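The success and failure handling above amounts to a simple rollover rule: advance the card turning parameter only after a successful run, and resubmit it unchanged after a failure. A minimal Java sketch of that rule (the class and method names, and the one-day step, are illustrative assumptions, not the patent's actual code):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Sketch of the flip-card (rollover) batch flow: the management end reads the
// current rollover parameter before each run and only advances it when the
// engine reports success, so a failed block is retried with the same parameter.
public class FlipCardFlow {
    private static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyy-MM-dd");

    // Advance the rollover parameter by one day on success; keep it on failure.
    public static String nextParameter(String current, boolean taskSucceeded) {
        if (!taskSucceeded) {
            return current; // failed block: resubmit the same parameter (resumable transfer)
        }
        return LocalDate.parse(current, FMT).plusDays(1).format(FMT);
    }

    public static void main(String[] args) {
        String p = "2023-12-01";
        p = nextParameter(p, true);   // success: move on to the next block
        System.out.println(p);        // prints 2023-12-02
        p = nextParameter(p, false);  // failure: parameter unchanged, block retried
        System.out.println(p);        // prints 2023-12-02
    }
}
```

The key property is that the parameter table, not the engine, holds the progress marker, which is what makes restarting a failed block safe.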
In some possible embodiments, the database migration platform further comprises: a driver catalog for storing any one or more Jdbc drivers, including a mysql driver, an oracle driver, a gaussdb driver, a gbase driver, and a kingbase driver;
The migration task engine end comprises:
the database driver dynamic loading module is configured with a driver adapter and is used for dynamically loading Jdbc drivers of the needed connection databases from the appointed driver catalog into the Java virtual machine context environment, and establishing Jdbc connection with the target database and the source database respectively by utilizing the database connection information and the Jdbc drivers; and
The data processing module based on the spark distributed memory computing framework is configured with a plurality of independent spark read-write plug-ins, and each spark read-write plug-in comprises a read data operation method and a write data operation method for a specific type of database; the data processing module is used for selecting a corresponding spark read-write plug-in according to the data source type of the source system or the target system, and submitting the selected spark read-write plug-in to a spark cluster for execution.
In some possible embodiments, each spark read-write plug-in is an independent jar package, and a standard access interface is defined for the spark read-write plug-ins: the read and write interface method names, input parameters, and output parameters of all database spark read-write plug-ins are consistent.
In some possible embodiments, the placeholder is a string in date format used to replace the specific value of the current card turning parameter stored in the card turning library.
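As a hypothetical illustration of how such a date-format placeholder might work (the placeholder token `${FLIP_DATE}` and the method name are invented for this sketch, not taken from the patent):

```java
// Sketch: the incremental extraction statement is stored with a placeholder,
// which is replaced at run time with the current card turning parameter
// queried from the card turning library before submission to the engine end.
public class PlaceholderDemo {
    public static String render(String template, String flipParam) {
        return template.replace("${FLIP_DATE}", flipParam);
    }

    public static void main(String[] args) {
        String sql = "SELECT * FROM src_table WHERE etl_date = '${FLIP_DATE}'";
        // prints SELECT * FROM src_table WHERE etl_date = '2023-12-01'
        System.out.println(render(sql, "2023-12-01"));
    }
}
```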
In some possible embodiments, the migration task engine side further includes:
The remote database tool calling module is used for reading data from an upstream source server and exporting the data into a file; writing the exported file into a file storage area of the file server; logging in to the file server through the remote secure shell (SSH) protocol, wherein a data import tool of the corresponding database is pre-installed on the file server; triggering the file server to execute a command to call the data import tool of the corresponding database; and executing an import operation through the data import tool of the corresponding database, writing the exported file into the target server.
In some possible embodiments, the remote database tool calling module is specifically configured to splice the import tool's command together with its related parameters to form a command line executable by the database import tool; the related parameters include any of a number of configuration items: full file path information, field separator, line break, encoding format, and the like.
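A minimal Java sketch of this command-splicing idea (the tool name `db_import` and its flags are invented for illustration; a real module would use the target database's own import tool and its documented options):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: build an executable import command line from configuration items
// (file path, field separator, encoding), as the remote database tool calling
// module is described as doing before triggering execution over SSH.
public class ImportCommandBuilder {
    public static String build(String tool, String filePath, String separator, String encoding) {
        List<String> parts = new ArrayList<>();
        parts.add(tool);
        parts.add("--file=" + filePath);
        parts.add("--field-separator=" + separator);
        parts.add("--encoding=" + encoding);
        return String.join(" ", parts);
    }

    public static void main(String[] args) {
        // prints db_import --file=/data/export/t1.csv --field-separator=| --encoding=UTF-8
        System.out.println(build("db_import", "/data/export/t1.csv", "|", "UTF-8"));
    }
}
```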
In a second aspect, a database migration method of the database migration platform is provided, and the method includes:
the dynamic card turning library module stores a data reading parameter table, wherein the table comprises a card turning parameter field, and the card turning parameter field is used for storing the current card turning parameters of the migration task;
The migration task configuration module performs field analysis on source table data of an upstream system when a migration task is configured, and segments the source table data into a plurality of blocks according to a preset segmentation rule for serial transmission; storing a piece of data in the dynamic card turning library module, wherein the card turning parameter field stores initial card turning parameters of the migration task; configuring database increment extraction sentences in a migration task, wherein the card turning parameters are dynamically replaced by placeholders;
before each migration task is executed, the card turning library inquiry module inquires the current card turning parameters of the current migration task, and submits the current card turning parameters to a migration task engine end;
The migration task configuration module modifies the card turning parameters of the migration task when the migration task is successfully executed, and obtains modified card turning parameters;
The card turning library query module submits the modified card turning parameters to the migration task engine end to execute the next migration task when the execution of the migration task is successful;
And the migration task engine end executes a migration task according to the increment extraction statement and the current card turning parameter.
In some possible embodiments, the preset slicing rules include: a table containing date fields is sliced by day or by month; a table without a date field is sliced by primary key or by row number; the data reading parameter table also comprises a unique identification field and a task identification field, wherein the unique identification field is the primary key of the data reading parameter table and indicates the uniqueness of the record, and the task identification field is used for uniquely identifying the migration task that uses the parameter.
In some possible embodiments, the placeholder is a string in date format used to replace the specific value of the current card turning parameter stored in the card turning library.
Further, the method further comprises:
When the execution of the migration task is successful, the migration task engine returns a task success mark to the data migration platform management end so as to inform the data migration platform management end that the migration task is completed;
When the execution of the migration task fails, the migration task engine returns a task failure mark to the data migration platform management end;
when the execution of the migration task fails, the migration task configuration module does not modify the card turning parameters of the migration task;
when the execution of the migration task fails, the card turning library inquiry module resubmisses the card turning parameters to the migration task engine end so as to restart the failed migration task.
In some possible embodiments, the method further comprises the steps of:
the driver adapter in the migration task engine side dynamically loads the Jdbc drivers of the required connection databases from the specified driver catalog into the Java virtual machine context;
The driver adapter establishes Jdbc connections with the target database and the source database respectively by using the database connection information and the Jdbc drivers;
the data processing module in the migration task engine end selects a corresponding spark read-write plug-in according to the data source type of the source system or the target system, and submits the plug-in to a spark cluster for data processing;
And the remote database tool calling module in the migration task engine end executes the operation of writing the data into the target database according to the data writing form selected by the user, wherein the data writing form comprises a batch insertion form or a form using a data importing tool.
In a third aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the methods as described in the second aspect.
In a fourth aspect, there is provided a computer device comprising:
One or more processors;
A storage means for storing one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the methods of the second aspect.
The technical scheme has the following beneficial effects:
the embodiment of the invention realizes automatic task operation through a task batch flow based on the card turning library, and supports resumable transfer (breakpoint resume).
The task operation scheme based on spark greatly improves the performance, reliability and success rate of the task.
Switching among the multiple warehousing (data loading) modes of the embodiment of the invention improves the write efficiency for mass data.
The graphical configuration of the embodiment of the invention greatly improves the efficiency of developing the data migration task for the user.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an overall logical architecture of a data migration platform according to an embodiment of the present invention;
FIG. 2 is a diagram of a dynamic database driver loading process for a migration task according to an embodiment of the present invention;
FIG. 3 is a dynamic data flipping flow diagram of a migration task in accordance with an embodiment of the present invention;
FIG. 4 is a tool import task flow diagram of a data migration platform of an embodiment of the present invention;
FIG. 5 is a functional block diagram of a computer-readable storage medium of an embodiment of the present invention;
FIG. 6 is a functional block diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The embodiment of the invention is mainly applied to the field of mass data migration for domestic xinchuang (trusted-innovation) databases, and addresses problems such as high development difficulty, high cost, and poor performance in the data migration process.
To address these problems, the embodiment of the invention provides a domestic database migration platform that solves the problems enterprises encounter during domestic database data migration. The data migration platform has the following characteristics: graphical configuration, high availability, high performance, support for more domestic database types, a flexible plug-in development scheme, and a complete data migration flow.
As shown in fig. 1, an embodiment of the present invention provides a database migration platform, where the platform includes: the dynamic card turning library module, the data migration platform management end and the migration task engine end;
The dynamic card-turning library module is used for storing a data reading (extracting) parameter table, the table comprises a card-turning parameter field, the card-turning parameter field is used for storing the current card-turning parameter of the migration task, and the current card-turning parameter is dynamically updated when the execution of the migration task is successful;
the data migration platform management end comprises a migration task configuration module and a card overturning library query module;
The migration task configuration module is used for carrying out field analysis on source table data of an upstream system when a migration task is configured, and dividing the source table data into a plurality of blocks according to a preset segmentation rule for serial transmission; storing a piece of data in a dynamic card turning library module, wherein a card turning parameter field stores initial card turning parameters of the migration task; configuring database increment extraction sentences in a migration task, wherein the card turning parameters are dynamically replaced by placeholders;
the card turning library inquiry module is used for inquiring the current card turning parameters of the current migration task by the dynamic card turning library module before each migration task is executed, and submitting the current card turning parameters to the migration task engine end;
the migration task configuration module is also used for modifying the card turning parameters of the migration task when the migration task is successfully executed, and obtaining the modified card turning parameters;
The card turning library query module is also used for submitting the modified card turning parameters to the migration task engine end to execute the next migration task when the execution of the migration task is successful;
And the migration task engine end is used for executing the migration task according to the increment extraction statement and the current card turning parameter.
In some embodiments, the preset slicing rules include: a table containing date fields is sliced by day or by month; a table without a date field is sliced by primary key or by row number; the data reading parameter table also comprises a unique identification field and a task identification field, wherein the unique identification field is the primary key of the data reading parameter table and indicates the uniqueness of the record, and the task identification field is used for uniquely identifying the migration task that uses the parameter.
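The date-based slicing rule can be sketched as follows; this is an illustrative Java example (class and method names are assumptions), showing how a source date range might be cut into per-month blocks that are then transmitted serially, one card turning parameter per block:

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Sketch of the preset slicing rule for tables with a date field:
// cut the source range into monthly blocks for serial transmission.
public class DateSlicer {
    // Returns the first day of each month covered by [start, end].
    public static List<LocalDate> monthlyBlocks(LocalDate start, LocalDate end) {
        List<LocalDate> blocks = new ArrayList<>();
        LocalDate cursor = start.withDayOfMonth(1);
        while (!cursor.isAfter(end)) {
            blocks.add(cursor);
            cursor = cursor.plusMonths(1);
        }
        return blocks;
    }

    public static void main(String[] args) {
        // prints [2023-01-01, 2023-02-01, 2023-03-01]
        System.out.println(monthlyBlocks(LocalDate.of(2023, 1, 15), LocalDate.of(2023, 3, 10)));
    }
}
```

Tables without a date field would instead be sliced by primary-key or row-number ranges, following the same block-per-parameter pattern.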
In some embodiments, the migration task engine is further configured to return a task success flag to the data migration platform management end when the execution of the migration task is successful, so as to notify the data migration platform management end that the migration task is completed; when the execution of the migration task fails, returning a task failure mark to the data migration platform management end; the migration task configuration module is also used for not modifying the card turning parameters of the migration task when the execution of the migration task fails; and the card turning library query module is also used for re-submitting card turning parameters to the migration task engine end when the execution of the migration task fails, so as to restart the failed migration task.
In some embodiments, the database migration platform further comprises: a driver catalog for storing any one or more Jdbc drivers, including a mysql driver, an oracle driver, a gaussdb driver, a gbase driver, and a kingbase driver;
The migration task engine end comprises:
The database driver dynamic loading module is configured with a driver adapter and is used for dynamically loading Jdbc drivers of a required connection database from a specified driver catalog into a Java virtual machine context environment and establishing Jdbc connection with a target database and a source database respectively by utilizing pre-acquired (such as user-input) database connection information and Jdbc drivers; and
The data processing module based on the spark distributed memory computing framework is configured with a plurality of independent spark read-write plug-ins, and each spark read-write plug-in comprises a read data operation method and a write data operation method for a specific type of database; the data processing module is used for selecting a corresponding spark read-write plug-in according to the data source type of the source system or the target system, and submitting the selected spark read-write plug-in to a spark cluster for execution.
In some embodiments, each spark read-write plug-in is an independent jar package, and a standard access interface is defined for the spark read-write plug-ins: the read and write interface method names, input parameters, and output parameters of all database spark read-write plug-ins are consistent.
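A hypothetical Java sketch of such a standard access interface, in which every plug-in exposes the same read and write signatures so new database types can be added as independent jars without changing the engine (all names here are invented for illustration):

```java
import java.util.List;
import java.util.Map;

// Sketch of a shared read/write contract for database plug-ins: identical
// method names, input parameters, and output parameters across databases.
interface SparkReadWritePlugin {
    // Read rows from the source using connection info and the extraction SQL.
    List<Map<String, Object>> read(Map<String, String> connInfo, String extractSql);
    // Write rows to the target table; returns the number of rows written.
    long write(Map<String, String> connInfo, String table, List<Map<String, Object>> rows);
}

// Trivial in-memory implementation showing that a plug-in only has to
// honour the shared signatures to be usable by the engine.
class DummyPlugin implements SparkReadWritePlugin {
    public List<Map<String, Object>> read(Map<String, String> connInfo, String extractSql) {
        return List.of(Map.of("id", 1)); // stand-in for rows read from a source database
    }
    public long write(Map<String, String> connInfo, String table, List<Map<String, Object>> rows) {
        return rows.size(); // stand-in for rows written to the target database
    }
}

public class PluginDemo {
    public static void main(String[] args) {
        SparkReadWritePlugin plugin = new DummyPlugin();
        long written = plugin.write(Map.of(), "tgt_table", plugin.read(Map.of(), "SELECT 1"));
        System.out.println(written); // prints 1
    }
}
```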
In some embodiments, the placeholder is a string in date format used to replace the specific value of the current card turning parameter stored in the card turning library.
In some embodiments, the migration task engine may further include:
The remote database tool calling module is used for reading data from an upstream source server and exporting the data into a file; writing the exported file into a file storage area of the file server; logging in to the file server through the remote secure shell (SSH) protocol, wherein a data import tool of the corresponding database is pre-installed on the file server; triggering the file server to execute the command to call the data import tool of the corresponding database; and executing an import operation through the data import tool of the corresponding database, writing the exported file into the target server.
In some embodiments, the remote database tool calling module is specifically configured to splice the import tool's command together with its related parameters to form a command line executable by the data import tool; the related parameters include any of a number of configuration items: full file path information, field separator, line break, encoding format, and the like. One form of writing data into the target database is via a data import tool, which is mainly implemented by importing from a file.
The spark read-write plug-in includes but is not limited to: mysql read-write plug-in, oracle read-write plug-in, gaussdb read-write plug-in, gbase read-write plug-in, kingbase read-write plug-in, etc. The JVM, i.e., the Java virtual machine, is a core component of Java technology, which provides a running environment for Java programs. In performing database migration, the performance and characteristics of the JVM need to be considered to ensure that the entire migration process proceeds smoothly.
In some embodiments, database migration requires transferring data from one database to another database service, which operation requires the use of corresponding database drivers to connect the source and target databases. The database driven dynamic loading module supports dynamic loading of any one or more Jdbc drives including mysql driver, oracle driver, gaussdb driver, gbase driver, kingbase driver.
The data migration platform is described in more detail below:
1. database driven dynamic loading technique
As shown in fig. 2, in the embodiment of the present invention, extracting data from the upstream source database and writing data to the target database both require Jdbc (Java Database Connectivity) connections. Because domestic databases evolve quickly, their versions, and therefore their connection drivers, are updated frequently, and statically bundling a driver easily leads to version mismatches. The embodiment of the invention therefore uses a Java class loader (ClassLoader) as a driver adapter to dynamically load the Jdbc driver of the required database from the specified driver directory. The loaded content refers to the concrete implementation code, class files, and related resources of the Jdbc driver (including but not limited to the mysql, oracle, gaussdb, gbase, and kingbase drivers), and it is loaded into the JVM (Java Virtual Machine) context, that is, the runtime environment in which the driver adapter loads Jdbc drivers and establishes database connections. The driver adapter then establishes Jdbc connections with the target database and the source database respectively, using pre-acquired (e.g., user-entered) database connection information. The driver directory includes, but is not limited to, the mysql, oracle, gaussdb, gbase, and kingbase drivers. The driver adapter sends a loading instruction for the source or target database driver to the driver directory, which returns the corresponding driver according to the instruction. The JVM is the running environment of Java programs; it is responsible for converting Java bytecode into machine code and executing the program.
The driver adapter is a specific Java class and is responsible for dynamically loading the assigned Jdbc drivers and establishing a connection with the database in the JVM context. The JVM provides the execution environment required for the drive adapter to run, and the drive adapter uses this environment to implement the connection operations with the database.
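A minimal sketch of this dynamic-loading pattern using the standard `URLClassLoader` (the directory layout and the commented driver class name are illustrative assumptions, not details from the patent):

```java
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

// Sketch of a driver adapter: scan a driver directory for Jdbc driver jars
// and load them into the JVM context at run time, instead of linking one
// driver version statically into the platform.
public class DriverAdapter {
    // Collect the URLs of all jar files under the driver directory.
    public static List<URL> listDriverJars(File driverDir) throws Exception {
        List<URL> urls = new ArrayList<>();
        File[] files = driverDir.listFiles((dir, name) -> name.endsWith(".jar"));
        if (files != null) {
            for (File f : files) {
                urls.add(f.toURI().toURL());
            }
        }
        return urls;
    }

    public static void main(String[] args) throws Exception {
        List<URL> jars = listDriverJars(new File("drivers/mysql"));
        try (URLClassLoader loader = new URLClassLoader(jars.toArray(new URL[0]))) {
            // A real adapter would now resolve the driver class via the loader, e.g.:
            // Class.forName("com.mysql.cj.jdbc.Driver", true, loader);
            System.out.println("found " + jars.size() + " driver jar(s)");
        }
    }
}
```

Because each driver jar is resolved from the directory at run time, upgrading a domestic database driver only requires replacing the jar, not rebuilding the platform.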
As noted above, database migration relies on database drivers to connect the source and target databases and on the JVM as the running environment; both the JVM and the driver catalog are part of the database migration platform.
2. Spark distributed memory computing framework
Spark is a big data parallel computing framework based on in-memory computing. It keeps intermediate results directly in memory, which yields higher iterative computation efficiency and greatly reduces I/O overhead. When data is extracted, a task is split into sub-tasks that run on different nodes and read and write simultaneously, which greatly improves data exchange efficiency and avoids a single-node crash causing the whole task to fail. Memory-based data filtering and cleansing also eliminates the heterogeneity of each database's own cleansing and filtering functions, avoids the excessive load that would be placed on a database if cleansing and filtering were applied directly while extracting its data, and greatly improves the efficiency of data cleansing and filtering.
As shown in fig. 1, the data migration platform provides spark read-write plug-ins written for each database type; each plug-in exists as an independent jar (Java Archive, a file format for packaging Java class files, related resources and metadata) and contains the data reading and data writing operation methods for that type of database. A standard access interface is defined for every spark read-write plug-in, so that when more types of data sources are added in the future, the corresponding plug-in only needs to be developed against the standard access interface and can then be integrated into the task engine independently. When a task is executed, the data migration platform selects or adapts the corresponding spark read-write plug-in according to the data source type of the source or target system and submits the selected or adapted plug-in to the spark cluster for execution. The plug-in development form used in the embodiment of the invention greatly reduces development difficulty and improves development and deployment efficiency. A source system is the origin of a data migration and may be a database, data warehouse, file system, or other data storage system; a target system is the destination of the migration and may likewise be a database, data warehouse, file system, or other data storage system. Submitting to the spark cluster for execution means submitting the data migration task to run in a cluster environment running Apache Spark. Apache Spark is an open-source distributed computing framework for large-scale data processing that can process massive data in parallel and provides rich data processing and analysis functions. Submitting the migration task to the Spark cluster makes full use of Spark's parallel computing and data processing capacity to accelerate the migration process.
The overall task engine is the system responsible for scheduling, executing and monitoring data migration tasks. It may include a task scheduler, task executors, a monitoring and alarm module, etc., which manage task scheduling, execution and status monitoring throughout the data migration process.
The task engine can be regarded as a collection of plug-ins: the read-write plug-ins of each database are placed in separate folders according to fixed rules, and the parent folder can be regarded as the whole task engine. The standard interface means that the read and write method names, input parameters and output parameters of all database plug-ins are consistent, so that calling a different database plug-in only requires changing the called plug-in path, not the calling convention.
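The plug-in selection step above can be sketched as a small registry that maps a database type to its plug-in jar, which the engine then submits to the Spark cluster unchanged. The class, method names and jar paths are illustrative assumptions, not the platform's actual code.

```java
import java.util.HashMap;
import java.util.Map;

public class PluginRegistry {
    // database type (lower-cased) -> path of the plug-in jar for that type
    private final Map<String, String> pluginJars = new HashMap<>();

    /** Register one read-write plug-in jar for a database type. */
    public void register(String dbType, String jarPath) {
        pluginJars.put(dbType.toLowerCase(), jarPath);
    }

    /** Resolve the plug-in jar for a source/target database type. Because all
     *  plug-ins share the standard interface, only this path differs between
     *  calls; the calling convention stays the same. */
    public String select(String dbType) {
        String jar = pluginJars.get(dbType.toLowerCase());
        if (jar == null) {
            throw new IllegalArgumentException("no plug-in registered for " + dbType);
        }
        return jar;
    }
}
```

Adding a new data source then amounts to developing one jar against the standard interface and registering it, with no change to the engine itself.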
3. Dynamic card-turning library technology
When an enterprise migrates data to a domestic database, all historical data must be migrated. The data volume is enormous and can reach the TB or even PB level. At that scale a single task run cannot finish all the data at once: a task may last days or even months, a large number of uncontrollable factors can cause it to fail and be rerun, and that efficiency is unacceptable to enterprises.
Based on this, as shown in fig. 1, the embodiment of the present invention proposes a dynamic card-turning library: a concept introduced by the data migration platform, namely a table in the platform's configuration library that rolls forward the data-reading parameters. Its fields are ID (unique identifier), TASK_ID (task identifier) and TASK_PARAM (card-turning parameter). The data migration platform management end comprises a migration task configuration module and a card-turning library query module.
When configuring a migration task, the migration task configuration module analyzes the fields of the upstream source table and cuts the source table data into several blocks for serial transmission. A table containing a date field is split by day or month, and a table without a date field can be split by primary key or row number. After the splitting scheme is determined, the data migration platform management end stores one record in the card-turning library, whose TASK_PARAM field holds the task's initial card-turning parameter.
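As a concrete illustration of the month-based splitting rule, the sketch below cuts a source-table date range into month blocks, each of which becomes one serial batch; the class and method names are illustrative assumptions, not the platform's actual code.

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

public class DateSlicer {
    /** Split the inclusive date range [from, to] into "yyyy-MM" month blocks,
     *  oldest first. Each block is one batch of the migration task. */
    public static List<String> sliceByMonth(LocalDate from, LocalDate to) {
        List<String> blocks = new ArrayList<>();
        LocalDate cursor = from.withDayOfMonth(1);  // align to month start
        while (!cursor.isAfter(to)) {
            blocks.add(String.format("%04d-%02d",
                    cursor.getYear(), cursor.getMonthValue()));
            cursor = cursor.plusMonths(1);
        }
        return blocks;
    }
}
```

A table split by primary key or row number would follow the same pattern, emitting key ranges instead of month strings.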
The data migration platform management end then configures a database incremental extraction statement in the migration task, in which the parameters are dynamically replaced through placeholders. Before each execution, the card-turning library query module of the management end queries the card-turning library for the task's current card-turning parameter; after the task completes, the card-turning parameter is modified and the next execution proceeds, achieving batch extraction of the migration task. For example, if the current source table has a date field named DATA_DT, the incremental extraction statement WHERE DATA_DT = ${yyyy-mm-dd} can be configured in the platform migration task, where ${yyyy-mm-dd} is replaced with the specific value of the current parameter in the card-turning library. A specific example follows.
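The placeholder replacement in the incremental extraction statement can be sketched as a plain string substitution: before each run, the query module reads the current card-turning parameter and fills it into the ${...} token. The class name is an illustrative assumption.

```java
public class PlaceholderFiller {
    /** Replace a ${name} placeholder in the incremental extraction statement
     *  with the current card-turning parameter queried from the library. */
    public static String fill(String template, String name, String value) {
        return template.replace("${" + name + "}", value);
    }
}
```

For the example above, filling "WHERE DATA_DT = ${yyyy-mm-dd}" with the current card value "2022-12" yields the executable predicate "WHERE DATA_DT = 2022-12".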
As shown in fig. 3, the splitting parameters are stored in a table of the data migration platform's service library and can be extracted from front to back or from back to front. For example, suppose the date range of the table data to be extracted runs from January 2000 to December 2022, and the field currently in the card-turning library is December 2022. The dynamic card-turning flow of a migration task comprises a successful execution flow and a failed execution flow. The successful execution flow comprises the following steps:
Step 1.1: when the migration task starts to run, the data migration platform management end first queries the card-turning library for the current card-turning date;
Step 1.2: the card-turning library returns the current card-turning date, December 2022, to the data migration platform management end, which takes December 2022 as the current date to be extracted;
Step 1.3: the data migration platform management end submits the current card-turning date, December 2022, to the migration task engine end;
Step 1.4: the migration task engine starts to migrate the December 2022 data, i.e. executes "WHERE DATA_DT = 2022-12";
Step 1.5: after the migration task succeeds, the migration task engine end returns a task success flag to the data migration platform management end to notify it that the migration task is complete;
Step 1.6: the data migration platform management end modifies the current card-turning date in the card-turning library, i.e. decreases the card-turning date by 1 to November 2022, then submits the new card-turning date (November 2022) to the migration task engine end; the migration task engine end starts extracting the November 2022 data, and so on until all card-turning dates have been extracted.
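Steps 1.1-1.6 can be sketched as a loop that walks the card-turning date backwards month by month, turning the card only on success. This is a minimal illustration (it simply retries a failed month on the next pass, where the real platform restarts the failed task); the class and parameter names are assumptions.

```java
import java.time.YearMonth;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class CardFlipLoop {
    /** Walk the card-turning date backwards from the newest month to the
     *  oldest. On success the card is turned (minus one month); on failure
     *  the card is left unchanged, so the same month is resubmitted next
     *  time, which gives the resume-from-breakpoint behaviour. Returns the
     *  months migrated, newest first. */
    public static List<YearMonth> run(YearMonth current, YearMonth oldest,
                                      Predicate<YearMonth> engineRunsBatch) {
        List<YearMonth> migrated = new ArrayList<>();
        while (!current.isBefore(oldest)) {
            if (engineRunsBatch.test(current)) {   // task success flag
                migrated.add(current);
                current = current.minusMonths(1);  // step 1.6: card minus 1
            }
            // steps 2.5-2.6: on failure, resubmit the same card-turning date
        }
        return migrated;
    }
}
```

The predicate stands in for the engine call: the December-first example in the text corresponds to `run(YearMonth.of(2022, 12), YearMonth.of(2000, 1), …)`.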
If the task fails during execution, the data migration platform management end does not modify the date in the card-turning library; at that point the value in the card-turning library is November 2022, so when the platform restarts the failed task it extracts the November 2022 data again, achieving the resume-from-breakpoint effect. The failed execution flow comprises the following steps:
Step 2.1: the data migration platform management end queries the card-turning library; the current card-turning date is November 2022;
Step 2.2: the card-turning library returns the current card-turning date, November 2022, to the data migration platform management end;
Step 2.3: the data migration platform management end submits the current card-turning date, November 2022, to the migration task engine end;
Step 2.4: the migration task engine executes "WHERE DATA_DT = 2022-11";
Step 2.5: the migration task engine end returns a task failure flag to the data migration platform management end;
Step 2.6: the data migration platform management end resubmits the card-turning date, November 2022, to the migration task engine end.
4. Remote database tool call
For writing into the target database, the data migration platform of the embodiment of the invention offers users two options: the first is the traditional direct batch insertion of data, and the second uses a data import tool. The first scheme suits most databases and conveniently enables data exchange between heterogeneous data sources, but when massive data is transmitted to a domestic distributed database it is limited by server bandwidth, disk IO, database parameters, the internal processing mechanism of the distributed database and other factors, so it is not suitable for scenarios with large write volumes and high concurrency.
Therefore, most domestic distributed databases provide dedicated data import tools to achieve highly concurrent, massive data writing, for example the gds tool of GaussDB and the gccli client of GBase. Such an import tool generally requires the data to be landed first (landing means exporting the data from a database and saving it as a file, so that the dedicated tool can later perform highly concurrent, massive writes and similar operations) as a data file, after which the tool's commands are invoked to import the file.
As shown in fig. 4, the data migration platform exploits this characteristic to provide a data-import scheme when executing database migration tasks. The scheme requires selecting a file server as the file storage area; the user configures the file server's connection information and file storage address, and the data import tool corresponding to the database must be installed on the file server in advance. When the data migration platform executes the migration task, it first exports the upstream source database's data into a file and places it in the file storage area, then logs in to the file server through the remote SSH (Secure Shell) protocol, splices (splicing means combining the various pieces of information into a single statement that the database import plug-in can understand, used to write the file into the database) the data import command and executes it, thereby importing the data with the database's import tool. Specifically, the various pieces of information are the configuration items contained in each database's command for loading file data into the target library, such as the file's full path, the field separator, the line break, the encoding format, and so on.
Specifically, the migration task engine end of the data migration platform splices the database import tool's command together with the related parameters (such as the file's full path, field separator, line break and encoding format) into a command executable by the current plug-in, logs in to the file server through the remote SSH protocol, and executes the command to perform the import with the database's import tool. Splicing and executing the data import command in a database migration scenario means combining the import tool's command and the related parameters in a specific format (different plug-ins recognize different command formats) into one complete executable command that performs the data import operation. The tool-import task flow of the data migration platform comprises the following steps:
Step 3.1: the migration task engine end of the data migration platform reads data from the upstream source server and exports the data it has read into a file. Data in the database is exported as files so it can be migrated or backed up between different database systems; exporting to files serves data transfer, backup, or import into other databases in different environments, and helps ensure data integrity and portability.
Step 3.2: the migration task engine end of the data migration platform writes the exported file into the file storage area of the file server;
Step 3.3: the migration task engine end of the data migration platform logs in to the file server through the remote SSH protocol;
Step 3.4: the file server executes the command (the executable command spliced together for the current plug-in) to invoke the data import tool of the corresponding database;
Step 3.5: the data import tool of the database performs the load operation to write the file in the file storage area into the target server. Loading means importing the data from the file into the target server.
From the description of the two schemes, their respective strengths and weaknesses can be seen. Direct data insertion suits migrations with small data volume and low concurrency, because the data can be transmitted without extra steps. Importing with the database's data import tool suits migrating tables with large data volume and high concurrency; because executing the task requires first landing the file, then logging in to the remote server, and finally importing by command, this scheme has more preliminary steps than direct insertion, so for tables with small data volume it is no more efficient than the direct insertion scheme.
The data migration platform provided by the embodiment of the invention supports flexible selection between the two write modes, so that users can configure the migration flexibly according to the data volume of the upstream source table and improve migration efficiency.
The technical effects of the embodiment of the invention include:
the task batch flow based on the card-turning library realizes automatic task operation and solves the resume-from-breakpoint problem;
the graphical configuration greatly improves the efficiency with which users develop data migration tasks;
the spark-based task operation scheme greatly improves the performance, reliability and success rate of tasks;
the switching among multiple warehousing modes improves the write efficiency for massive data.
The embodiment of the invention provides a database migration method for data migration among databases, comprising the following steps:
The dynamic card-turning library module stores a data reading parameter table that contains a card-turning parameter field, which stores the current card-turning parameter of a migration task; the current card-turning parameter can be dynamically updated according to whether the current migration task executed successfully;
The migration task configuration module, when configuring a migration task, performs field analysis on the source table data of the upstream system and cuts the source table data into several blocks for serial transmission according to a preset splitting rule; stores one record in the dynamic card-turning library module, whose card-turning parameter field holds the task's initial card-turning parameter; and configures a database incremental extraction statement in the migration task, in which the card-turning parameter is dynamically replaced through a placeholder;
Before each execution of the migration task, the card-turning library query module queries the current card-turning parameter of the current migration task and submits it to the migration task engine end;
When the migration task executes successfully, the migration task configuration module modifies the task's card-turning parameter to obtain the modified card-turning parameter;
When the migration task executes successfully, the card-turning library query module submits the modified card-turning parameter to the migration task engine end for the next execution of the migration task;
The migration task engine end executes the migration task according to the incremental extraction statement and the current card-turning parameter.
In some embodiments, the preset splitting rule includes: a table containing a date field is split by day or month; a table without a date field is split by primary key or row number. The data reading parameter table also contains a unique identification field and a task identification field: the unique identification field is the table's primary key and indicates the uniqueness of the record, and the task identification field uniquely identifies the migration task that uses the parameter.
In some embodiments, the placeholder is a string in date format that is replaced with the specific value of the current card-turning parameter in the card-turning library.
Further, the method further comprises: when the migration task executes successfully, the migration task engine returns a task success flag to the data migration platform management end to notify it that the migration task is complete; when the migration task fails, the migration task engine returns a task failure flag to the data migration platform management end; when the migration task fails, the migration task configuration module does not modify the task's card-turning parameter; and when the migration task fails, the card-turning library query module resubmits the card-turning parameter to the migration task engine end to restart the failed migration task.
In some embodiments, the method further comprises the following steps:
A driver adapter in the migration task engine end dynamically loads the JDBC driver of the required connection database from a specified driver directory into the Java virtual machine context;
The driver adapter establishes JDBC connections with the target database and the source database respectively, using the database connection information and the JDBC driver;
A data processing module in the migration task engine end selects the corresponding spark read-write plug-in according to the data source type of the source or target system and submits the plug-in to the spark cluster for data processing;
A remote database tool calling module in the migration task engine end writes the data into the target database in the write form selected by the user, the write form being either batch insertion or use of a data import tool.
Such database drivers include, but are not limited to, the MySQL driver, Oracle driver, GaussDB driver, GBase driver, Kingbase driver, and the like.
The spark read-write plug-ins exist as independent jar packages, and each plug-in contains the data reading and data writing operation methods for its type of database.
In some embodiments, the data write operation includes the following sub-steps:
reading data from the upstream source server and exporting it into a file;
writing the exported file into the file storage area of the file server;
logging in to the file server through the remote secure shell (SSH) protocol;
the file server executing the command to invoke the data import tool of the corresponding database;
the data import tool of the database performing the import operation to write the data into the target server.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
As shown in fig. 5, an embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, which when executed by a processor, implements the steps of the above method.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the method of the above embodiment by instructing the related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. Of course, other readable storage media are possible, such as quantum memory, graphene memory, etc. It should be noted that the content contained in the computer readable medium may be appropriately added or removed according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
As shown in fig. 6, an embodiment of the present invention further provides an electronic device, which includes one or more processors 301, a communication interface 302, a memory 303, and a communication bus 304, where the processors 301, the communication interface 302, and the memory 303 perform communication with each other through the communication bus 304.
A memory 303 for storing a computer program;
The processor 301 is configured to implement the steps of the above method when executing the program stored in the memory 303.
Processor 301 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
Memory 303 may include mass storage for data or instructions. By way of example, and not limitation, memory 303 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, magnetic tape, a universal serial bus (USB) drive, or a combination of two or more of these. The memory 303 may include removable or non-removable (or fixed) media, where appropriate. In a particular embodiment, the memory 303 is non-volatile solid-state memory. In particular embodiments, memory 303 includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), flash memory, or a combination of two or more of these.
The communication bus 304 includes hardware, software, or both for coupling the above components to one another. For example, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an infiniband interconnect, a Low Pin Count (LPC) bus, a memory bus, a micro channel architecture (MCa) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a video electronics standards association local (VLB) bus, or other suitable bus, or a combination of two or more of the above. The bus may include one or more buses, where appropriate. Although embodiments of the invention have been described and illustrated with respect to a particular bus, the invention contemplates any suitable bus or interconnect.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Although the application provides method operational steps as an example or a flowchart, more or fewer operational steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one way of performing the order of steps and does not represent a unique order of execution. When implemented by an actual device or client product, the instructions may be executed sequentially or in parallel (e.g., in a parallel processor or multi-threaded processing environment) as shown in the embodiments or figures.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principles and embodiments of the present invention have been described in detail with reference to specific examples, which are provided to facilitate understanding of the method and core ideas of the present invention; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in accordance with the ideas of the present invention, the present description should not be construed as limiting the present invention in view of the above.

Claims (10)

1. A database migration platform, the platform comprising: a dynamic card-turning library module, a data migration platform management end and a migration task engine end;
the dynamic card-turning library module is used for storing a data reading parameter table, wherein the table contains a card-turning parameter field used for storing the current card-turning parameter of a migration task;
the data migration platform management end comprises a migration task configuration module and a card-turning library query module;
the migration task configuration module is used for performing field analysis on the source table data of an upstream system when a migration task is configured, and splitting the source table data into a plurality of blocks for serial transmission according to a preset splitting rule; storing a record in the dynamic card-turning library module, wherein the card-turning parameter field stores the initial card-turning parameter of the migration task; and configuring a database incremental extraction statement in the migration task, wherein the card-turning parameter is dynamically replaced through a placeholder;
the card-turning library query module is used for querying the dynamic card-turning library module for the current card-turning parameter of the current migration task before each execution of the migration task, and submitting the current card-turning parameter to the migration task engine end;
the migration task configuration module is further used for modifying the card-turning parameter of the migration task when the migration task executes successfully, obtaining the modified card-turning parameter;
the card-turning library query module is further used for submitting the modified card-turning parameter to the migration task engine end when the migration task executes successfully, for the next execution of the migration task;
and the migration task engine end is used for executing the migration task according to the incremental extraction statement and the current card-turning parameter.
2. The database migration platform of claim 1, wherein the preset slicing rules comprise: a table containing date fields is cut by day or month; a table without a date field is segmented by pressing a main key or a line number;
The data reading parameter table further comprises a unique identification field and a task identification field, wherein the unique identification field is the primary key of the data reading parameter table and indicates the uniqueness of a record, and the task identification field uniquely identifies the migration task that uses the parameter.
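A minimal sketch of the slicing rules in claim 2, assuming one-day granularity for tables with a date field and a fixed block size for row-number slicing; both function names and the chosen granularities are hypothetical.

```python
from datetime import date, timedelta

def slice_by_day(start: date, end: date) -> list:
    """Date field present: slice the table into per-day blocks."""
    out, d = [], start
    while d <= end:
        out.append(d.isoformat())
        d += timedelta(days=1)
    return out

def slice_by_rownum(total_rows: int, block: int) -> list:
    """No date field: slice into inclusive row-number ranges of `block` rows."""
    return [(i, min(i + block - 1, total_rows))
            for i in range(1, total_rows + 1, block)]

days = slice_by_day(date(2023, 12, 1), date(2023, 12, 3))
ranges = slice_by_rownum(10, 4)
```

Each block then becomes one serial transmission, with the card turning parameter recording which block is next.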
3. The database migration platform of claim 1, wherein,
The migration task engine end is further used for returning a success flag to the data migration platform management end when the migration task executes successfully, to notify the management end that the migration task is complete; and for returning a failure flag to the data migration platform management end when the migration task fails;
The migration task configuration module is further configured to leave the card turning parameter of the migration task unmodified when the migration task fails; and the card turning library query module is further used for re-submitting the unmodified card turning parameter to the migration task engine end when the migration task fails, so as to restart the failed migration task.
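The retry behaviour of claim 3 reduces to one rule: only a successful run moves the card turning parameter forward, so a failed batch is re-run with the same parameter. A sketch (function name and one-day step are illustrative):

```python
from datetime import date, timedelta

def next_parameter(current: str, succeeded: bool) -> str:
    """Return the card turning parameter for the next submission.

    On failure the parameter is unchanged, so the same batch is simply
    re-extracted; on success it advances one slice (one day here)."""
    if not succeeded:
        return current
    d = date.fromisoformat(current)
    return (d + timedelta(days=1)).isoformat()
```

This makes every batch idempotent from the scheduler's point of view: restarting after a failure needs no manual bookkeeping.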
4. The database migration platform of claim 1, further comprising: a driver directory for storing any one or more JDBC drivers, including a MySQL driver, an Oracle driver, a GaussDB driver, a GBase driver, and a Kingbase driver;
The migration task engine end comprises:
the database driver dynamic loading module is configured with a driver adapter and is used for dynamically loading the JDBC drivers of the databases to be connected from the specified driver directory into the Java virtual machine context, and establishing JDBC connections with the target database and the source database respectively using the database connection information and the JDBC drivers; and
The data processing module, based on the Spark distributed in-memory computing framework, is configured with a plurality of independent Spark read/write plug-ins, each comprising a read operation method and a write operation method for a specific type of database; the data processing module is used for selecting the corresponding Spark read/write plug-in according to the data source type of the source system or the target system, and submitting the selected plug-in to a Spark cluster for execution.
5. The database migration platform of claim 4, wherein each Spark read/write plug-in is an independent jar package implementing a standard access interface, meaning that the interface method names, input parameters, and output parameters are consistent across all database Spark read/write plug-ins.
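The plug-in scheme of claims 4 and 5 — one plug-in per database type, all exposing identical method names so the engine can pick one by data source type — can be sketched as a registry behind a common base class. Class and method names below are hypothetical stand-ins for the patent's jar-based plug-ins.

```python
class BasePlugin:
    """Standard access interface: every plug-in uses these exact method names."""
    def read_data(self, table: str) -> str:
        raise NotImplementedError
    def write_data(self, table: str, rows: list) -> str:
        raise NotImplementedError

class MysqlPlugin(BasePlugin):
    def read_data(self, table: str) -> str:
        return f"read {table} via mysql"
    def write_data(self, table: str, rows: list) -> str:
        return f"wrote {len(rows)} rows to {table} via mysql"

class OraclePlugin(BasePlugin):
    def read_data(self, table: str) -> str:
        return f"read {table} via oracle"
    def write_data(self, table: str, rows: list) -> str:
        return f"wrote {len(rows)} rows to {table} via oracle"

# Engine-side registry: data source type -> plug-in instance.
PLUGINS = {"mysql": MysqlPlugin(), "oracle": OraclePlugin()}

def select_plugin(source_type: str) -> BasePlugin:
    """Pick the plug-in matching the source or target data source type."""
    return PLUGINS[source_type]
```

Because the interface is uniform, adding a new database type means dropping in one new plug-in; the engine's selection and submission code is untouched.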
6. The database migration platform of claim 1, wherein the placeholder is a string in date format that is replaced by the specific value of the current card turning parameter from the card turning library.
7. The database migration platform of claim 1, wherein the migration task engine side further comprises:
The remote database tool calling module is used for reading data from an upstream source server and exporting the data as a file; writing the exported file into a file storage area of a file server; logging in to the file server through a remote secure shell protocol, wherein the data import tool of the corresponding database is pre-installed on the file server; triggering the file server to execute a command that invokes the data import tool of the corresponding database; and performing the import operation through the data import tool, writing the exported file into the target server.
8. The database migration platform of claim 7, wherein the remote database tool calling module is specifically configured to splice the command of the database import tool together with its related parameters to form a command executable by the data import tool; the related parameters include any of a number of configuration items: full file path, field separator, line break character, encoding format, and the like.
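Command splicing per claim 8 amounts to assembling the import tool's invocation from the configured items. The sketch below uses MySQL's `LOAD DATA` as an example; the tool name, flags, and parameter set are illustrative of the splicing idea, not the patent's exact command.

```python
def build_import_command(db_type: str, full_path: str,
                         delimiter: str, encoding: str, table: str) -> str:
    """Splice the import-tool command and its related parameters
    (full file path, field separator, encoding) into one shell command."""
    if db_type == "mysql":
        return (f"mysql --default-character-set={encoding} "
                f"-e \"LOAD DATA LOCAL INFILE '{full_path}' INTO TABLE {table} "
                f"FIELDS TERMINATED BY '{delimiter}'\"")
    raise ValueError(f"unsupported db type: {db_type}")
```

The spliced string is what the remote module would ask the file server to execute over the SSH session.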
9. A database migration method applied to the database migration platform of claim 1, the method comprising:
the dynamic card turning library module stores a data reading parameter table, wherein the table comprises a card turning parameter field, and the card turning parameter field is used for storing the current card turning parameters of the migration task;
The migration task configuration module performs field analysis on the source table data of an upstream system when a migration task is configured, and slices the source table data into a plurality of blocks for serial transmission according to the preset slicing rule; stores a piece of data in the dynamic card turning library module, wherein the card turning parameter field stores the initial card turning parameter of the migration task; and configures an incremental database extraction statement in the migration task, wherein the card turning parameter is represented by a placeholder that is dynamically replaced at execution time;
before each migration task is executed, the card turning library inquiry module inquires the current card turning parameters of the current migration task, and submits the current card turning parameters to a migration task engine end;
The migration task configuration module modifies the card turning parameters of the migration task when the migration task is successfully executed, and obtains modified card turning parameters;
The card turning library query module submits the modified card turning parameters to the migration task engine end to execute the next migration task when the execution of the migration task is successful;
And the migration task engine end executes the migration task according to the incremental extraction statement and the current card turning parameter.
10. The database migration method of claim 9, further comprising the steps of:
the driver adapter in the migration task engine end dynamically loads the JDBC drivers of the databases to be connected from the specified driver directory into the Java virtual machine context;
The driver adapter establishes JDBC connections with the target database and the source database respectively using the database connection information and the JDBC drivers;
the data processing module in the migration task engine end selects the corresponding Spark read/write plug-in according to the data source type of the source system or the target system, and submits the plug-in to a Spark cluster for data processing;
And the remote database tool calling module in the migration task engine end writes the data into the target database according to the data writing form selected by the user, wherein the data writing form comprises batch insertion or use of a data import tool.
CN202311726897.1A 2023-12-15 2023-12-15 Database migration platform and database migration method Pending CN117931767A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311726897.1A CN117931767A (en) 2023-12-15 2023-12-15 Database migration platform and database migration method


Publications (1)

Publication Number Publication Date
CN117931767A true CN117931767A (en) 2024-04-26

Family

ID=90749656


Country Status (1)

Country Link
CN (1) CN117931767A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104699723A (en) * 2013-12-10 2015-06-10 北京神州泰岳软件股份有限公司 Data exchange adapter and system and method for synchronizing data among heterogeneous systems
CN110674108A (en) * 2019-08-30 2020-01-10 中国人民财产保险股份有限公司 Data processing method and device
CN111367894A (en) * 2020-03-31 2020-07-03 中国工商银行股份有限公司 Data comparison method and device based on database migration
CN112579626A (en) * 2020-09-28 2021-03-30 京信数据科技有限公司 Construction method and device of multi-source heterogeneous SQL query engine
CN113486116A (en) * 2021-07-07 2021-10-08 建信金融科技有限责任公司 Data synchronization method and device, electronic equipment and computer readable medium
CN114528347A (en) * 2022-01-28 2022-05-24 中银金融科技有限公司 Data synchronization method between heterogeneous database systems
CN116467279A (en) * 2023-03-16 2023-07-21 中银金融科技有限公司 Data migration method and device
CN116775613A (en) * 2023-06-28 2023-09-19 中国建设银行股份有限公司 Data migration method, device, electronic equipment and computer readable medium
CN116775599A (en) * 2023-05-29 2023-09-19 中国电信股份有限公司 Data migration method, device, electronic equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination