CN108733704B - Multi-database data processing method and device, storage medium and electronic equipment - Google Patents

Multi-database data processing method and device, storage medium and electronic equipment

Info

Publication number
CN108733704B
Authority
CN
China
Prior art keywords
database
data
preset
cache
content information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710261410.5A
Other languages
Chinese (zh)
Other versions
CN108733704A (en)
Inventor
马文军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201710261410.5A priority Critical patent/CN108733704B/en
Publication of CN108733704A publication Critical patent/CN108733704A/en
Application granted Critical
Publication of CN108733704B publication Critical patent/CN108733704B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure relates to a multi-database data processing method and device, a storage medium and an electronic device. The method comprises the following steps: acquiring a first operation request and a second operation request corresponding to a first database and a second database, respectively; executing a first data operation on the first database according to the first operation request; when a preset method in a preset cache is called and determines that no concurrent operation currently exists, storing the operation content information of the second operation request in the preset cache; and retrieving the operation content information from the preset cache and executing a second data operation on the second database through a preset Java multithread pool. Caching and multithreading are used to solve the data consistency problem of multiple databases, avoiding the drawbacks of MQ-message-based processing and better ensuring consistency across the databases.

Description

Multi-database data processing method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a multi-database data processing method, a multi-database data processing apparatus, and a computer-readable storage medium and an electronic device for implementing the multi-database data processing method.
Background
At present, as the amount of data stored in and accessed from a database grows rapidly, a database that is not split into multiple databases and tables accumulates more tables, and more data per table, as time passes and the business develops. The overhead of data operations such as inserts, deletes and updates increases accordingly. In addition, because such a database cannot be deployed in a distributed manner and the resources of a single server are limited, the data volume and processing capacity that the database can carry eventually hit a bottleneck.
To alleviate these problems to some extent, database and table sharding techniques are widely used. With sharding, data originally stored in one database is stored in multiple databases, and data originally stored in one table is stored in multiple tables. A large amount of data can therefore be spread over multiple database machines and multiple tables, which relieves the access-performance and storage-space problems of a single table and a single database and facilitates data expansion. However, sharding also means that the data involved in one service line is split across different databases, while the data operations of that service line must remain atomically consistent: either all of them succeed or all of them fail. Because the data resides in different databases, this consistency can no longer be guaranteed by a single database transaction. How to ensure data consistency across multiple databases is therefore the first difficult problem faced by every system that adopts sharding.
In the related art, the mainstream solution for multi-database data consistency is to execute the operations of the other databases asynchronously through MQ (Message Queue) messages. This approach has several drawbacks. First, MQ messages have ordering problems: a message sent first may be received later, so repeated operations on the same data may be applied out of order and the final database state is wrong. For example, if message A changes an amount to 10 and message B then changes it to 5, the correct order is A first and then B; if the messages arrive as B first and then A, the final amount is 10 instead of 5, and consistency is lost. Second, an MQ message may be lost, and once it is lost the data becomes inconsistent. Finally, because MQ messages are executed asynchronously, they cannot provide the concurrency control that normal business operations require, which can lead to inconsistent data or even program errors. For example, suppose the service-line logic reduces an amount in database A by 10 and changes a state in database B from 0 to 1. If this service line is executed concurrently, only one execution should succeed; with asynchronous MQ processing, however, the amount in database A may be reduced by 10 twice, and the second attempt to change the state in database B raises an exception. A new technical solution is therefore needed to improve on one or more of these problems.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the present disclosure is to provide a multi-database data processing method, a multi-database data processing apparatus, and a computer-readable storage medium and an electronic device implementing the multi-database data processing method, thereby overcoming, at least to some extent, one or more of the problems due to the limitations and disadvantages of the related art.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the embodiments of the present disclosure, there is provided a multi-database data processing method, the method including:
acquiring a first operation request and a second operation request corresponding to a first database and a second database respectively;
executing a first data operation on the first database according to the first operation request;
when a preset method in a preset cache is called and determines that no concurrent operation currently exists, storing the operation content information of the second operation request in the preset cache;
and retrieving the operation content information from the preset cache, and executing a second data operation on the second database through a preset Java multithread pool.
In an exemplary embodiment of the present disclosure, the preset cache includes a Redis cache, and the preset method is a setNx method in the Redis cache.
In an exemplary embodiment of the present disclosure, the method further includes:
after the first data operation of the first database is executed according to the first operation request, setting a compensation schedule in the first database; wherein the compensation schedule table stores the operation content information of the second operation request;
and deleting the operation content information in the compensation scheduling table after the second data operation on the second database is successfully executed.
In an exemplary embodiment of the present disclosure, the method further includes:
starting a query task at preset time intervals through a preset task scheduling tool to acquire the compensation scheduling table in the first database;
judging whether the operation content information in the compensation scheduling table is deleted or not, if not, extracting the operation content information, and executing the second data operation on the second database;
and after the second data operation on the second database is executed, deleting the operation content information.
In an exemplary embodiment of the present disclosure, the preset task scheduling tool includes an open source Quartz task scheduling tool.
In an exemplary embodiment of the present disclosure, the preset time is 5 to 10 minutes.
In an exemplary embodiment of the present disclosure, the first data operation and the second data operation are different data operations.
According to a second aspect of embodiments of the present disclosure, there is provided a multi-database data processing apparatus, the apparatus including:
the operation request acquisition module is used for acquiring a first operation request and a second operation request which respectively correspond to the first database and the second database;
the first operation execution module is used for executing first data operation on the first database according to the first operation request;
the cache concurrency determination module is used for calling a preset method in a preset cache to determine whether a concurrent operation currently exists and, when no concurrent operation exists, storing the operation content information of the second operation request in the preset cache;
and the second operation execution module is used for calling the operation content information from the preset cache and executing second data operation on the second database through a preset Java multithreading pool.
According to a third aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the multi-database data processing method described in any one of the above embodiments.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the steps of the multi-database data processing method in any of the above embodiments via execution of the executable instructions.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
In an embodiment of the present disclosure, after the first data operation on the first database is executed, a preset method in a preset cache is called; when it determines that no concurrent operation currently exists, the operation content information of the second operation request is stored in the preset cache, then retrieved from the preset cache, and a second data operation on the second database is executed through a preset Java multithread pool. Therefore, on the one hand, the data consistency problem of multiple databases is solved with caching and multithreading, avoiding the drawbacks of MQ-message-based processing in the related art, such as abnormal execution of multi-database data under concurrent operation and problems with the execution order and efficiency of data operations; on the other hand, the cache-and-multithreading technique better ensures eventual consistency across the multiple databases.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
FIG. 1 schematically illustrates a flow chart of a method for processing multiple database data in an exemplary embodiment of the disclosure;
FIG. 2 schematically illustrates a flow chart of a method for processing multiple database data in an exemplary embodiment of the disclosure;
FIG. 3 schematically illustrates a flow chart of a method for processing multiple database data in an exemplary embodiment of the disclosure;
FIG. 4 schematically illustrates a multiple database data processing apparatus in an exemplary embodiment of the disclosure;
FIG. 5 schematically illustrates a computer-readable storage medium in an exemplary embodiment of the disclosure;
fig. 6 schematically illustrates an electronic device in an exemplary embodiment of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The exemplary embodiment first provides a method for processing data of multiple databases. Referring to fig. 1, the method may include the steps of:
step S101: and acquiring a first operation request and a second operation request which respectively correspond to the first database and the second database.
Step S102: and executing a first data operation on the first database according to the first operation request.
Step S103: and when a preset method in a preset cache is called to judge that the concurrent operation does not exist currently, storing the operation content information of the second operation request into the preset cache.
Step S104: and calling the operation content information from the preset cache, and executing second data operation on the second database through a preset Java multithreading pool.
With this multi-database data processing method, on the one hand, the data consistency problem of multiple databases is solved with a cache-and-multithreading technique, avoiding the drawbacks of MQ-message-based processing in the related art, such as abnormal execution of multi-database data under concurrent operation and problems with the execution order and efficiency of data operations; on the other hand, the cache-and-multithreading technique better ensures eventual consistency across the multiple databases.
Hereinafter, the respective steps of the above-described method in the present exemplary embodiment will be described in more detail with reference to fig. 1 to 5.
In step S101, a first operation request and a second operation request corresponding to the first database and the second database are acquired.
In the present exemplary embodiment, the number of databases may be two or more; the method is not limited in this respect and applies equally to more databases. The first operation request and the second operation request may be different operation requests belonging to the same service line, for example one being a data modification request and the other a data update request.
In step S102, a first data operation on the first database is performed according to the first operation request.
Illustratively, a first data operation, e.g., a data modification operation, is performed on a first database, e.g., in accordance with a data modification request. For example, the amount of money in the table of amounts in the first database (e.g., 100) may be reduced by 10, i.e., the amount of money may be modified to 90.
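Purely as an illustrative sketch of step S102 (the table and column names are assumptions, not part of the disclosed embodiment), the first data operation could be issued over JDBC inside a transaction as follows.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Hypothetical sketch of step S102: the first data operation on the first database,
// e.g. reducing the amount for a given application form by 10 inside a transaction.
public class FirstOperationExecutor {

    public void reduceAmount(Connection firstDbConnection, String applicationFormNo) throws SQLException {
        firstDbConnection.setAutoCommit(false); // the transaction is committed later, after the cache check
        try (PreparedStatement ps = firstDbConnection.prepareStatement(
                "UPDATE amount_table SET amount = amount - 10 WHERE application_form_no = ?")) {
            ps.setString(1, applicationFormNo);
            ps.executeUpdate();
        }
    }
}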
In step S103, when the preset method in the preset cache is called to determine that there is no concurrent operation currently, the operation content information of the second operation request is stored in the preset cache.
In an exemplary embodiment, the preset cache may include a Redis cache, and the preset method may be a setNx method in the Redis cache. For example, in step S103, a setNx method in the Redis cache may be called to determine whether there is a concurrent operation currently, and when there is no concurrent operation currently, the operation content information (e.g., change the state of a data table in the second database from 0 to 1) of the second operation request (e.g., the data update request) is stored in the Redis cache.
It should be clear that when no concurrent operation currently exists, the subsequent operation corresponding to the second operation request, such as a data update, can continue to be executed. The operation content information to be executed against the second database can therefore be stored in the Redis cache for subsequent multi-threaded use, and the current database transaction is committed once execution on the first database is finished. When a concurrent operation does exist, the current service line is already being executed, so the database must be rolled back and the call ends.
The setNx method adopted in this exemplary embodiment ensures that only one of several concurrent executions succeeds. Using the setNx method of the Redis cache thus prevents concurrent database operations and solves the problem of asynchronously executing data operations under concurrency.
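As an illustration only, the following minimal Java sketch shows how such a setNx check might look when the Redis cache is accessed through the Jedis client; the class name, key naming and return-value handling are assumptions, not part of the disclosed embodiment.

import redis.clients.jedis.Jedis;

// Minimal sketch (assumption: Jedis client) of the setNx-based concurrency check.
public class ConcurrencyGuard {

    private final Jedis jedis = new Jedis("localhost", 6379);

    // Tries to register the pending second operation under the given business key.
    // setnx creates the key only if it does not yet exist, so exactly one of several
    // concurrent callers succeeds; the others should roll back and end the call.
    public boolean tryStoreOperation(String businessKey, String operationContent) {
        long created = jedis.setnx(businessKey, operationContent); // 1 = stored, 0 = concurrent operation present
        return created == 1;
    }
}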
In step S104, the operation content information is called from the preset cache, and a second data operation on the second database is executed through a preset Java multithread pool.
In the exemplary embodiment, the preset Java multithread pool may be, for example, Java's own thread pool tool ThreadPoolExecutor. The first data operation and the second data operation are different data operations, for example one being a data modification operation and the other a data update operation. For example, in step S104, the operation content information may be retrieved from the Redis cache (e.g., change the state of a data table in the second database from 0 to 1), and the second data operation on the second database executed through ThreadPoolExecutor, e.g., a data update operation that changes the state of the data table in the second database from 0 to 1. Using Java's multithread pool to quickly execute the data operations of the databases other than the first database avoids the execution-order and efficiency problems caused by the MQ message approach in the related art and better ensures eventual data consistency.
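Purely as an illustrative sketch of this step, the following Java fragment submits the second data operation to a ThreadPoolExecutor after reading the operation content from Redis; the pool sizing, the Jedis client and updateSecondDatabase(...) are assumptions for illustration.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import redis.clients.jedis.Jedis;

// Minimal sketch of step S104: read the cached operation content and apply it
// to the second database on a worker thread of a Java ThreadPoolExecutor.
public class SecondOperationExecutor {

    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            4, 8, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(100));

    public void executeAsync(String cacheKey) {
        pool.submit(() -> {
            // One Jedis connection per task, since Jedis instances are not thread-safe.
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                String operationContent = jedis.get(cacheKey);
                // Apply it to the second database, e.g. update a status column from 0 to 1.
                updateSecondDatabase(operationContent);
            }
        });
    }

    private void updateSecondDatabase(String operationContent) {
        // Placeholder: issue the actual UPDATE against the second database here.
    }
}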
Referring to fig. 2, in an exemplary embodiment of the present disclosure, on the basis of the above embodiment, the method may further include the steps of:
step S201: after the first data operation of the first database is executed according to the first operation request, setting a compensation schedule in the first database; wherein the compensation schedule table stores therein the operation content information of the second operation request.
Illustratively, a Work record table can be set up in the first database as the compensation schedule table, for use by subsequent database compensation operations. The record content in the Work record table may be, for example, the execution information for the update operation that changes the state of application form table R0001 in the second database from 0 to 1.
Step S202: and deleting the operation content information in the compensation scheduling table after the second data operation on the second database is successfully executed.
Illustratively, after the second data operation on the second database is successfully executed in step S104, the content in the Work record table is deleted. That is, after a record is inserted into the Work record table, if the other database operations recorded in it have already been performed, the relevant content in the Work record table may be deleted. If the other database operations recorded in the Work record table have not been performed, the database operation compensation process shown in FIG. 3 may be performed.
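As a purely illustrative sketch of how such a Work record could be written and cleared, the JDBC fragment below inserts a record in the same transaction as the first data operation and deletes it once the second data operation has succeeded; the table name work_record and its columns are assumptions.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Hypothetical JDBC sketch of maintaining the Work record (compensation schedule) table.
public class WorkRecordDao {

    // Inserted in the same transaction as the first data operation (step S201).
    public void insert(Connection firstDbConnection, String businessKey, String operationContent) throws SQLException {
        try (PreparedStatement ps = firstDbConnection.prepareStatement(
                "INSERT INTO work_record (business_key, operation_content, created_at) VALUES (?, ?, NOW())")) {
            ps.setString(1, businessKey);
            ps.setString(2, operationContent);
            ps.executeUpdate();
        }
    }

    // Deleted once the second data operation has been executed successfully (step S202).
    public void delete(Connection firstDbConnection, String businessKey) throws SQLException {
        try (PreparedStatement ps = firstDbConnection.prepareStatement(
                "DELETE FROM work_record WHERE business_key = ?")) {
            ps.setString(1, businessKey);
            ps.executeUpdate();
        }
    }
}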
Referring to fig. 3, in an exemplary embodiment of the present disclosure, the method may further include the steps of:
step S301: and starting a query task at preset time intervals through a preset task scheduling tool so as to obtain the compensation scheduling table in the first database.
Illustratively, the preset task scheduling tool may be the open-source Quartz task scheduling tool. The preset time may be 5 to 10 minutes, or any other value chosen according to the service scenario; it is not limited in this respect. The Quartz task scheduling tool may execute a query task at each preset interval, for example every 10 minutes, query the Work record table in the first database, and take out the records inserted 10 minutes earlier.
Step S302: and judging whether the operation content information in the compensation scheduling table is deleted or not, if not, extracting the operation content information, and executing the second data operation on the second database.
Illustratively, continuing with the above example, if a record in the Work record table has still not been deleted after 10 minutes, the operation on the second database corresponding to the execution information in that record (e.g., the update operation) was not executed for some reason and needs to be compensated. For example, the undeleted record indicates that the database operation to be performed is changing the state of application form table R0001 in the second database from 0 to 1. The execution information in the record can be extracted and the update operation on the second database performed based on it to complete the compensation.
Step S303: and after the second data operation on the second database is executed, deleting the operation content information.
For example, after the update operation is completed, the corresponding record in the Work record table is deleted, indicating that the database consistency compensation operation is complete, and the next round of task scheduling can proceed.
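For illustration only, a Quartz job that runs this compensation pass every 10 minutes might be wired up as sketched below; the job and trigger names, and the DAO calls indicated in comments, are assumptions rather than part of the disclosed embodiment.

import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

// Minimal Quartz sketch of the compensation scheduling in steps S301-S303.
public class CompensationJob implements Job {

    @Override
    public void execute(JobExecutionContext context) {
        // Pseudocode for the compensation pass (DAO methods are hypothetical):
        // for (WorkRecord record : workRecordDao.findUndeletedOlderThanMinutes(10)) {
        //     executeSecondDataOperation(record.getOperationContent());
        //     workRecordDao.delete(record);
        // }
    }

    public static void main(String[] args) throws Exception {
        JobDetail job = JobBuilder.newJob(CompensationJob.class)
                .withIdentity("compensationJob").build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("compensationTrigger")
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInMinutes(10)
                        .repeatForever())
                .build();
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}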
In the exemplary embodiment, task scheduling is used to compensate database operations asynchronously, so that eventual data consistency is ensured even if an earlier database operation was not executed. Even in situations such as a server crash, this compensation mechanism guarantees the final consistency of the data.
The method steps in the above embodiments of the present disclosure are further described below with reference to a specific application scenario.
In this application scenario, three databases are used as an example: database A, database B and database C. The data operations executed by the multiple databases in the service line are as follows: database A needs to subtract 10 yuan from the amount table corresponding to application form R0001 (corresponding to the first data operation), database B needs to update the state of R0001 in the application form master table from 0 to 1 (corresponding to the second data operation), and database C needs to delete the R0001 record from the application form detail table (corresponding to the third data operation). The service line must ensure the consistency of the three data operations, i.e., all three succeed together.
Assuming that the data operation of database A is the most important on the service line, the transaction of database A is started first, and the following operation steps 401-404 are executed within that transaction:
Step 401: Subtract 10 from the amount in the amount table corresponding to R0001 in database A.
Step 402: Insert a compensation record into the Work record table in database A, for use by subsequent database compensation and task-scheduled execution. The compensation record contains the execution information for changing the state of application form master table R0001 in the B database from 0 to 1 and for deleting the R0001 record from the application form detail table in the C database.
Step 403: Determine whether a concurrent operation currently exists by means of the setNx method of the Redis cache. If the cache call fails, a concurrent operation exists, i.e., the current service line is already being executed, so the database is rolled back and the call ends. If the cache call succeeds, no concurrent operation currently exists and subsequent operations can continue: the execution information to be applied to database B and database C is stored in the Redis cache under the cache KEY value 'R0001-B-C' for subsequent multi-threaded use. The operations on database A are then complete, and the current database transaction is committed.
Step 404: Execute the database operations of the B database and the C database through Java's own thread pool tool ThreadPoolExecutor.
Illustratively, the Redis cache is called, and the execution information of the databases still to be executed is obtained through the cache KEY value 'R0001-B-C'. In this embodiment, the execution information is: change the state of application form master table R0001 in the B database from 0 to 1, and delete the R0001 record from the application form detail table in the C database. The database operations are then performed: the state of the R0001 record in the application form master table of the B database is updated from 0 to 1, and the R0001 record in the application form detail table of the C database is deleted. After these database operations succeed, the compensation record in the Work record table used for task scheduling is deleted. Because the operations on the B and C databases have now been executed, no further compensation via task scheduling is needed.
If the update operations on the B database or the C database executed by the current multithreading cannot be performed, for example because of a database connection failure or a machine restart, data consistency is subsequently restored through task scheduling against the Work record table.
The operation of compensating for data consistency through task scheduling is described next. In this example application scenario, the open-source task scheduling tool Quartz is adopted; a query task is executed at each preset interval, for example every 10 minutes, to query the Work record table in database A and take out compensation records inserted 10 minutes earlier. If a compensation record has not been deleted within 10 minutes, the corresponding B and C database operations were not performed and compensation processing is required.
Assuming that the compensation record indicates that the unexecuted database operations are changing the state of application form master table R0001 in the B database from 0 to 1 and deleting the R0001 record from the application form detail table in the C database, the execution information in the compensation record is extracted and the two database operations on the B database and the C database (corresponding to the second data operation and the third data operation) are executed respectively. The compensation record in the corresponding Work record table is deleted after execution, indicating that the database consistency compensation is complete, and the next round of task scheduling can proceed.
Only three databases, database A, database B and database C, are involved in this application scenario; the method still applies if more databases are involved. The most important database operation is executed first, and the other database operations achieve data consistency through multi-threaded execution or, if necessary, subsequent task-scheduled compensation, so that data consistency across the multiple databases is well ensured.
It should be noted that although the various steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc. Additionally, it will also be readily appreciated that the steps may be performed synchronously or asynchronously, e.g., among multiple modules/processes/threads.
Further, in the present exemplary embodiment, a multi-database data processing apparatus is also provided. Referring to fig. 4, the apparatus 100 may include an operation request acquisition module 101, a first operation execution module 102, a cache concurrency determination module 103, and a second operation execution module 104. Wherein:
the operation request obtaining module 101 is configured to obtain a first operation request and a second operation request corresponding to a first database and a second database respectively.
The first operation execution module 102 is configured to execute a first data operation on the first database according to the first operation request.
The cache concurrency determining module 103 is configured to, when a preset method in a preset cache is called to determine that there is no concurrent operation currently, store the operation content information of the second operation request in the preset cache.
The second operation execution module 104 is configured to call the operation content information from the preset cache, and execute a second data operation on the second database through a preset Java multithread pool.
In an exemplary embodiment of the present disclosure, the preset cache includes a Redis cache, and the preset method is a setNx method in the Redis cache.
In an exemplary embodiment of the disclosure, the apparatus 100 may further include an information setting module (not shown) configured to set a compensation schedule in the first database after the first data operation of the first database is executed according to the first operation request. Wherein the compensation schedule table stores therein the operation content information of the second operation request. The second operation executing module 104 is further configured to delete the operation content information in the compensation scheduling table after the second data operation on the second database is successfully executed.
In an exemplary embodiment of the present disclosure, the apparatus 100 may further include a data operation compensation module (not shown) configured to start, by a preset task scheduling tool, a query task at preset time intervals to obtain the compensation schedule in the first database; judging whether the operation content information in the compensation scheduling table is deleted or not, if not, extracting the operation content information, and executing the second data operation on the second database; and after the second data operation on the second database is executed, deleting the operation content information.
In an exemplary embodiment of the present disclosure, the preset task scheduling tool may include, but is not limited to, an open source Quartz task scheduling tool, and the like.
In an exemplary embodiment of the present disclosure, the preset time may be 5 to 10 minutes, but is not limited thereto.
In an exemplary embodiment of the present disclosure, the first data operation and the second data operation may be different data operations.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units. The components shown as modules or units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the present disclosure. One of ordinary skill in the art can understand and implement this without inventive effort.
Referring to fig. 5, in an exemplary embodiment of the present disclosure, a computer-readable storage medium 300 is further provided, on which a computer program is stored, which when executed by, for example, a processor, may implement the steps of the multi-database data processing method described in any one of the above embodiments. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above-mentioned section of the multi-database data processing method of the present description, when said program product is run on the terminal device.
A program product for implementing the above-described multi-database data processing method according to an embodiment of the present invention may employ a portable compact disc read only memory (CD-ROM) and include program codes, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
In an exemplary embodiment of the present disclosure, there is also provided an electronic device, which may include a processor, and a memory for storing executable instructions of the processor. Wherein the processor is configured to perform the steps of the multi-database data processing method in any of the above embodiments via execution of the executable instructions.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," module "or" system.
An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 6. The electronic device 600 shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 6, the electronic device 600 is embodied in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one storage unit 620, a bus 630 that connects the various system components (including the storage unit 620 and the processing unit 610), a display unit 640, and the like.
Wherein the storage unit stores program code executable by the processing unit 610 to cause the processing unit 610 to perform steps according to various exemplary embodiments of the present invention described in the above-mentioned multiple database data processing method section of the present specification. For example, the processing unit 610 may perform the steps as shown in fig. 1.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)6201 and/or a cache memory unit 6202, and may further include a read-only memory unit (ROM) 6203.
The memory unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 630 may be one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 660. The network adapter 660 may communicate with other modules of the electronic device 600 via the bus 630. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, or a network device, etc.) to execute the above-mentioned multi-database data processing method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (9)

1. A method for processing data in multiple databases, the method comprising:
acquiring a first operation request and a second operation request corresponding to a first database and a second database respectively; wherein the first database and the second database are databases for the same service line, and the first operation request and the second operation request are different operation requests belonging to the same service line;
executing a first data operation on the first database according to the first operation request;
when a preset method in a preset cache is called and determines that no concurrent operation currently exists, storing the operation content information of the second operation request into the preset cache;
and calling the operation content information from the preset cache, and executing second data operation on the second database through a preset Java multithreading pool.
2. The method for processing the data of multiple databases according to claim 1, wherein the preset cache comprises a Redis cache, and the preset method is a setNx method in the Redis cache.
3. The method for processing multiple database data according to claim 2, said method further comprising:
after the first data operation of the first database is executed according to the first operation request, setting a compensation schedule in the first database; wherein the compensation schedule table stores the operation content information of the second operation request;
and deleting the operation content information in the compensation scheduling table after the second data operation on the second database is successfully executed.
4. The method for processing multiple database data according to claim 3, said method further comprising:
starting a query task at preset time intervals through a preset task scheduling tool to acquire the compensation scheduling table in the first database;
judging whether the operation content information in the compensation scheduling table is deleted or not, if not, extracting the operation content information, and executing the second data operation on the second database;
and after the second data operation on the second database is executed, deleting the operation content information.
5. The method for processing the data of multiple databases of claim 4, wherein the predetermined task scheduling tool comprises an open source Quartz task scheduling tool.
6. The method for processing the data of multiple databases according to claim 4, wherein the predetermined time is 5 to 10 minutes.
7. An apparatus for processing multiple database data, the apparatus comprising:
the operation request acquisition module is used for acquiring a first operation request and a second operation request which respectively correspond to the first database and the second database; wherein the first database and the second database are databases for the same service line, and the first operation request and the second operation request are different operation requests belonging to the same service line;
the first operation execution module is used for executing first data operation on the first database according to the first operation request;
the cache concurrency judging module is used for calling a preset method in a preset cache to judge whether a concurrent operation currently exists and, when no concurrent operation exists, storing the operation content information of the second operation request into the preset cache;
and the second operation execution module is used for calling the operation content information from the preset cache and executing second data operation on the second database through a preset Java multithreading pool.
8. A computer-readable storage medium having stored thereon a computer program, the program being adapted to perform the steps of the multi-database data processing method as claimed in any one of claims 1 to 6 when executed by a processor.
9. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the steps of the multiple database data processing method of any of claims 1-6 via execution of the executable instructions.
CN201710261410.5A 2017-04-20 2017-04-20 Multi-database data processing method and device, storage medium and electronic equipment Active CN108733704B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710261410.5A CN108733704B (en) 2017-04-20 2017-04-20 Multi-database data processing method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710261410.5A CN108733704B (en) 2017-04-20 2017-04-20 Multi-database data processing method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN108733704A CN108733704A (en) 2018-11-02
CN108733704B true CN108733704B (en) 2020-11-20

Family

ID=63933614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710261410.5A Active CN108733704B (en) 2017-04-20 2017-04-20 Multi-database data processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN108733704B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112416955B (en) * 2020-11-18 2022-02-22 四川长虹电器股份有限公司 Cache consistency problem solving method based on high concurrency environment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7290015B1 (en) * 2003-10-02 2007-10-30 Progress Software Corporation High availability via data services
CN103412786A (en) * 2013-08-29 2013-11-27 苏州科达科技股份有限公司 High performance server architecture system and data processing method thereof
CN105989065A (en) * 2015-02-09 2016-10-05 苏宁云商集团股份有限公司 Flash shot data processing method and system
CN106021335A (en) * 2016-05-06 2016-10-12 北京奇虎科技有限公司 A database accessing method and device
CN106202505A (en) * 2016-07-20 2016-12-07 北京京东尚科信息技术有限公司 Data processing method and system thereof
CN106341345A (en) * 2016-08-18 2017-01-18 北京奇虎科技有限公司 Concurrent request processing method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8751639B2 (en) * 2011-04-27 2014-06-10 Rackspace Us, Inc. Event queuing and distribution system


Also Published As

Publication number Publication date
CN108733704A (en) 2018-11-02

Similar Documents

Publication Publication Date Title
CN110569305B (en) Block synchronization method, device, medium and computing equipment
CN109271450B (en) Database synchronization method, device, server and storage medium
US10261869B2 (en) Transaction processing using torn write detection
US11281445B2 (en) Method, apparatus, device and storage medium for upgrading application
US8762929B2 (en) System and method for exclusion of inconsistent objects from lifecycle management processes
US8688628B2 (en) Nested queued transaction manager
CN110019498B (en) Log synchronization method and device, storage medium and electronic equipment
US9652492B2 (en) Out-of-order execution of strictly-ordered transactional workloads
CN110399227B (en) Data access method, device and storage medium
CN109683984B (en) Data hot loading method and device, electronic equipment and computer readable storage medium
CN111752901A (en) Index creation method and device, electronic equipment and storage medium
CN111125040A (en) Method, apparatus and storage medium for managing redo log
CN110427258A (en) Scheduling of resource control method and device based on cloud platform
CN108733704B (en) Multi-database data processing method and device, storage medium and electronic equipment
CN111078693B (en) Method and device for generating database record primary key
US10761940B2 (en) Method, device and program product for reducing data recovery time of storage system
US11397570B2 (en) Abort installation of firmware bundles
CN115543393B (en) Upgrade method, electronic device and storage medium
WO2023111910A1 (en) Rolling back database transaction
CN111176705A (en) Feature library upgrading method and device
CN115495436A (en) Database upgrading method and device
US20150046414A1 (en) Computer product, managing apparatus, and managing method
CN112486421B (en) Data storage method and device, electronic equipment and storage medium
CN117539960A (en) Database data synchronization method, storage medium and device
CN117873748A (en) Transaction chain execution method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant