CN114691383A - Data processing method, device, equipment and storage medium


Info

Publication number
CN114691383A
CN114691383A
Authority
CN
China
Prior art keywords
processed
data
task
thread
operation mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011630611.6A
Other languages
Chinese (zh)
Inventor
朱伟 (Zhu Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Holding Co Ltd
Original Assignee
Jingdong Technology Holding Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Holding Co Ltd filed Critical Jingdong Technology Holding Co Ltd
Priority to CN202011630611.6A priority Critical patent/CN114691383A/en
Publication of CN114691383A publication Critical patent/CN114691383A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/546 Message passing systems or structures, e.g. queues
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/5038 Allocation of resources to service a request, the resource being a machine, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • G06F 9/547 Remote procedure calls [RPC]; Web services
    • G06Q 40/04 Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
    • G06Q 40/06 Asset management; Financial planning or analysis
    • G06F 2209/5018 Thread allocation
    • G06F 2209/544 Remote
    • G06F 2209/547 Messaging middleware
    • G06F 2209/548 Queue

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • General Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • Operations Research (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a data processing method, apparatus, device, and storage medium, which can be applied to any node device in a data processing system. The node device acquires a plurality of tasks to be processed from a message queue of a message server, where each task to be processed includes a user's operation data for a target product and the data operation mode corresponding to that operation data. The node device allocates a thread to each task to be processed according to its data operation mode and executes the tasks in parallel through a plurality of threads to obtain a data processing result for each task. On the one hand, because the message server distributes the tasks to be processed to each node device in the data processing system, data decoupling between node devices is realized and the dependency between them is reduced. On the other hand, each node device can concurrently process tasks with different data operation modes, which improves the data processing efficiency of the system.

Description

Data processing method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the field of information technology, and in particular, to a data processing method, device, equipment and storage medium.
Background
With the development of internet finance and people's growing financial awareness, financial applications (APPs) have become important traffic entry points for personal investment, and users buy financial products such as stocks and funds through these APPs.
For enterprises providing various financial products, a complete data processing system needs to be established to obtain users' transaction data and calculate the various parameter indexes of each financial product, so that users can conveniently query related information. Existing data processing systems use a synchronous Remote Procedure Call (RPC) interface scheme, that is, data consistency is ensured by having the servers of the data processing system call each other through RPC interfaces.
In this scheme, the servers are strongly dependent on each other, and the data processing efficiency of the system is low.
Disclosure of Invention
The embodiment of the application provides a data processing method, a data processing device, data processing equipment and a storage medium, and improves the data processing efficiency of a system.
A first aspect of an embodiment of the present application provides a data processing method, including:
acquiring a plurality of to-be-processed tasks from a message queue of a message server, wherein each to-be-processed task comprises operation data of a user for a target product and a data operation mode corresponding to the operation data;
distributing a thread for each task to be processed according to the data operation mode;
and executing the plurality of tasks to be processed in parallel through a plurality of threads to obtain a data processing result of each task to be processed.
In an embodiment of the present application, the allocating a thread to each of the tasks to be processed according to the data operation mode includes:
performing hash transformation on the data operation mode in each task to be processed to obtain a hash value corresponding to the data operation mode;
and determining the thread number corresponding to each task to be processed according to the hash value and the thread number preset by the node equipment.
In an embodiment of the application, the determining, according to the hash value and a thread number preset by the node device, a thread number corresponding to each to-be-processed task includes:
and obtaining the thread number corresponding to each task to be processed by taking the modulus of the thread number through the hash value.
In an embodiment of the present application, if there are at least two tasks to be processed in the plurality of tasks to be processed that have the same data operation mode corresponding to the operation data, the method further includes:
and sequentially processing the at least two tasks to be processed in the threads corresponding to the same data operation mode according to the queue sequence of the at least two tasks to be processed in the message queue.
In an embodiment of the application, for each of the threads, before processing each of the to-be-processed tasks in the thread, the method further includes:
executing a first command in the thread, wherein the first command is used for requesting to lock a data operation mode of the task to be processed;
and determining whether to execute the task to be processed in the thread according to a return result of the first command.
In an embodiment of the application, the determining whether to execute the task to be processed in the thread according to the returned result of the first command includes:
if the return result is a first value, executing the task to be processed in the thread; or
And if the return result is the second value, waiting for other threads to release the lock of the data operation mode.
In an embodiment of the present application, after the task to be processed has been processed in the thread, or when the thread times out while executing the task to be processed, the method further includes:
and releasing the lock of the data operation mode of the task to be processed in the thread.
In an embodiment of the present application, executing the to-be-processed task in the thread includes:
acquiring a historical data processing result corresponding to the data operation mode of the task to be processed from a database server;
and updating the historical processing result according to the historical processing result and the operation data of the task to be processed.
A second aspect of an embodiment of the present application provides a data processing apparatus, including:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a plurality of tasks to be processed from a message queue of a message server, and each task to be processed comprises operation data of a user aiming at a target product and a data operation mode corresponding to the operation data;
the processing module is used for distributing threads to each task to be processed according to the data operation mode;
and executing the plurality of tasks to be processed in parallel through a plurality of threads to obtain a data processing result of each task to be processed.
In an embodiment of the present application, the processing module is specifically configured to: performing hash transformation on the data operation mode in each task to be processed to obtain a hash value corresponding to the data operation mode;
and determining the thread number corresponding to each task to be processed according to the hash value and the thread number preset by the node equipment.
In an embodiment of the application, the processing module is specifically configured to obtain the thread number corresponding to each task to be processed by taking the hash value modulo the number of threads.
In an embodiment of the application, if there are at least two tasks to be processed in the plurality of tasks to be processed that have the same data operation mode corresponding to the operation data, the processing module is further configured to:
and sequentially processing the at least two tasks to be processed in the threads corresponding to the same data operation mode according to the queue sequence of the at least two tasks to be processed in the message queue.
In an embodiment of the application, for each of the threads, before the processing module processes each of the to-be-processed tasks in the thread, the processing module is further configured to:
executing a first command in the thread, wherein the first command is used for requesting to lock a data operation mode of the task to be processed;
and determining whether to execute the task to be processed in the thread according to a return result of the first command.
In an embodiment of the present application, the processing module is specifically configured to:
if the return result is a first value, executing the task to be processed in the thread; or
And if the return result is the second value, waiting for other threads to release the lock of the data operation mode.
In an embodiment of the application, the processing module is further configured to, after the thread finishes processing the task to be processed, or when the thread executes the task to be processed and times out:
and releasing the lock of the data operation mode of the task to be processed in the thread.
In an embodiment of the application, the obtaining module is further configured to obtain, from a database server, a historical data processing result corresponding to a data operation mode of the task to be processed;
the processing module is further configured to update the historical data processing result according to the historical data processing result and the operation data of the task to be processed.
A third aspect of embodiments of the present application provides an electronic device, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any of the first aspects.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium having stored thereon a computer program for execution by a processor to perform the method according to any one of the first aspect.
A fifth aspect of embodiments of the present application provides a computer program product comprising a computer program for execution by a processor to perform the method of any one of the first aspects.
The embodiment of the application provides a data processing method, apparatus, device, and storage medium, which can be applied to any node device in a data processing system. The node device acquires a plurality of tasks to be processed from a message queue of a message server, where each task to be processed includes a user's operation data for a target product and the data operation mode corresponding to that operation data. The node device allocates a thread to each task to be processed according to its data operation mode and executes the tasks in parallel through a plurality of threads to obtain a data processing result for each task. On the one hand, because the message server distributes the tasks to be processed to each node device in the data processing system, data decoupling between node devices is realized and the dependency between them is reduced. On the other hand, each node device can concurrently process tasks with different data operation modes, which improves the data processing efficiency of the system.
Drawings
Fig. 1 is a schematic system architecture diagram of a data processing method according to an embodiment of the present application;
fig. 2 is a scene interaction diagram of a data processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
fig. 6 is a hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be understood that the terms "comprises" and "comprising," and any variations thereof, as used herein, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements explicitly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, a brief description will be given of terms related to the embodiments of the present application.
Asset management business, abbreviated as asset management, is a new type of business developed by securities institutions on the basis of traditional business. It mainly refers to financial investment companies such as securities, futures, and fund firms acting as asset managers and operating client assets according to the manner, conditions, requirements, and restrictions agreed in an asset management contract, thereby providing clients with investment management services for securities and other financial products.
Fast Message Queue (FMQ): a system or platform, also called message middleware or a message server, used for communication between applications or between service nodes, which is accomplished mainly through message passing.
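The role of the message server can be modeled in-process with a blocking queue; this is only an illustrative sketch (the class and field names are hypothetical, and a real FMQ deployment is a separate middleware service):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// In-process stand-in for the message server: clients enqueue tasks in
// operation-time order, and node devices consume them from the queue.
public class MessageServerModel {
    // A task to be processed: operation data plus its data operation mode.
    public static final class Task {
        public final String operationMode;  // e.g. portfolio number "code-001"
        public final String operationData;  // e.g. "buy 200 shares of 600xx1"
        public Task(String mode, String data) {
            this.operationMode = mode;
            this.operationData = data;
        }
    }

    private final BlockingQueue<Task> queue = new LinkedBlockingQueue<>();

    // Called by the service party (client side) to publish operation data.
    public void publish(Task t) {
        queue.add(t);
    }

    // Called by a node device; returns up to max tasks in queue order.
    public List<Task> fetch(int max) {
        List<Task> batch = new ArrayList<>();
        queue.drainTo(batch, max);
        return batch;
    }
}
```

Because `LinkedBlockingQueue` is FIFO, tasks are handed to node devices in the same order in which the clients published them, which the later ordering guarantees rely on.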
Distributed lock: a distributed coordination technique that prevents multiple processes in a distributed system from interfering with each other by scheduling them reasonably. Under a distributed lock, a given method can be executed by only one thread of one service node at a time.
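The application does not name a specific lock implementation, so the sketch below models the "first command" and its two return values with an in-process map standing in for the shared lock store; a real deployment would use shared middleware such as a key-value store:

```java
import java.util.concurrent.ConcurrentHashMap;

// Minimal model of the distributed lock on a data operation mode.
// The ConcurrentHashMap stands in for the shared lock table that all
// node devices would consult in a real distributed deployment.
public class ModeLock {
    private static final ConcurrentHashMap<String, String> LOCKS = new ConcurrentHashMap<>();

    // The "first command": request to lock a data operation mode for an owner.
    // Returns true (the "first value": lock acquired, execute the task)
    // or false (the "second value": wait for the holder to release the lock).
    public static boolean tryLock(String operationMode, String owner) {
        return LOCKS.putIfAbsent(operationMode, owner) == null;
    }

    // Release after the task finishes (or on timeout); only the current
    // holder may release, which remove(key, value) enforces atomically.
    public static boolean unlock(String operationMode, String owner) {
        return LOCKS.remove(operationMode, owner);
    }
}
```

In a shared store the same semantics are usually obtained with an atomic set-if-absent operation plus an expiry time, so a crashed holder cannot block a mode forever.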
Position (holding) refers to the total amount of investment products in an investment user's securities account; holding no products is called an empty position, and holding a product is called a position.
In the financial services industry, in order to improve the service quality of the asset management business, a service enterprise needs to establish a complete data processing system. The data processing system must undertake tasks such as position calculation and trade derivation: calculating parameter indexes such as portfolio position change, position market value, portfolio net value, and product scale from trade execution information, and deriving indexes such as portfolio net value and trade change from the valuation table information produced at the end of each day.
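One of these tasks, position calculation, amounts to applying a trade's operation data to the historical position of a portfolio. A hypothetical sketch (the class, field names, and aggregation rule are illustrative assumptions, not the application's specified algorithm):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative position calculation: apply one trade to the stored
// historical position of a portfolio and return the updated holding.
public class PositionCalc {
    // portfolio number -> (product code -> held quantity)
    private final Map<String, Map<String, Long>> positions = new HashMap<>();

    // quantity > 0 is a buy, quantity < 0 is a sell.
    public long applyTrade(String portfolio, String productCode, long quantity) {
        Map<String, Long> holding =
                positions.computeIfAbsent(portfolio, k -> new HashMap<>());
        long updated = holding.getOrDefault(productCode, 0L) + quantity;
        holding.put(productCode, updated);
        return updated; // new position after the trade
    }
}
```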
In the current data processing system, each service provides a synchronous Remote Procedure Call (RPC) interface, in which parameter indexes such as portfolio position change, position market value, portfolio net value, and product scale are calculated serially in a single thread, and indexes such as portfolio net value and trade change are derived. In practice, this synchronous serial execution of code blocks is implemented with the Java synchronized lock.
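The prior-art pattern looks roughly like the following sketch (names hypothetical). The point it illustrates is the limitation discussed next: a `synchronized` block serializes work only within one JVM, so consistency holds per process and cannot be preserved across horizontally scaled instances:

```java
// Sketch of the prior-art single-threaded synchronized-lock pattern:
// every RPC request funnels through one JVM-local lock, so all index
// calculations are serialized inside this one process.
public class LegacyIndexService {
    private final Object lock = new Object();
    private long processed = 0;

    public long handleRpcRequest(String trade) {
        synchronized (lock) { // single-threaded serial section
            // ... compute position change, market value, net value here ...
            return ++processed;
        }
    }
}
```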
In the synchronous RPC interface scheme, the service nodes depend on each other, which increases the difficulty of service governance. Taking a data processing system for the asset management business as an example, the system receives various service data initiated by different users at clients; if the service nodes in the system depend on each other and are not sufficiently decoupled, excessive synchronous RPC dependencies arise between services, which affects the efficiency and accuracy of data processing.
In addition, with the single-threaded synchronized-lock implementation, the service cannot be scaled horizontally. A single-threaded synchronized lock can ensure data consistency between services, but the services cannot be expanded in a distributed manner; once a performance bottleneck is reached, service nodes cannot be scaled out and the overall throughput of the system cannot be improved.
In view of the above problems, embodiments of the present application provide a data processing method to solve the problems of inter-service dependence and the difficulty of horizontal expansion in current systems, and to improve the overall throughput of the system. The overall idea of the scheme is as follows. Because messages are stored and distributed directly by the message middleware, a service party only needs to care about how to send messages to the message middleware server, and each service node only needs to care about how to obtain messages from it; this decouples the service nodes and reduces the dependence between them. Each service node in the system can be configured with multiple threads, each of which listens to tasks in its memory queue and executes them in queue order. When a thread processes a task, it must first acquire the distributed lock corresponding to the task's data operation mode, which ensures the consistency of data processing. To further improve the overall throughput of the system, multiple service nodes can be deployed to scale the system horizontally.
The system architecture of the technical solution provided by the embodiments of the present application is briefly described below with reference to the accompanying drawings. Exemplarily, fig. 1 is a schematic diagram of a system architecture of a data processing method provided in an embodiment of the present application, and as shown in fig. 1, the data processing system provided in the embodiment includes:
a plurality of clients (e.g., clients 11, 12, 13 shown in fig. 1), a message server 14, a plurality of node devices (e.g., node devices 15, 16, 17 shown in fig. 1), and a database server 18. The plurality of clients are connected with the message server, the message server is connected with each node device, and each node device is connected with the database server.
Fig. 2 is an interaction diagram of a scenario of a data processing method according to an embodiment of the present application, where the scenario includes a service party (i.e., any client in fig. 1), a fast message queue server (i.e., the message server 14 in fig. 1), and a plurality of node devices. Different from the system architecture shown in fig. 1, in this embodiment, there are multiple database servers, such as the database 1 and the database 2 in fig. 2, and the multiple databases are connected to each other to form a distributed database system, where data of each database in the distributed database system are shared with each other.
For ease of understanding, the following takes the resource management service as an example, and briefly describes the interaction process between the devices in the data processing system:
(1) Users initiate investment transactions through different clients, and the generated operation data (or transaction data) is pushed to the message server, which adds it to its message queue in order of user operation time. Each node device obtains tasks to be processed from the message server, where each task includes a user's operation data and the data operation mode corresponding to that operation data.
(2) Each node device can execute a plurality of tasks to be processed in parallel at the same time, but tasks with the same data operation mode can be executed by only one thread of one node device at a time, which guarantees the consistency of data processing.
(3) After any node device completes a task, it stores the data processing result in the database server, so that the result can later be obtained by this or any other node device.
The technical solution of the present application will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 3 is a schematic flowchart of a data processing method according to an embodiment of the present application. The data processing method provided by this embodiment may be applied to the node device shown in fig. 1 or fig. 2, and as shown in fig. 3, the data processing method includes the following steps:
step 201, a plurality of tasks to be processed are obtained from a message queue of a message server.
Each task to be processed comprises operation data of a user for a target product and a data operation mode corresponding to the operation data.
In this embodiment, the message server receives operation data initiated by different users through the client, and adds the operation data to a message queue of the message server according to a time sequence generated by the operation data, so that each node device in the data processing system acquires a task to be processed from the message queue.
The operation data refers to a user's transaction data for a certain target product. The target product may be a financial product such as a stock, fund, or future, or an entity product or service product in another field, which is not limited in the embodiments of the present application. For example, user A purchases 10,000 yuan of a one-year fund product, and user B purchases 200 shares of the stock with code 600xxx.
The data operation mode refers to the investment portfolio associated with the operation data. For example, user A creates two portfolios: portfolio 1 is 20% stocks, 30% futures, and 50% funds, and portfolio 2 is 30% real estate, 40% stocks, and 30% gold. If user A associates portfolio 1 when buying stock 600xx1, the data operation mode corresponding to that transaction data is portfolio 1, numbered code-001. If user A associates portfolio 2 when buying stock 600xx2, the data operation mode corresponding to that transaction data is portfolio 2, numbered code-002.
Step 202, allocating a thread to each task to be processed according to the data operation mode.
The node device acquires a plurality of tasks to be processed from the message queue of the message server and allocates a thread to each task according to the data operation mode corresponding to the operation data in that task. Specifically, the node device performs a hash transformation on the data operation mode corresponding to the operation data in a task to obtain a hash value, and determines the thread number for the task from the hash value and the number of threads preset on the node device: the thread number is obtained by taking the hash value modulo the number of threads.
For example, assume the number of threads preset by the node device is 4, that is, the node device can execute at most 4 tasks to be processed in parallel at the same time. The data operation mode corresponding to the operation data in a task received by the node device can be represented by a portfolio number, for example code-001. A hash value, say 101, is calculated from the portfolio number, and the hash value is taken modulo the thread count: 101 % 4 = 1, so the task enters the 1st memory queue and is processed by thread 1.
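The allocation rule above can be sketched in a few lines. The application does not specify the hash function, so Java's `String.hashCode` is used here purely for illustration; `Math.floorMod` keeps the slot non-negative even when the hash is negative:

```java
// Thread allocation: hash the data operation mode (portfolio number) and
// take it modulo the node's preset thread count to pick a memory queue.
public class ThreadAllocator {
    public static int assign(String operationMode, int threadCount) {
        return Math.floorMod(operationMode.hashCode(), threadCount);
    }
}
```

Because the mapping is deterministic, every task carrying the same operation mode lands on the same thread of the same node, which is what makes the sequential-processing guarantee below possible.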
In an embodiment of the present application, if the data operation modes corresponding to the operation data of at least two tasks to be processed in the plurality of tasks to be processed are the same, the data processing method further includes: the node device may sequentially process the at least two to-be-processed tasks in the thread corresponding to the same data operation mode according to a queue order of the at least two to-be-processed tasks in the message queue.
Specifically, the node device sequentially adds the operation data in the same data operation mode to the memory queue of the thread corresponding to the data operation mode according to the queue order in the message queue, so that the operation data in the same data operation mode is sequentially processed, and the consistency of data processing results is ensured.
Step 203, executing the plurality of tasks to be processed in parallel through a plurality of threads to obtain a data processing result of each task to be processed.
In this embodiment, each node device in the data processing system may start multiple threads, for example 4 threads, to process the operation data of each user in the system. Since multiple node devices exist in the data processing system at the same time, multiple parallel threads exist in the system; for example, if the system includes 5 node devices, the total number of parallel threads in the system is 5 × 4 = 20. In practical applications, node devices can be scaled out horizontally according to actual requirements, improving the overall throughput of the system.
As can be seen from the above example, the data processing method provided in this embodiment ensures that the operation data of the same investment portfolio on the same node device always enter the same memory queue in order and are sequentially processed by the thread corresponding to that queue. In this way, each node device can concurrently process tasks to be processed across multiple threads, that is, each node device can simultaneously process the operation data of multiple investment portfolios, while the operation data of the same investment portfolio are processed sequentially, thereby ensuring the consistency of the data processing results within the node device.
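A minimal sketch of the per-thread memory queues described above, with illustrative names and a CRC32 hash standing in for whatever hash the node device actually uses: tasks sharing an investment portfolio number land in the same memory queue and are drained sequentially by a dedicated worker thread.

```python
# Per-thread memory queues: route by hash % thread count, drain in FIFO order.
import queue
import threading
import zlib

NUM_THREADS = 4
mem_queues = [queue.Queue() for _ in range(NUM_THREADS)]
processed = [[] for _ in range(NUM_THREADS)]  # per-thread processing log

def dispatch(portfolio: str, op_data: str) -> int:
    """Route a task to the memory queue chosen by hash % thread count."""
    idx = zlib.crc32(portfolio.encode()) % NUM_THREADS
    mem_queues[idx].put((portfolio, op_data))
    return idx

def worker(idx: int) -> None:
    """Drain one memory queue sequentially; None is a stop sentinel."""
    while True:
        task = mem_queues[idx].get()
        if task is None:
            break
        processed[idx].append(task)  # stands in for the real data processing

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_THREADS)]
for t in threads:
    t.start()

idx1 = dispatch("code-001", "buy fund 1")
idx2 = dispatch("code-001", "buy stock 600xx2")
for q in mem_queues:
    q.put(None)
for t in threads:
    t.join()

# Same portfolio -> same queue, and its operations keep their arrival order.
assert idx1 == idx2
assert processed[idx1] == [("code-001", "buy fund 1"),
                           ("code-001", "buy stock 600xx2")]
```

Because each queue has exactly one consumer thread, ordering within a portfolio is preserved while different portfolios proceed in parallel.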
The data processing method provided by this embodiment can be applied to any node device in a data processing system. The node device acquires a plurality of tasks to be processed from the message queue of the message server, where each task to be processed includes operation data of a user for a target product and a data operation mode corresponding to the operation data; the node device allocates a thread to each task to be processed according to the data operation mode, and executes the plurality of tasks to be processed in parallel through a plurality of threads to obtain a data processing result of each task to be processed. On the one hand, in this embodiment the message server distributes the tasks to be processed to each node device in the data processing system, which decouples the data flows between node devices and reduces the dependency relationships between them. On the other hand, each node device can concurrently process tasks to be processed that have different data operation modes, which improves the data processing efficiency of the system.
On the basis of the above embodiments, the following embodiment shows a process in which a node device of a data processing system performs data processing based on distributed locks.
Fig. 4 is a schematic flow diagram of a data processing method provided in an embodiment of the present application. The data processing method provided in this embodiment is also applicable to the node device shown in fig. 1 or fig. 2. As shown in fig. 4, the data processing method includes the following steps:
Step 301, executing a first command in a thread.
The first command is used for requesting to lock a data operation mode of the task to be processed.
Step 302, according to the return result of the first command, determining whether to execute the task to be processed in the thread.
And if the return result is the first value, executing the task to be processed in the thread.
And if the return result is the second value, waiting for other threads to release the lock of the data operation mode.
In one embodiment of the present application, the first command is a setnx command. The returned result of the setnx command includes a first value and a second value. Illustratively, the first value is 1 and the second value is 0. If the return result is 1, the data operation mode of the task to be processed is not locked, and the thread successfully obtains the lock. If the return result is 0, it indicates that the data operation mode of the task to be processed is already locked by other threads, and it needs to wait for other threads to release the lock.
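The lock-acquisition check described above can be sketched as follows; the store here is an in-process dict that merely simulates the SETNX semantics (return 1 on first set, 0 if the key already exists), standing in for the shared lock service the embodiment would actually use:

```python
# Simulated SETNX: set the key only if it is absent, reporting 1 or 0.
lock_store = {}

def setnx(key: str, value: str) -> int:
    """Return 1 if the lock was acquired (key absent), else 0, like Redis SETNX."""
    if key in lock_store:
        return 0
    lock_store[key] = value
    return 1

def release(key: str) -> None:
    """Release the lock so waiting threads can acquire it."""
    lock_store.pop(key, None)

assert setnx("lock:code-001", "thread-1") == 1  # first caller gets the lock
assert setnx("lock:code-001", "thread-2") == 0  # second caller must wait
release("lock:code-001")
assert setnx("lock:code-001", "thread-2") == 1  # acquirable again after release
```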
In an embodiment of the present application, after the node device executes the task to be processed in the thread, it needs to release the lock of the data operation mode of the task to be processed in the thread in time.
In an embodiment of the present application, when a node device executes a task to be processed in a thread, an error during execution may leave the current resource locked, so that other threads cannot obtain the lock. To solve this problem, a preset duration is set for each thread to ensure that the lock held by each thread is automatically released when the preset duration elapses, preventing other threads from waiting indefinitely. That is, when a thread of the node device times out while processing a task, the node device releases the lock on the data operation mode of the task to be processed in that thread.
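A sketch of the automatic release by preset duration, assuming a hypothetical try_lock helper and an in-process lock store; explicit timestamps are passed in only to keep the example deterministic:

```python
# Lock with a preset duration: an expired entry counts as released, so a
# crashed or timed-out holder cannot block other threads forever.
import time

lock_store = {}   # key -> (owner, expiry timestamp)
LOCK_TTL = 0.05   # preset duration; tiny here only to keep the example fast

def try_lock(key: str, owner: str, ttl: float = LOCK_TTL, now: float = None) -> bool:
    """Acquire the lock if it is absent or its preset duration has elapsed."""
    now = time.monotonic() if now is None else now
    entry = lock_store.get(key)
    if entry is None or entry[1] <= now:  # absent or expired -> acquire
        lock_store[key] = (owner, now + ttl)
        return True
    return False

assert try_lock("lock:code-001", "thread-1", now=0.0)       # acquired
assert not try_lock("lock:code-001", "thread-2", now=0.01)  # still held
assert try_lock("lock:code-001", "thread-2", now=0.1)       # TTL elapsed: auto-released
```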
In one embodiment of the present application, executing the task to be processed in the thread includes: acquiring, from a database server, a historical data processing result corresponding to the data operation mode of the task to be processed; and updating the historical data processing result according to the historical data processing result and the operation data of the task to be processed.
It should be noted that one or more database servers may be provided in this embodiment, which is not limited in any way herein. When there are a plurality of database servers, they are interconnected to realize data sharing.
As can be seen from the above description, after the data processing is completed, each thread of each node device in the data processing system updates the data processing result to the database server, so as to ensure the consistency of the data processing.
Taking the application scenario shown in fig. 2 as an example, assume that node device 1 acquires user A's purchase transaction data for fund product 1 from the message server, where the investment portfolio number corresponding to the transaction data is code-001. Meanwhile, node device 2 acquires user B's purchase transaction data for fund product 2 from the message server, and the investment portfolio number corresponding to that transaction data is also code-001. Although the investment portfolio numbers of the two transactions are the same, node device 1 and node device 2 can process the data simultaneously, because the transaction data reside on different node devices and belong to different users.
Illustratively, assume that node device 1 acquires user A's purchase transaction data for fund product 1 from the message server, with the corresponding investment portfolio number code-001. At the same time, node device 2 acquires user A's purchase transaction data for stock code 600xx2 from the message server, and the investment portfolio number corresponding to that transaction data is also code-001. Since user A's operation on fund product 1 occurred earlier than the operation on stock code 600xx2, node device 1 executes a first command in its thread to request a lock on investment portfolio number code-001 and obtains a returned result of 1; node device 1 then processes the transaction data for investment portfolio number code-001 in its thread and, after processing is completed, updates the transaction information of that investment portfolio number in the database. While node device 1 is executing the task to be processed, node device 2 executes the first command in its own thread and obtains a returned result of 0, so it must wait for node device 1 to release the lock. When node device 2 observes that node device 1 has released the lock after finishing execution, it acquires the latest transaction information of investment portfolio number code-001 from the database, and then further updates the transaction information of investment portfolio number code-001 in the database based on that latest transaction information and the transaction data of user A's purchase of stock 600xx2.
In one embodiment of the present application, updates to the trading information include updates to the position, the market value, the total scale, and the net value. The new position is the old position plus the transaction quantity of this transaction; the new market value is the old market value plus the transaction quantity multiplied by the transaction price of this transaction; the total scale is the initial scale plus the profit caused by fluctuations in the transaction price; and the net value is the total scale divided by the initial scale.
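The update rules above can be checked with a small worked example; the helper names and numbers are illustrative, not taken from the embodiment:

```python
# Worked example of the trading-information update rules described above.
def update_position(old_position: float, trade_qty: float) -> float:
    # new position = old position + transaction quantity
    return old_position + trade_qty

def update_market_value(old_value: float, trade_qty: float, trade_price: float) -> float:
    # new market value = old market value + quantity * price
    return old_value + trade_qty * trade_price

def net_value(total_scale: float, initial_scale: float) -> float:
    # net value = total scale / initial scale
    return total_scale / initial_scale

assert update_position(100, 50) == 150
assert update_market_value(1000.0, 50, 10.0) == 1500.0
assert net_value(1050.0, 1000.0) == 1.05
```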
According to the data processing method provided by this embodiment of the application, each node device in the data processing system executes data processing tasks based on distributed locks. When a thread processes a task, it attempts to acquire the distributed lock on the data operation mode of the task to be processed by executing the first command. If the lock is acquired successfully, the thread executes the corresponding data processing task and updates the resulting data processing result to the database server, so that other threads obtain the latest data processing result when executing subsequent tasks. If acquiring the lock fails, the thread must wait for the lock-holding thread to finish processing and release the lock. In this way, when multiple node devices concurrently process the same data operation mode, execution remains orderly, ensuring the consistency of the data processing results across the multiple node devices.
In the embodiments of the present application, the data processing apparatus may be divided into functional modules according to the above method embodiments; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module can be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present application is schematic and is only one kind of logical function division; other division manners are possible in actual implementations. The following description takes as an example the case where each functional module is divided according to its corresponding function.
Fig. 5 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. As shown in fig. 5, the data processing apparatus 400 provided in this embodiment includes:
an obtaining module 401, configured to obtain multiple to-be-processed tasks from a message queue of a message server, where each to-be-processed task includes operation data of a user for a target product and a data operation mode corresponding to the operation data;
a processing module 402, configured to allocate a thread to each to-be-processed task according to the data operation mode;
and executing the plurality of tasks to be processed in parallel through a plurality of threads to obtain a data processing result of each task to be processed.
In an embodiment of the application, the processing module 402 is specifically configured to: performing hash transformation on the data operation mode in each task to be processed to obtain a hash value corresponding to the data operation mode;
and determining the thread number corresponding to each task to be processed according to the hash value and the thread number preset by the node equipment.
In an embodiment of the application, the processing module 402 is specifically configured to obtain, by taking a modulus of the thread number through the hash value, a thread number corresponding to each to-be-processed task.
In an embodiment of the application, if there are at least two tasks to be processed in the plurality of tasks to be processed that have the same data operation mode corresponding to the operation data, the processing module 402 is further configured to:
and sequentially processing the at least two tasks to be processed in the threads corresponding to the same data operation mode according to the queue sequence of the at least two tasks to be processed in the message queue.
In an embodiment of the application, for each of the threads, before the processing module 402 processes each of the to-be-processed tasks in the thread, the processing module is further configured to:
executing a first command in the thread, wherein the first command is used for requesting to lock a data operation mode of the task to be processed;
and determining whether to execute the task to be processed in the thread according to a return result of the first command.
In an embodiment of the application, the processing module 402 is specifically configured to:
if the return result is a first value, executing the task to be processed in the thread; or
And if the return result is the second value, waiting for other threads to release the lock of the data operation mode.
In an embodiment of the application, the processing module 402 is further configured to, after the to-be-processed task is processed in the thread, or when the thread executes the to-be-processed task for a time-out:
and releasing the lock of the data operation mode of the task to be processed in the thread.
In an embodiment of the application, the obtaining module 401 is further configured to obtain a historical data processing result corresponding to a data operation mode of the task to be processed from a database server;
the processing module 402 is further configured to update the historical processing result according to the historical processing result and the operation data of the task to be processed.
The data processing apparatus provided in this embodiment may execute the technical solutions of any of the above method embodiments, and the implementation principles and technical effects are similar, which are not described herein again.
Fig. 6 is a hardware structure diagram of an electronic device according to an embodiment of the present application, and as shown in fig. 6, an electronic device 500 according to the embodiment includes:
a memory 501;
a processor 502; and
a computer program;
the computer program is stored in the memory 501 and configured to be executed by the processor 502 to implement the technical solution of any one of the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
Optionally, the memory 501 may be separate or integrated with the processor 502. When the memory 501 is a separate device from the processor 502, the electronic device 500 further comprises: a bus 503 for connecting the memory 501 and the processor 502.
The embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by the processor 502 to implement the technical solution of any one of the foregoing method embodiments.
The present application further provides a computer program product, which includes a computer program, and the computer program is executed by the processor 502 to implement the technical solution of any of the foregoing method embodiments.
An embodiment of the present application further provides a chip, including: a memory, a processor and a computer program, the computer program being stored in the memory, the processor running the computer program to perform the solution of any of the method embodiments described above.
It should be understood that the Processor may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present invention may be embodied directly in a hardware processor, or in a combination of the hardware and software modules within the processor.
The memory may comprise a high-speed RAM memory and may further comprise a non-volatile memory (NVM), such as at least one disk memory; it may also be a USB disk, a removable hard disk, a read-only memory, a magnetic disk or an optical disk, etc.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). Alternatively, the processor and the storage medium may reside as discrete components in an electronic device.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the present disclosure as defined by the appended claims.

Claims (12)

1. A data processing method applied to a node device in a data processing system, the method comprising:
acquiring a plurality of to-be-processed tasks from a message queue of a message server, wherein each to-be-processed task comprises operation data of a user for a target product and a data operation mode corresponding to the operation data;
distributing a thread for each task to be processed according to the data operation mode;
and executing the plurality of tasks to be processed in parallel through a plurality of threads to obtain a data processing result of each task to be processed.
2. The method according to claim 1, wherein said assigning a thread to each of said tasks to be processed according to said data operation mode comprises:
performing hash transformation on the data operation mode in each task to be processed to obtain a hash value corresponding to the data operation mode;
and determining the thread number corresponding to each task to be processed according to the hash value and the thread number preset by the node equipment.
3. The method according to claim 2, wherein the determining, according to the hash value and a thread number preset by the node device, a thread number corresponding to each task to be processed includes:
and obtaining the thread number corresponding to each task to be processed by taking the modulus of the thread number through the hash value.
4. The method according to any one of claims 1 to 3, wherein if the data operation modes corresponding to the operation data of at least two of the plurality of tasks to be processed are the same, the method further comprises:
and sequentially processing the at least two tasks to be processed in the threads corresponding to the same data operation mode according to the queue sequence of the at least two tasks to be processed in the message queue.
5. The method of any of claims 1 to 3, wherein for each of the threads, prior to processing each of the pending tasks in the thread, the method further comprises:
executing a first command in the thread, wherein the first command is used for requesting to lock a data operation mode of the task to be processed;
and determining whether to execute the task to be processed in the thread according to a return result of the first command.
6. The method of claim 5, wherein the determining whether to execute the pending task in the thread according to the returned result of the first command comprises:
if the return result is a first value, executing the task to be processed in the thread; or
And if the return result is the second value, waiting for other threads to release the lock of the data operation mode.
7. The method of claim 6, wherein after the thread finishes processing the pending task, or when the thread times out executing the pending task, the method further comprises:
and releasing the lock of the data operation mode of the task to be processed in the thread.
8. The method of claim 6, wherein executing the pending task in the thread comprises:
acquiring a historical data processing result corresponding to the data operation mode of the task to be processed from a database server;
and updating the historical processing result according to the historical processing result and the operation data of the task to be processed.
9. A data processing apparatus, comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a plurality of tasks to be processed from a message queue of a message server, and each task to be processed comprises operation data of a user aiming at a target product and a data operation mode corresponding to the operation data;
the processing module is used for distributing threads to each task to be processed according to the data operation mode;
and executing the plurality of tasks to be processed in parallel through a plurality of threads to obtain a data processing result of each task to be processed.
10. An electronic device, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any one of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, which computer program is executed by a processor to implement the method according to any one of claims 1 to 8.
12. A computer program product, comprising a computer program to be executed by a processor for implementing the method according to any one of claims 1 to 8.
CN202011630611.6A 2020-12-31 2020-12-31 Data processing method, device, equipment and storage medium Pending CN114691383A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011630611.6A CN114691383A (en) 2020-12-31 2020-12-31 Data processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011630611.6A CN114691383A (en) 2020-12-31 2020-12-31 Data processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114691383A true CN114691383A (en) 2022-07-01

Family

ID=82133520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011630611.6A Pending CN114691383A (en) 2020-12-31 2020-12-31 Data processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114691383A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115080250A (en) * 2022-08-22 2022-09-20 深圳星云智联科技有限公司 Data processing method, device and system
CN115080250B (en) * 2022-08-22 2022-12-02 深圳星云智联科技有限公司 Data processing method, device and system
CN115168056A (en) * 2022-09-02 2022-10-11 深圳华锐分布式技术股份有限公司 Information processing method, device, equipment and medium based on resource allocation
CN115168056B (en) * 2022-09-02 2022-12-02 深圳华锐分布式技术股份有限公司 Information processing method, device, equipment and medium based on resource allocation
CN116107725A (en) * 2023-04-12 2023-05-12 中国人民解放军63921部队 Radar data processing system and method

Similar Documents

Publication Publication Date Title
US10146792B1 (en) Systems and methods for implementing a programming model for smart contracts within a decentralized computer network
US20210287215A1 (en) Resource transfer system
CN114691383A (en) Data processing method, device, equipment and storage medium
US20200005388A1 (en) Rental asset processing for blockchain
US8719131B1 (en) Allocating financial risk and reward in a multi-tenant environment
US20110137805A1 (en) Inter-cloud resource sharing within a cloud computing environment
JP2022536447A (en) Systems, methods and storage media for managing digital liquidity tokens on a distributed ledger platform
CN107392582B (en) Method and device for realizing resource transfer and method and device for realizing collection and payment
US8132177B2 (en) System and method for load-balancing in server farms
CN110275767A (en) A kind of batch data processing method and processing device
CN113095935A (en) Transaction order processing method and device, computer equipment and storage medium
US11140094B2 (en) Resource stabilization in a distributed network
CN112926964A (en) Block chain-based asset management method, electronic device and readable storage medium
JP2021047574A (en) Settlement information sharing system
JP6270469B2 (en) Stock lending balance check system and stock lending balance check program
Pradhan et al. Resource Allocation Methodologies in Cloud Computing: A Review and Analysis
WO2024011917A1 (en) Delegate model for blockchain transactions
WO2024050135A1 (en) Resource stabilization mechanisms in a distributed network
CN117593130A (en) Resource management method and device, computer equipment, medium and product
CN116977056A (en) User borrowing party matching method, system, computer equipment and storage medium
CN115630917A (en) Service processing method, electronic device and computer storage medium
CN116467050A (en) Transaction processing method, device, equipment, storage medium and system
CN114240674A (en) Periodic splitting method and device
KR20210067174A (en) cryptodollar currency transaction system and method performing thereof
JP2002318912A (en) Information entry system, information entry method, information entry device and program, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination