CN116755637B - Transaction data storage method, device, equipment and medium - Google Patents

Transaction data storage method, device, equipment and medium

Info

Publication number
CN116755637B
CN116755637B (application CN202311035280.5A)
Authority
CN
China
Prior art keywords
message
messages
file
asynchronous
writing
Prior art date
Legal status
Active
Application number
CN202311035280.5A
Other languages
Chinese (zh)
Other versions
CN116755637A (en)
Inventor
雷海龙
何磊
黄海伦
刘威
Current Assignee
Shenzhen Huarui Distributed Technology Co ltd
Original Assignee
Shenzhen Huarui Distributed Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Huarui Distributed Technology Co ltd filed Critical Shenzhen Huarui Distributed Technology Co ltd
Priority to CN202311035280.5A priority Critical patent/CN116755637B/en
Publication of CN116755637A publication Critical patent/CN116755637A/en
Application granted granted Critical
Publication of CN116755637B publication Critical patent/CN116755637B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • G06F3/0619Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659Command handling arrangements, e.g. command buffers, queues, command scheduling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • G06F3/0674Disk device
    • G06F3/0676Magnetic disk device
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to the technical field of data processing, and provides a transaction data storage method, device, equipment and medium. Binary transaction data is written synchronously into a memory area based on a memory file mapping technique, which effectively reduces I/O (input/output) overhead in the process of persisting trade orders to the database; storage space is pre-allocated, trading space for time and optimizing message transmission performance. An asynchronous thread is started at a preset time interval to write the data in the memory area into a file area on disk; since data written to disk is not lost, message pulling and retransmission from any position are supported, avoiding the data loss that a fault at the instant of a synchronous disk flush would otherwise cause. Parallel processing by asynchronous threads increases data write throughput and thereby improves the efficiency of persisting transaction messages.

Description

Transaction data storage method, device, equipment and medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a transaction data storage method, device, equipment, and medium.
Background
For a securities trading system, the volume of daily trade order messages is large, and persisting this data to a database places high requirements on both performance and data accuracy.
However, transaction messages are conventionally stored in a serial, sequential manner, and the next transaction can be processed only after the previous write completes. This approach has low throughput and high latency, and cannot meet the system's performance requirements when the volume of trade order messages is large.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a transaction data storage method, apparatus, device, and medium that solve the problem of low efficiency in persisting transaction messages.
A transaction data storage method, the transaction data storage method comprising:
message subscription processing is carried out through an OnMessage interface;
when binary transaction data reported through the OnMessage interface is received, a reserved memory area is acquired;
synchronously writing the binary transaction data into the memory area based on a memory file mapping technology;
starting an asynchronous thread every preset time interval to write the data in the memory area into a file area of a disk; wherein the file area comprises an index file and a data file;
reading a message from the data file based on the index file and writing the message into a binary message list;
passing the messages in the binary message list into a callback function for parsing by calling a thread in an asynchronous thread pool, to obtain a message file carrying a message sequence number;
writing the message file into a persistent message annular queue according to the message sequence number;
reading messages from the persistent message annular queue according to batches and submitting the messages to an asynchronous distribution thread pool;
and executing persistence processing on the messages read from the persistence message annular queue according to the message type based on the asynchronous distribution thread pool.
According to a preferred embodiment of the present invention, before the obtaining the reserved memory area, the method further includes:
pre-allocating a storage space of a preset size in memory based on a file writing class;
and initializing the storage space to obtain the memory area.
According to a preferred embodiment of the present invention, the reading of the message from the data file and writing to the binary message list based on the index file includes:
reading a message position and a message length from the index file;
and reading a message from the data file based on the message position and the message length and writing the message into the binary message list.
According to a preferred embodiment of the present invention, the reading and submitting messages from the persistent message ring queue by batch to an asynchronous distribution thread pool includes:
Obtaining a pre-configured message processing amount of each batch;
reading messages from the persistent message annular queue according to the message processing amount of each batch and submitting the messages to the asynchronous distribution thread pool;
the asynchronous distributing thread pool comprises a plurality of asynchronous distributing threads, and each asynchronous distributing thread is used for distributing the messages read from the persistent message annular queue.
According to a preferred embodiment of the present invention, the performing, based on the asynchronous distribution thread pool, persistence processing on the messages read from the persistence message ring queue according to message types includes:
acquiring a message of a non-state message type from the read message to form a first message set;
determining the message attribute of each first message in the first message set, and determining the message processing amount of each asynchronous distribution thread in the asynchronous distribution thread pool;
grouping the messages in the first message set according to the message attribute of each first message and the message processing capacity of each asynchronous distribution thread to obtain at least one message group;
obtaining a preset batch write count;
batching the messages in each message group according to the batch write count to obtain a multi-batch message list;
submitting the multi-batch message list to an asynchronous database-write thread pool;
calling a service interface corresponding to each message attribute based on each thread in the asynchronous database-write thread pool, and writing each first message into a corresponding service table in a database in parallel;
wherein the non-status message type messages have no order keeping requirements.
According to a preferred embodiment of the present invention, the performing, based on the asynchronous distribution thread pool, persistence processing on the messages read from the persistence message ring queue according to message types includes:
acquiring a message of a status message type from the read message to form a second message set;
acquiring a message identifier corresponding to the status message type;
marking each second message in the second set of messages based on the message identifier;
writing each marked second message into a status message annular queue;
reading the message from the state message annular queue for persistence processing;
wherein the messages of the status message type have order preserving requirements.
According to a preferred embodiment of the present invention, the reading the message from the status message ring queue for persistence includes:
Acquiring the number of preset groups;
initializing at least one packet queue according to the packet number, and numbering each packet queue;
the message read from the state message annular queue is grouped according to service logic to obtain a grouping message list and a grouping number of each group;
matching the packet message list with the at least one packet queue according to preconfigured calculation logic and the packet number of each group, and writing the message list under the corresponding packet into the corresponding packet queue;
in each grouping queue, sequentially calling corresponding service interfaces according to the processing sequence of the messages in the preset group, and serially writing each message into a corresponding service table in the database;
wherein message dropping is performed in parallel between each packet queue.
A transaction data storage device, the transaction data storage device comprising:
the processing unit is used for carrying out message subscription processing through an OnMessage interface;
the acquisition unit is used for acquiring a reserved memory area when receiving binary transaction data reported through the OnMessage interface;
the writing unit is used for synchronously writing the binary transaction data into the memory area based on a memory file mapping technology;
The writing unit is further used for starting an asynchronous thread at preset time intervals to write the data in the memory area into a file area of a disk; wherein the file area comprises an index file and a data file;
the writing unit is further used for reading messages from the data file based on the index file and writing the messages into a binary message list;
the analysis unit is used for analyzing the message by calling a thread in the asynchronous thread pool to transfer the message in the binary message list into a callback function so as to obtain a message file with a message sequence number;
the writing unit is further used for writing the message file into a persistent message annular queue according to the message sequence number;
the reading unit is used for reading the messages from the persistent message annular queue according to batches and submitting the messages to an asynchronous distribution thread pool;
and the persistence unit is used for executing persistence processing on the messages read from the persistence message annular queue according to the message type based on the asynchronous distribution thread pool.
A computer device, the computer device comprising:
a memory storing at least one instruction; and
And the processor executes the instructions stored in the memory to realize the transaction data storage method.
A computer-readable storage medium having stored therein at least one instruction for execution by a processor in a computer device to implement the transaction data storage method.
According to the above technical solution, binary transaction data can be written synchronously into the memory area based on the memory file mapping technique, effectively reducing I/O (input/output) overhead in the process of persisting trade orders to the database; storage space is pre-allocated, trading space for time and optimizing message transmission performance. An asynchronous thread is started at a preset time interval to write the data in the memory area into the file area of the disk; since data written to disk is not lost, message pulling and retransmission from any position are supported, avoiding data loss caused by a fault at the instant of a synchronous disk flush. Parallel processing by asynchronous threads increases data write throughput and thereby improves the efficiency of persisting transaction messages.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of the transaction data storage method of the present invention.
FIG. 2 is a functional block diagram of a preferred embodiment of the transaction data storage device of the present invention.
FIG. 3 is a schematic diagram of a computer device implementing a preferred embodiment of the transaction data storage method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a flow chart of a preferred embodiment of the transaction data storage method of the present invention. The order of the steps in the flowchart may be changed and some steps may be omitted according to various needs.
The transaction data storage method is applied to one or more computer devices, wherein the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA), a digital signal processor (Digital Signal Processor, DSP), an embedded device, and the like.
The computer device may be any electronic product that can interact with a user in a human-computer manner, such as a personal computer, tablet computer, smart phone, personal digital assistant (Personal Digital Assistant, PDA), game console, interactive internet protocol television (Internet Protocol Television, IPTV), smart wearable device, etc.
The computer device may also include a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group composed of a plurality of network servers, or a cloud, based on cloud computing (Cloud Computing), composed of a large number of hosts or network servers.
The server may be an independent server, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms.
Among these, artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer or a digital computer-controlled machine to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The network in which the computer device is located includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a virtual private network (Virtual Private Network, VPN), and the like.
S10, carrying out message subscription processing through an OnMessage interface.
The present embodiment may be implemented on the basis of a pre-configured data persistence component used for persisting transaction messages.
All data transmission involved in this embodiment may be carried over a high-speed message bus. The bus supports transmission of binary messages; because binary messages occupy little storage space, transmission is fast and the sending and receiving of messages is more convenient and efficient.
Specifically, the data persistence component may be integrated with the high-speed message bus; after the component receives transaction data pushed over the bus, it performs message subscription processing through the OnMessage interface.
Wherein the transaction data may include, but is not limited to: a transaction delegation request (such as an investor delegation declaration), a transaction return message returned by the exchange, and the like.
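As a minimal illustration of this subscription step, the sketch below assumes a Java-style message-bus API; the bus interface, topic name and class names are illustrative and are not taken from the patent.

```java
import java.util.function.Consumer;

interface MessageBus {
    // Subscribe a binary-message callback; the bus invokes it for every pushed message.
    void subscribe(String topic, Consumer<byte[]> onMessage);
}

final class DataPersistenceComponent {
    void start(MessageBus bus) {
        // OnMessage-style subscription: each reported binary trade message enters
        // the persistence pipeline of S11-S18.
        bus.subscribe("trade.orders", this::onMessage);
    }

    private void onMessage(byte[] binaryMessage) {
        // S11/S12: hand the raw bytes to the pre-allocated, memory-mapped region.
    }
}
```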
S11, when binary transaction data reported through the OnMessage interface is received, a reserved memory area is acquired.
Specifically, the binary transaction data can be processed by a built-in data writer (dataWriter) in the data persistence component: after the data writer receives a message, it stores the message into a file so that the message can conveniently be stored and used during the subsequent persistence to the database.
In this embodiment, before the obtaining the reserved memory area, the method further includes:
pre-allocating a storage space of a preset size in memory based on a file writing class;
and initializing the storage space to obtain the memory area.
Wherein, the file writing class can be configured in advance, for example: the file writing class may be denoted as the MessageFileWrite class.
Wherein the preset size may be configured according to system performance.
Through this embodiment, a memory area of a certain size is reserved in advance, trading space for time and optimizing the performance of message transmission.
S12, synchronously writing the binary transaction data into the memory area based on a memory file mapping technology.
In the above embodiment, the memory file mapping technique effectively reduces the I/O (Input/Output) overhead in the process of persisting orders to the database and significantly improves storage performance.
After the binary transaction data is received, it is filled into the memory area. Receiving a message therefore operates entirely in memory and involves no hard-disk I/O, which guarantees fast message reception, avoids upstream back pressure caused by slow writing, and improves performance.
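A minimal sketch of the pre-allocation and synchronous memory write of S11-S12, assuming Java NIO memory-mapped files stand in for the memory file mapping technique; the class name echoes the MessageFileWrite class mentioned above, but the record layout and sizes are illustrative assumptions.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

final class MessageFileWriter {
    private final MappedByteBuffer region;

    // Pre-allocate a fixed-size, file-backed region and map it into memory.
    // presetBytes must not exceed Integer.MAX_VALUE for a single mapping.
    MessageFileWriter(String path, long presetBytes) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw")) {
            raf.setLength(presetBytes);   // reserve the space up front (space for time)
            region = raf.getChannel()
                        .map(FileChannel.MapMode.READ_WRITE, 0, presetBytes);
        }
    }

    // Synchronous write: copying into the mapped region touches memory only,
    // so the caller can accept the next message immediately.
    synchronized void write(byte[] binaryMessage) {
        region.putInt(binaryMessage.length);   // length prefix for later reads
        region.put(binaryMessage);
    }
}
```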
S13, starting an asynchronous thread at intervals of preset time to write the data in the memory area into a file area of a disk; the file area comprises an index file and a data file.
When the transaction messages are stored into the file area, single-threaded memory-mapped file writing can be adopted; because a single thread writes serially, the messages in the file remain in order.
The preset time interval can be configured according to actual data storage requirements.
In the above embodiment, periodically flushing to disk ensures that the data in memory is synchronized into the files on disk, thereby ensuring that the messages are persisted to the files.
Further, after writing the data in the memory area into the file area of the disk, a message writing log is recorded, where the log may include the number of messages, the time of writing the messages, and the like.
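A hedged sketch of the periodic asynchronous flush of S13, assuming Java's MappedByteBuffer.force() as the disk-flush primitive and a scheduled executor as the asynchronous thread; the write-log fields shown are illustrative.

```java
import java.nio.MappedByteBuffer;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

final class PeriodicFlusher {
    private final ScheduledExecutorService flusher =
            Executors.newSingleThreadScheduledExecutor();
    private final AtomicLong written = new AtomicLong();

    void start(MappedByteBuffer dataRegion, MappedByteBuffer indexRegion, long intervalMs) {
        flusher.scheduleAtFixedRate(() -> {
            dataRegion.force();    // persist message payloads (data file)
            indexRegion.force();   // persist (position, length) entries (index file)
            // Message write log: number of messages flushed and the flush time.
            System.out.printf("flushed %d messages at %d%n",
                    written.get(), System.currentTimeMillis());
        }, intervalMs, intervalMs, TimeUnit.MILLISECONDS);
    }

    void onMessageWritten() { written.incrementAndGet(); }
}
```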
In the process of receiving binary transaction data and writing it to files, memory-mapped file writing is applied directly to the binary transaction data. Because a memory-to-file mapping is used and the message content is binary, the next message can be received immediately after the write to the mapped memory succeeds, without waiting. This effectively improves message-receiving capability, adapts well to high-throughput scenarios with large numbers of pushed messages, and avoids I/O latency along the message link, as well as problems such as message backlog caused by slow writing and the reduced processing capability of the sender caused by back pressure from a slow receiver. In addition, the final record of data persistence is the write into the database, so message pulling and retransmission from any position can be supported, and data loss caused by a fault at the instant of a synchronous disk flush is avoided.
S14, reading the information from the data file based on the index file and writing the information into a binary information list.
In this embodiment, the reading the message from the data file based on the index file and writing to the binary message list includes:
Reading a message position and a message length from the index file;
and reading a message from the data file based on the message position and the message length and writing the message into the binary message list.
In the above embodiment, the index file corresponds to a pointer, and corresponding data can be accurately read from the data file through the index file.
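The index-driven read of S14 might look like the following sketch; the record layout of the index region ([position, length] pairs) is an assumption, since the patent does not fix an on-disk format.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

final class IndexedReader {
    // Index layout assumed here: repeated records of [long position][int length].
    static List<byte[]> readBatch(ByteBuffer index, ByteBuffer data, int maxMessages) {
        List<byte[]> binaryMessageList = new ArrayList<>();
        for (int i = 0; i < maxMessages && index.remaining() >= Long.BYTES + Integer.BYTES; i++) {
            long position = index.getLong();
            int length = index.getInt();
            byte[] message = new byte[length];
            ByteBuffer slice = data.duplicate();   // do not disturb the writer's cursor
            slice.position((int) position);
            slice.get(message);
            binaryMessageList.add(message);        // append to the binary message list
        }
        return binaryMessageList;
    }
}
```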
S15, passing the messages in the binary message list into a callback function for parsing by calling a thread in an asynchronous thread pool, to obtain a message file carrying a message sequence number.
In the above embodiment, after a message is parsed, a message file with a message sequence number is obtained, so the order of the messages can be preserved and disorder or loss during subsequent transmission is avoided.
S16, writing the message file into a persistent message annular queue according to the message sequence number.
The persistent message ring queue is a ring queue with high read and write speed, and it can preserve message order while eliminating waiting during execution. For example: after a batch of messages has been read serially by the single file-reading thread, a sequence number is pre-assigned to each message; parsing is then done by multiple threads in parallel, and each parsed message is placed into the persistent message ring queue at the position given by its pre-assigned sequence number. Although parsing is multi-threaded, the pre-assigned sequence numbers point to fixed element positions in the ring queue, so the messages in the persistent message ring queue remain ordered.
Specifically, the length and position of each binary message are recorded in the index file. To ensure that the binary message files can be read in the order given by the index file, ordered message sequence numbers are pre-configured and the read message files are ordered according to those sequence numbers. The messages in the binary message list are then handed to an asynchronous thread pool for parallel parsing; the pre-assigned ring-queue mechanism makes asynchronous parsing fast while preserving message order, achieving both rapid parsing and order preservation.
The position of the corresponding element in the persistent message ring queue may be determined by a remainder (modulo) operation on the sequence number.
This embodiment adopts asynchronous multi-threaded parsing and a ring queue that supports parallel writing while preserving order, so message parsing is extremely fast.
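A simplified sketch of S15-S16 in Java follows; it only shows the slot-assignment rule (sequence number modulo capacity) and the parallel parse callback, and omits the consumer hand-off and occupancy checks a production ring queue would need. All names are illustrative.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReferenceArray;
import java.util.function.Function;

final class PersistentRingQueue<T> {
    private final AtomicReferenceArray<T> slots;
    private final int capacity;

    PersistentRingQueue(int capacity) {
        this.capacity = capacity;
        this.slots = new AtomicReferenceArray<>(capacity);
    }

    // Remainder algorithm: the pre-assigned sequence number maps to a fixed slot,
    // so parallel writers cannot reorder messages relative to one another.
    void put(long sequence, T parsed) {
        slots.set((int) (sequence % capacity), parsed);
    }

    T get(long sequence) { return slots.get((int) (sequence % capacity)); }
}

final class ParallelParser {
    private final ExecutorService parsePool = Executors.newFixedThreadPool(4);

    <T> void submit(long preAssignedSeq, byte[] raw,
                    Function<byte[], T> callback, PersistentRingQueue<T> queue) {
        // Parse in the asynchronous thread pool, then place the result at its slot.
        parsePool.submit(() -> queue.put(preAssignedSeq, callback.apply(raw)));
    }
}
```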
S17, reading the messages from the persistent message annular queue according to batches and submitting the messages to an asynchronous distribution thread pool.
In this embodiment, the reading messages from the persistent message ring queue by batch and submitting them to an asynchronous distribution thread pool includes:
obtaining a pre-configured message processing amount of each batch;
reading messages from the persistent message annular queue according to the message processing amount of each batch and submitting the messages to the asynchronous distribution thread pool;
The asynchronous distributing thread pool comprises a plurality of asynchronous distributing threads, and each asynchronous distributing thread is used for distributing the messages read from the persistent message annular queue.
In the above embodiment, the messages are read in batches, so that the throughput performance of the data can be improved.
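A sketch of the batched hand-off of S17, assuming the parsed messages are exposed through a Java BlockingQueue; the batch size and pool size are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class BatchDistributor<T> {
    private final ExecutorService distributionPool = Executors.newFixedThreadPool(4);
    private final int batchSize;   // pre-configured message processing amount per batch

    BatchDistributor(int batchSize) { this.batchSize = batchSize; }

    void pump(BlockingQueue<T> persistentQueue) throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            List<T> batch = new ArrayList<>(batchSize);
            batch.add(persistentQueue.take());             // block for at least one message
            persistentQueue.drainTo(batch, batchSize - 1); // fill the rest of the batch
            distributionPool.submit(() -> distribute(batch));
        }
    }

    private void distribute(List<T> batch) {
        // Route each message to the non-status or status path of S18.
    }
}
```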
And S18, executing persistence processing on the messages read from the persistence message annular queue according to the message type based on the asynchronous distribution thread pool.
In this embodiment, the performing, based on the asynchronous distribution thread pool, persistence processing on the message read from the persistent message ring queue according to the message type includes:
acquiring a message of a non-state message type from the read message to form a first message set;
determining the message attribute of each first message in the first message set, and determining the message processing amount of each asynchronous distribution thread in the asynchronous distribution thread pool;
grouping the messages in the first message set according to the message attribute of each first message and the message processing capacity of each asynchronous distribution thread to obtain at least one message group;
obtaining a preset batch write count;
batching the messages in each message group according to the batch write count to obtain a multi-batch message list;
submitting the multi-batch message list to an asynchronous database-write thread pool;
calling a service interface corresponding to each message attribute based on each thread in the asynchronous database-write thread pool, and writing each first message into a corresponding service table in a database in parallel;
wherein the non-status message type messages have no order keeping requirements.
The message attributes may include, but are not limited to: declaration class, response class, report class, and the like.
And calling a service interface corresponding to each message attribute by adopting an insert method, and writing each first message into a corresponding service table in the database in parallel.
The service table may include an order table, a return table, and the like.
Wherein, messages without order-preserving requirements and messages with order-preserving requirements can be distinguished according to a specific identification field in the message body.
In this embodiment, the parsed messages are distributed, grouped and batched, and messages without order-preserving requirements are written to the database rapidly in a multi-threaded, concurrent, asynchronous manner, improving write capacity.
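The non-status path can be sketched as follows, assuming JDBC batch inserts stand in for the "insert" service interfaces; the record type, table-name mapping and pool sizes are illustrative assumptions rather than the patent's implementation.

```java
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

record TradeMessage(String attribute, byte[] body) {}

final class NonStatusPersister {
    private final ExecutorService dbWritePool = Executors.newFixedThreadPool(8);
    private final int batchWriteCount;
    private final DataSource dataSource;

    NonStatusPersister(int batchWriteCount, DataSource dataSource) {
        this.batchWriteCount = batchWriteCount;
        this.dataSource = dataSource;
    }

    void persist(List<TradeMessage> firstMessageSet) {
        // Group by message attribute, then split each group into write batches.
        Map<String, List<TradeMessage>> groups = firstMessageSet.stream()
                .collect(Collectors.groupingBy(TradeMessage::attribute));
        groups.forEach((attribute, group) -> {
            for (int from = 0; from < group.size(); from += batchWriteCount) {
                List<TradeMessage> batch =
                        group.subList(from, Math.min(from + batchWriteCount, group.size()));
                dbWritePool.submit(() -> insertBatch(attribute, batch));  // parallel DB writes
            }
        });
    }

    private void insertBatch(String attribute, List<TradeMessage> batch) {
        String sql = "INSERT INTO t_" + attribute + " (payload) VALUES (?)";  // hypothetical table
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            for (TradeMessage m : batch) {
                ps.setBytes(1, m.body());
                ps.addBatch();
            }
            ps.executeBatch();   // one round trip per batch, per service table
        } catch (SQLException e) {
            throw new RuntimeException(e);   // see the retry sketch later in this description
        }
    }
}
```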
In this embodiment, the performing, based on the asynchronous distribution thread pool, persistence processing on the message read from the persistent message ring queue according to the message type includes:
acquiring a message of a status message type from the read message to form a second message set;
acquiring a message identifier corresponding to the status message type;
marking each second message in the second set of messages based on the message identifier;
writing each marked second message into a status message annular queue;
reading the message from the state message annular queue for persistence processing;
wherein the messages of the status message type have order preserving requirements.
In the above embodiment, messages with order-preserving requirements are placed into another ring queue, namely the status message ring queue, which is used for order-preserving message processing.
Specifically, the reading the message from the status message ring queue for persistence processing includes:
acquiring the number of preset groups;
initializing at least one packet queue according to the packet number, and numbering each packet queue;
the message read from the state message annular queue is grouped according to service logic to obtain a grouping message list and a grouping number of each group;
Matching the packet message list with the at least one packet queue according to preconfigured calculation logic and the packet number of each group, and writing the message list under the corresponding packet into the corresponding packet queue;
in each grouping queue, sequentially calling corresponding service interfaces according to the processing sequence of the messages in the preset group, and serially writing each message into a corresponding service table in the database;
wherein message dropping is performed in parallel between each packet queue.
Wherein the calculation logic may be as follows: obtain the number of packet queues and take the client contract number modulo that number. For example, when the client contract number is 50001 and there are 4 packet queues, 50001 modulo 4 equals 1, so the message corresponding to 50001 is written into the packet queue numbered 1.
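A sketch of this grouping rule, assuming the modulo-on-contract-number logic from the example above; the queue payloads and worker structure are illustrative, and a real router would carry full messages rather than identifier pairs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

final class StatusMessageRouter {
    private final List<BlockingQueue<long[]>> packetQueues = new ArrayList<>();
    private final ExecutorService workers;

    StatusMessageRouter(int packetCount) {
        workers = Executors.newFixedThreadPool(packetCount);
        for (int i = 0; i < packetCount; i++) {
            BlockingQueue<long[]> q = new LinkedBlockingQueue<>();
            packetQueues.add(q);
            final int queueNo = i;
            workers.submit(() -> drainSerially(queueNo, q));   // one serial writer per queue
        }
    }

    void route(long clientContractNo, long messageId) {
        int queueNo = (int) (clientContractNo % packetQueues.size());  // 50001 % 4 == 1
        packetQueues.get(queueNo).add(new long[] {clientContractNo, messageId});
    }

    private void drainSerially(int queueNo, BlockingQueue<long[]> q) {
        try {
            while (true) {
                long[] msg = q.take();
                // Call the business service interface and write to the service table in
                // arrival order; ordering within this packet queue is therefore preserved,
                // while the packet queues themselves run in parallel.
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```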
In the above embodiment, status messages are grouped according to business, and multi-threaded business processing remains ordered, which increases write speed while ensuring the ordering of messages of the same kind.
It should be noted that conventional order-preserving messages are mainly written serially, which is slow and cannot guarantee high throughput. In this embodiment, the packet queues are initialized and numbered in advance so that multiple packet queues work in parallel, improving message throughput; meanwhile, the business layer implements the abstract grouping interface to classify the messages that need order preservation, so data writes within the same packet are serial and ordered, guaranteeing data accuracy.
Further, after the writing of the whole batch of state messages is successful, the positions of the processed messages can be recorded, so that the subsequent fault recovery and message processing are facilitated.
In this embodiment, message grouping and batching are carried out in the message persistence stage, and an asynchronous message distribution thread pool and an asynchronous database-write thread pool are adopted respectively, so that message writing performance is improved through concurrent writes.
In this embodiment, file writing, file reading, file parsing and message database writing are each handled by different threads; once the stages are decoupled they do not affect one another, which reduces service blocking and improves processing performance.
In the persistence process, the asynchronous message distribution thread pool and the asynchronous database-write thread pool are used. When persisting messages, the messages written in parallel may be out of order, but out-of-order writing is applied only to messages that the business allows to be inserted in parallel, so the data written to the database remains accurate. For status messages, message sequence numbers are allocated in advance so that the messages in the ring queue stay ordered, and the business-defined grouping interface returns a fixed packet number, so messages of the same kind are always routed to the same fixed packet queue and written to the database in order within that group. This achieves ordered storage of transaction messages and, in turn, high-performance storage of trade order data, ensuring that data is never duplicated, disordered, or lost.
In this implementation, the method further includes:
and capturing the access abnormality information of the database and automatically performing retry processing.
Through the embodiment, the reliability of the database falling data can be ensured.
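A minimal sketch of the capture-and-retry behaviour, assuming bounded retries with a fixed back-off; the attempt count and delay are illustrative, as the text only states that database access exceptions are captured and automatically retried.

```java
import java.sql.SQLException;

final class RetryingWriter {
    interface DbWrite { void run() throws SQLException; }

    static void writeWithRetry(DbWrite write, int maxAttempts, long backoffMs)
            throws SQLException, InterruptedException {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                write.run();
                return;                     // success, no retry needed
            } catch (SQLException e) {
                last = e;                   // capture the database access exception
                Thread.sleep(backoffMs);    // then automatically retry
            }
        }
        throw last;                         // give up after maxAttempts
    }
}
```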
In this implementation, when a system fault is detected, the breakpoint can be accurately located from the recorded message sending and receiving positions, and consumption continues from the breakpoint, ensuring data reliability.
According to the above technical solution, binary transaction data can be written synchronously into the memory area based on the memory file mapping technique, effectively reducing I/O (input/output) overhead in the process of persisting trade orders to the database; storage space is pre-allocated, trading space for time and optimizing message transmission performance. An asynchronous thread is started at a preset time interval to write the data in the memory area into the file area of the disk; since data written to disk is not lost, message pulling and retransmission from any position are supported, avoiding data loss caused by a fault at the instant of a synchronous disk flush. Parallel processing by asynchronous threads increases data write throughput and thereby improves the efficiency of persisting transaction messages.
FIG. 2 is a functional block diagram of a preferred embodiment of the transaction data storage device of the present invention. The transaction data storage device 11 comprises a processing unit 110, an acquisition unit 111, a writing unit 112, a parsing unit 113, a reading unit 114, and a persistence unit 115. The module/unit referred to in the present invention refers to a series of computer program segments, which are stored in a memory, capable of being executed by a processor and of performing a fixed function. In the present embodiment, the functions of the respective modules/units will be described in detail in the following embodiments.
The processing unit 110 is configured to perform message subscription processing through an onmessage interface;
the acquiring unit 111 is configured to acquire a reserved memory area when receiving binary transaction data reported through the onmessage interface;
the writing unit 112 is configured to synchronously write the binary transaction data into the memory area based on a memory file mapping technique;
the writing unit 112 is further configured to start an asynchronous thread at preset time intervals to write the data in the memory area into a file area of a disk; wherein the file area comprises an index file and a data file;
the writing unit 112 is further configured to read a message from the data file based on the index file and write the message to a binary message list;
the parsing unit 113 is configured to parse a message by calling a thread in an asynchronous thread pool to transfer the message in the binary message list into a callback function, so as to obtain a message file with a message sequence number;
the writing unit 112 is further configured to write the message file into a persistent message ring queue according to the message sequence number;
the reading unit 114 is configured to read messages from the persistent message ring queue by batches and submit the messages to an asynchronous distribution thread pool;
The persistence unit 115 is configured to perform persistence processing on the message read from the persistent message ring queue according to a message type based on the asynchronous distribution thread pool.
According to the above technical solution, binary transaction data can be written synchronously into the memory area based on the memory file mapping technique, effectively reducing I/O (input/output) overhead in the process of persisting trade orders to the database; storage space is pre-allocated, trading space for time and optimizing message transmission performance. An asynchronous thread is started at a preset time interval to write the data in the memory area into the file area of the disk; since data written to disk is not lost, message pulling and retransmission from any position are supported, avoiding data loss caused by a fault at the instant of a synchronous disk flush. Parallel processing by asynchronous threads increases data write throughput and thereby improves the efficiency of persisting transaction messages.
Fig. 3 is a schematic structural diagram of a computer device for implementing a transaction data storage method according to a preferred embodiment of the present invention.
The computer device 1 may comprise a memory 12, a processor 13 and a bus, and may further comprise a computer program, such as a transaction data storage program, stored in the memory 12 and executable on the processor 13.
It will be appreciated by those skilled in the art that the schematic diagram is merely an example of the computer device 1 and does not constitute a limitation of it. The computer device 1 may have a bus-type or star-type structure, and it may comprise more or fewer hardware or software components than illustrated, or a different arrangement of components; for example, it may further comprise an input-output device, a network access device, and the like.
It should be noted that the computer device 1 is only used as an example, and other electronic products that may be present in the present invention or may be present in the future are also included in the scope of the present invention by way of reference.
The memory 12 includes at least one type of readable storage medium including flash memory, a removable hard disk, a multimedia card, a card memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 12 may in some embodiments be an internal storage unit of the computer device 1, such as a removable hard disk of the computer device 1. The memory 12 may in other embodiments also be an external storage device of the computer device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the computer device 1. Further, the memory 12 may also include both an internal storage unit and an external storage device of the computer device 1. The memory 12 may be used not only for storing application software installed in the computer device 1 and various types of data, such as codes of transaction data storage programs, etc., but also for temporarily storing data that has been output or is to be output.
The processor 13 may be comprised of integrated circuits in some embodiments, for example, a single packaged integrated circuit, or may be comprised of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing unit, CPU), microprocessors, digital processing chips, graphics processors, a combination of various control chips, and the like. The processor 13 is a Control Unit (Control Unit) of the computer device 1, connects the respective components of the entire computer device 1 using various interfaces and lines, executes or executes programs or modules stored in the memory 12 (for example, executes a transaction data storage program or the like), and invokes data stored in the memory 12 to perform various functions of the computer device 1 and process data.
The processor 13 executes the operating system of the computer device 1 and various types of applications installed. The processor 13 executes the application program to implement the steps of the various transaction data storage method embodiments described above, such as the steps shown in fig. 1.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory 12 and executed by the processor 13 to complete the present invention. The one or more modules/units may be a series of computer readable instruction segments capable of performing the specified functions, which instruction segments describe the execution of the computer program in the computer device 1. For example, the computer program may be divided into a processing unit 110, an acquisition unit 111, a writing unit 112, a parsing unit 113, a reading unit 114, a persistence unit 115.
The integrated units implemented in the form of software functional modules described above may be stored in a computer readable storage medium. The software functional modules are stored in a storage medium and include instructions for causing a computer device (which may be a personal computer, a computer device, or a network device, etc.) or processor (processor) to perform portions of the transaction data storage methods described in various embodiments of the invention.
The modules/units integrated in the computer device 1 may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on this understanding, the present invention may also be implemented by a computer program for instructing a relevant hardware device to implement all or part of the procedures of the above-mentioned embodiment method, where the computer program may be stored in a computer readable storage medium and the computer program may be executed by a processor to implement the steps of each of the above-mentioned method embodiments.
Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory, or the like.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from the use of blockchain nodes, and the like.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanism, encryption algorithm and the like. The Blockchain (Blockchain), which is essentially a decentralised database, is a string of data blocks that are generated by cryptographic means in association, each data block containing a batch of information of network transactions for verifying the validity of the information (anti-counterfeiting) and generating the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
The bus may be a peripheral component interconnect standard (peripheral component interconnect, PCI) bus or an extended industry standard architecture (extended industry standard architecture, EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, only one straight line is shown in fig. 3, but not only one bus or one type of bus. The bus is arranged to enable a connection communication between the memory 12 and at least one processor 13 or the like.
Although not shown, the computer device 1 may further comprise a power source (such as a battery) for powering the various components, preferably the power source may be logically connected to the at least one processor 13 via a power management means, whereby the functions of charge management, discharge management, and power consumption management are achieved by the power management means. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The computer device 1 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described in detail herein.
Further, the computer device 1 may also comprise a network interface, optionally comprising a wired interface and/or a wireless interface (e.g. WI-FI interface, bluetooth interface, etc.), typically used for establishing a communication connection between the computer device 1 and other computer devices.
The computer device 1 may optionally further comprise a user interface, which may be a Display, an input unit, such as a Keyboard (Keyboard), or a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like. The display may also be referred to as a display screen or display unit, as appropriate, for displaying information processed in the computer device 1 and for displaying a visual user interface.
It should be understood that the embodiments described are for illustrative purposes only and are not limited to this configuration in the scope of the patent application.
Fig. 3 shows only a computer device 1 with components 12-13, it being understood by those skilled in the art that the structure shown in fig. 3 is not limiting of the computer device 1 and may include fewer or more components than shown, or may combine certain components, or a different arrangement of components.
In connection with fig. 1, the memory 12 in the computer device 1 stores a plurality of instructions to implement a transaction data storage method, the processor 13 being executable to implement:
message subscription processing is carried out through an OnMessage interface;
when binary transaction data reported through the OnMessage interface is received, a reserved memory area is acquired;
synchronously writing the binary transaction data into the memory area based on a memory file mapping technology;
starting an asynchronous thread every preset time interval to write the data in the memory area into a file area of a disk; wherein the file area comprises an index file and a data file;
reading a message from the data file based on the index file and writing the message into a binary message list;
Transferring the information in the binary information list into a callback function to analyze the information by calling a thread in an asynchronous thread pool to obtain an information file with an information sequence number;
writing the message file into a persistent message annular queue according to the message sequence number;
reading messages from the persistent message annular queue according to batches and submitting the messages to an asynchronous distribution thread pool;
and executing persistence processing on the messages read from the persistence message annular queue according to the message type based on the asynchronous distribution thread pool.
Specifically, the specific implementation method of the above instructions by the processor 13 may refer to the description of the relevant steps in the corresponding embodiment of fig. 1, which is not repeated herein.
The data in this case were obtained legally.
In the several embodiments provided in the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The invention is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. The units or means stated in the invention may also be implemented by one unit or means, either by software or hardware. The terms first, second, etc. are used to denote a name, but not any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (9)

1. A transaction data storage method, the transaction data storage method comprising:
message subscription processing is carried out through an OnMessage interface;
when binary transaction data reported through the OnMessage interface is received, a reserved memory area is acquired;
synchronously writing the binary transaction data into the memory area based on a memory file mapping technology;
starting an asynchronous thread every preset time interval to write the data in the memory area into a file area of a disk; wherein the file area comprises an index file and a data file;
Reading a message from the data file based on the index file and writing the message into a binary message list;
transferring the information in the binary information list into a callback function to analyze the information by calling a thread in an asynchronous thread pool to obtain an information file with an information sequence number;
writing the message file into a persistent message annular queue according to the message sequence number;
reading messages from the persistent message annular queue according to batches and submitting the messages to an asynchronous distribution thread pool;
performing persistence processing on the messages read from the persistence message ring queue according to message types based on the asynchronous distribution thread pool, including: acquiring a message of a non-state message type from the read message to form a first message set; determining the message attribute of each first message in the first message set, and determining the message processing amount of each asynchronous distribution thread in the asynchronous distribution thread pool; grouping the messages in the first message set according to the message attribute of each first message and the message processing capacity of each asynchronous distribution thread to obtain at least one message group; obtaining a preset batch warehouse entry number; carrying out batch processing on the messages in each message group according to the batch warehousing number to obtain a multi-batch message list; submitting the multi-batch message list to an asynchronous database-falling thread pool; calling a service interface corresponding to each message attribute based on each thread in the asynchronous database-falling thread pool, and writing each first message into a corresponding service table in a database in parallel; wherein the non-status message type messages have no order keeping requirements.
2. The transaction data storage method of claim 1, wherein prior to acquiring the reserved memory area, the method further comprises:
pre-allocating a storage space of a preset size in a memory based on a file writing type;
and initializing the storage space to obtain the memory area.
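For illustration only, and assuming a hypothetical mapping from file writing type to region size, the following sketch pre-allocates and initializes a reserved region as recited in claim 2; all names and sizes are assumptions.

    #include <cstddef>
    #include <map>
    #include <string>
    #include <vector>

    std::vector<char> ReserveRegion(const std::string& file_writing_type) {
        // Hypothetical mapping from file writing type to a preset region size.
        static const std::map<std::string, std::size_t> kPresetSize = {
            {"index", std::size_t(16) << 20},   // 16 MiB for index files
            {"data",  std::size_t(256) << 20},  // 256 MiB for data files
        };
        auto it = kPresetSize.find(file_writing_type);
        std::size_t size = (it != kPresetSize.end()) ? it->second : (std::size_t(64) << 20);
        // Pre-allocate the storage space; the vector constructor zero-initializes it,
        // which serves as the initialization step that yields the memory area.
        return std::vector<char>(size);
    }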
3. The transaction data storage method of claim 1, wherein reading a message from the data file based on the index file and writing the message into a binary message list comprises:
reading a message position and a message length from the index file;
and reading a message from the data file based on the message position and the message length and writing the message into the binary message list.
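For illustration only: a sketch of one possible index-file layout, in which each entry records a message position and a message length, and of reading the corresponding messages from the data file into a binary message list. The IndexEntry layout and function names are assumptions, not the claimed format.

    #include <cstdint>
    #include <fstream>
    #include <string>
    #include <vector>

    #pragma pack(push, 1)
    struct IndexEntry {
        std::uint64_t position;   // byte offset of the message in the data file
        std::uint32_t length;     // message length in bytes
    };
    #pragma pack(pop)

    std::vector<std::string> ReadBinaryMessageList(const std::string& index_path,
                                                   const std::string& data_path) {
        std::ifstream index(index_path, std::ios::binary);
        std::ifstream data(data_path, std::ios::binary);
        std::vector<std::string> binary_message_list;
        IndexEntry entry{};
        // Read the message position and message length from the index file,
        // then read the message body from the data file.
        while (index.read(reinterpret_cast<char*>(&entry), sizeof(entry))) {
            std::string msg(entry.length, '\0');
            data.seekg(static_cast<std::streamoff>(entry.position));
            data.read(&msg[0], entry.length);
            binary_message_list.push_back(std::move(msg));
        }
        return binary_message_list;
    }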
4. The transaction data storage method of claim 1, wherein reading messages from the persistent message ring queue in batches and submitting the messages to an asynchronous distribution thread pool comprises:
obtaining a pre-configured message processing amount of each batch;
reading messages from the persistent message ring queue according to the message processing amount of each batch and submitting the messages to the asynchronous distribution thread pool;
wherein the asynchronous distribution thread pool comprises a plurality of asynchronous distribution threads, and each asynchronous distribution thread is used for distributing the messages read from the persistent message ring queue.
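For illustration only: a sketch of reading messages from a ring queue in batches of a pre-configured size and handing each batch to a distribution thread. RingQueue here is a simplified mutex-guarded stand-in for the persistent message ring queue, and the per-batch threads stand in for the asynchronous distribution thread pool; all names are hypothetical.

    #include <cstddef>
    #include <deque>
    #include <mutex>
    #include <string>
    #include <thread>
    #include <vector>

    class RingQueue {
    public:
        void Push(std::string m) {
            std::lock_guard<std::mutex> g(mu_);
            q_.push_back(std::move(m));
        }
        // Pop at most n messages: one batch of the pre-configured message processing amount.
        std::vector<std::string> PopBatch(std::size_t n) {
            std::lock_guard<std::mutex> g(mu_);
            std::vector<std::string> batch;
            while (!q_.empty() && batch.size() < n) {
                batch.push_back(std::move(q_.front()));
                q_.pop_front();
            }
            return batch;
        }
    private:
        std::mutex mu_;
        std::deque<std::string> q_;
    };

    void DistributeInBatches(RingQueue& queue, std::size_t batch_size) {
        std::vector<std::thread> distributors;   // stand-in for the asynchronous distribution thread pool
        for (auto batch = queue.PopBatch(batch_size); !batch.empty();
             batch = queue.PopBatch(batch_size)) {
            distributors.emplace_back([b = std::move(batch)]() {
                for (const auto& msg : b) {
                    (void)msg;                   // each thread would dispatch its batch by message type
                }
            });
        }
        for (auto& t : distributors) t.join();
    }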
5. The transaction data storage method of claim 1, wherein the performing persistence processing on the messages read from the persistent message ring queue according to message types based on the asynchronous distribution thread pool comprises:
acquiring a message of a status message type from the read message to form a second message set;
acquiring a message identifier corresponding to the status message type;
marking each second message in the second set of messages based on the message identifier;
writing each marked second message into a status message ring queue;
reading the message from the status message ring queue for persistence processing;
wherein the messages of the status message type have order preserving requirements.
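For illustration only: a sketch of splitting the read messages by type, marking each status message with the status message identifier, and appending it to a FIFO status message ring queue so that arrival order is preserved. The Message structure and field names are assumptions.

    #include <deque>
    #include <string>
    #include <vector>

    struct Message {
        std::string type;        // "status" for status messages, anything else otherwise
        std::string identifier;  // message identifier attached when a status message is marked
        std::string payload;
    };

    void SplitAndMark(std::vector<Message>& read_messages,
                      const std::string& status_identifier,
                      std::vector<Message>& first_message_set,     // non-status: no order-keeping requirement
                      std::deque<Message>& status_ring_queue) {    // status: order must be preserved
        for (auto& m : read_messages) {
            if (m.type == "status") {
                m.identifier = status_identifier;   // mark each second message with the identifier
                status_ring_queue.push_back(m);     // FIFO queue keeps arrival order
            } else {
                first_message_set.push_back(m);
            }
        }
    }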
6. The transaction data storage method of claim 5, wherein reading the message from the status message ring queue for persistence processing comprises:
acquiring a preset number of groups;
initializing at least one group queue according to the number of groups, and numbering each group queue;
grouping the messages read from the status message ring queue according to service logic to obtain a grouped message list and a group number of each group;
matching the grouped message list with the at least one group queue according to preconfigured calculation logic and the group number of each group, and writing the message list of each group into the corresponding group queue;
in each group queue, sequentially calling corresponding service interfaces according to a preset processing order of the messages in the group, and serially writing each message into a corresponding service table in the database;
wherein messages are written to the database in parallel across the group queues.
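For illustration only: a sketch of the order-preserving persistence path of claim 6, assuming that the preconfigured calculation logic is a hash of a service key modulo the number of groups. Messages within one group queue are written serially in arrival order, while the group queues themselves are drained by parallel threads; all names are hypothetical.

    #include <cstddef>
    #include <functional>
    #include <string>
    #include <thread>
    #include <vector>

    struct StatusMessage {
        std::string service_key;   // key derived from service logic (e.g. an account or order id)
        std::string payload;
    };

    void PersistByGroup(const std::vector<StatusMessage>& messages, std::size_t group_count) {
        std::vector<std::vector<StatusMessage>> group_queues(group_count);
        // Assign each message to a numbered group queue: hash of the service key modulo group count.
        for (const auto& m : messages) {
            std::size_t idx = std::hash<std::string>{}(m.service_key) % group_count;
            group_queues[idx].push_back(m);
        }
        std::vector<std::thread> workers;
        for (auto& queue : group_queues) {          // group queues are written to the database in parallel
            workers.emplace_back([&queue]() {
                for (const auto& m : queue) {       // within one group queue, messages are written serially
                    (void)m;                        // write_to_service_table(m) would go here (hypothetical)
                }
            });
        }
        for (auto& t : workers) t.join();
    }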
7. A transaction data storage device, the transaction data storage device comprising:
the processing unit is used for carrying out message subscription processing through an OnMessage interface;
the acquisition unit is used for acquiring a reserved memory area when receiving binary transaction data reported through the OnMessage interface;
the writing unit is used for synchronously writing the binary transaction data into the memory area based on a memory file mapping technology;
the writing unit is further used for starting an asynchronous thread at preset time intervals to write the data in the memory area into a file area of a disk; wherein the file area comprises an index file and a data file;
the writing unit is further used for reading a message from the data file based on the index file and writing the message into a binary message list;
the analysis unit is used for calling a thread in an asynchronous thread pool to transfer the messages in the binary message list into a callback function for analysis, so as to obtain a message file with a message sequence number;
the writing unit is further used for writing the message file into a persistent message ring queue according to the message sequence number;
the reading unit is used for reading messages from the persistent message ring queue in batches and submitting the messages to an asynchronous distribution thread pool;
a persistence unit, configured to perform persistence processing on the messages read from the persistent message ring queue according to message types based on the asynchronous distribution thread pool, including: acquiring messages of a non-status message type from the read messages to form a first message set; determining the message attribute of each first message in the first message set, and determining the message processing amount of each asynchronous distribution thread in the asynchronous distribution thread pool; grouping the messages in the first message set according to the message attribute of each first message and the message processing amount of each asynchronous distribution thread to obtain at least one message group; obtaining a preset batch warehousing number; carrying out batch processing on the messages in each message group according to the batch warehousing number to obtain a multi-batch message list; submitting the multi-batch message list to an asynchronous database-writing thread pool; calling a service interface corresponding to each message attribute based on each thread in the asynchronous database-writing thread pool, and writing each first message into a corresponding service table in a database in parallel; wherein the messages of the non-status message type have no order-keeping requirements.
8. A computer device, the computer device comprising:
a memory storing at least one instruction; and
a processor executing the at least one instruction stored in the memory to implement the transaction data storage method as claimed in any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that: the computer-readable storage medium has stored therein at least one instruction, and the at least one instruction is executed by a processor in a computer device to implement the transaction data storage method of any one of claims 1 to 6.
CN202311035280.5A 2023-08-17 2023-08-17 Transaction data storage method, device, equipment and medium Active CN116755637B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311035280.5A CN116755637B (en) 2023-08-17 2023-08-17 Transaction data storage method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311035280.5A CN116755637B (en) 2023-08-17 2023-08-17 Transaction data storage method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN116755637A CN116755637A (en) 2023-09-15
CN116755637B true CN116755637B (en) 2024-02-09

Family

ID=87951835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311035280.5A Active CN116755637B (en) 2023-08-17 2023-08-17 Transaction data storage method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN116755637B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106201443A (en) * 2016-07-27 2016-12-07 福建富士通信息软件有限公司 A method and system for multi-concurrent file writing based on the Storm streaming computing framework
CN110221987A (en) * 2019-05-05 2019-09-10 福建天泉教育科技有限公司 Data storage method and storage medium in the case of a system emergency
CN111625350A (en) * 2020-05-09 2020-09-04 苏州浪潮智能科技有限公司 Memory allocation method, device, equipment and storage medium for network message data
CN112486913A (en) * 2020-11-12 2021-03-12 思创数码科技股份有限公司 Log asynchronous storage method and device based on cluster environment
CN112817544A (en) * 2021-03-05 2021-05-18 北京星网锐捷网络技术有限公司 Data processing method, storage system and storage device
CN113836184A (en) * 2021-09-18 2021-12-24 深圳前海微众银行股份有限公司 Service persistence method and device
CN114490050A (en) * 2022-01-18 2022-05-13 北京京东拓先科技有限公司 Data synchronization method and device
CN115344270A (en) * 2022-08-23 2022-11-15 中国民航信息网络股份有限公司 Data processing method and device, storage medium and electronic equipment
CN115766610A (en) * 2022-10-26 2023-03-07 中国航空工业集团公司西安航空计算技术研究所 Message queue based on publish-subscribe
CN116155539A (en) * 2022-12-06 2023-05-23 吉林省吉林祥云信息技术有限公司 Automatic penetration test method, system, equipment and storage medium based on information flow asynchronous processing algorithm

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10628211B2 (en) * 2017-06-15 2020-04-21 Mastercard International Incorporated Systems and methods for asynchronously consolidating and transmitting data

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106201443A (en) * 2016-07-27 2016-12-07 福建富士通信息软件有限公司 A method and system for multi-concurrent file writing based on the Storm streaming computing framework
CN110221987A (en) * 2019-05-05 2019-09-10 福建天泉教育科技有限公司 Data storage method and storage medium in the case of a system emergency
CN111625350A (en) * 2020-05-09 2020-09-04 苏州浪潮智能科技有限公司 Memory allocation method, device, equipment and storage medium for network message data
CN112486913A (en) * 2020-11-12 2021-03-12 思创数码科技股份有限公司 Log asynchronous storage method and device based on cluster environment
CN112817544A (en) * 2021-03-05 2021-05-18 北京星网锐捷网络技术有限公司 Data processing method, storage system and storage device
CN113836184A (en) * 2021-09-18 2021-12-24 深圳前海微众银行股份有限公司 Service persistence method and device
CN114490050A (en) * 2022-01-18 2022-05-13 北京京东拓先科技有限公司 Data synchronization method and device
CN115344270A (en) * 2022-08-23 2022-11-15 中国民航信息网络股份有限公司 Data processing method and device, storage medium and electronic equipment
CN115766610A (en) * 2022-10-26 2023-03-07 中国航空工业集团公司西安航空计算技术研究所 Message queue based on publish-subscribe
CN116155539A (en) * 2022-12-06 2023-05-23 吉林省吉林祥云信息技术有限公司 Automatic penetration test method, system, equipment and storage medium based on information flow asynchronous processing algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A consistency mechanism for a distributed persistent memory file ***; Chen Bo et al.; Journal of Computer Research and Development (《计算机研究与发展》), Issue 03, pp. 660-667 *

Also Published As

Publication number Publication date
CN116755637A (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN112653760B (en) Cross-server file transmission method and device, electronic equipment and storage medium
CN112631806B (en) Asynchronous message arrangement and scheduling method and device, electronic equipment and storage medium
US20170053023A1 (en) System to organize search and display unstructured data
CN113163009A (en) Data transmission method, device, electronic equipment and storage medium
CN112559535A (en) Multithreading-based asynchronous task processing method, device, equipment and medium
CN114169303A (en) Method, device, equipment and medium for editing table based on vue.js
CN116743885B (en) UDP engine-based data transmission method, device, equipment and medium
CN113282854A (en) Data request response method and device, electronic equipment and storage medium
CN116755637B (en) Transaction data storage method, device, equipment and medium
CN115314570B (en) Data issuing method, device, equipment and medium based on protocol development framework
CN114201466B (en) Anti-cache breakdown method, device, equipment and readable storage medium
CN114124878B (en) Market data issuing method, device, system, equipment and medium
CN114697316A (en) Batch downloading method, device and equipment of data and computer readable medium
CN115269709B (en) Inquiry and playback method, device, equipment and medium based on market data
CN115277859B (en) Request scheduling method, device, equipment and medium
CN115964307B (en) Automatic test method, device, equipment and medium for transaction data
CN116662208B (en) Transaction testing method, device and medium based on distributed baffle
CN116483747B (en) Quotation snapshot issuing method, device, equipment and medium
CN115543214B (en) Data storage method, device, equipment and medium in low-delay scene
CN115731047B (en) Batch order processing method, equipment and medium
CN115174698B (en) Market data decoding method, device, equipment and medium based on table entry index
CN114860349B (en) Data loading method, device, equipment and medium
CN115065642B (en) Code table request method, device, equipment and medium under bandwidth limitation
CN113347230B (en) Load balancing method, device, equipment and medium based on programmable switch
CN115297062B (en) Application layer network data transmission method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant