CN115357552A - Message queue message storage method, device, equipment, storage medium and program product


Info

Publication number
CN115357552A
Authority
CN
China
Prior art keywords
message
file
memory
file system
disk
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211029122.4A
Other languages
Chinese (zh)
Inventor
谢波
王延友
程春生
龚展鸿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202211029122.4A
Publication of CN115357552A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/17 Details of further file system functions
    • G06F16/172 Caching, prefetching or hoarding of files
    • G06F16/178 Techniques for file synchronisation in file systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure provides a message queue message storage method, relates to the technical field of cloud computing, and can be applied to the field of financial technology. The method comprises the following steps: writing message data received by the message queue into a message log file of a memory file system according to the message topic; writing the message log file into a cache in the order of topic partitions and segments; and flushing the cache according to the file usage rate of the memory file system, so as to complete the disk persistence operation. The present disclosure also provides a message queue message storage device, equipment, storage medium and program product.

Description

Message queue message storage method, device, equipment, storage medium and program product
Technical Field
The present disclosure relates to the field of cloud computing technologies, and in particular, to a method, a device, equipment, a storage medium, and a program product for storing message queue messages.
Background
In the current disk persistence scheme of message queues, the different segment log files of the different partitions under a message topic are synchronized into the page cache at a fixed refresh interval or after a fixed number of messages, and the content of the page cache is then written into the on-disk log files through the file read/write mechanism of the operating system, completing the storage of the messages.
However, this persistence mode is easily affected by how busy the operating system's disk IO is. Once service concurrency becomes high, the message send rate far exceeds the message consumption rate; combined with busy disk IO, messages pile up, causing a large number of timeouts and delays in the message queue and affecting client response.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
In view of the foregoing, the present disclosure provides a message queue message storage method, device, equipment, medium, and program product that alleviate message queue backlog.
According to a first aspect of the present disclosure, a message queue message storage method is provided, the method including:
writing message data received by the message queue into a message log file of a memory file system according to the message topic;
writing the message log file into a cache in the order of topic partitions and segments; and
flushing the cache according to the file usage rate of the memory file system, so as to complete the disk persistence operation.
According to an embodiment of the present disclosure, flushing the cache according to the file usage rate of the memory file system to complete the disk persistence operation includes:
acquiring the file usage rate of the memory file system in real time; and
when the file usage rate is determined to be greater than a preset threshold, storing the message log file to a disk.
According to the embodiment of the present disclosure, before writing the message data received by the message queue into the message log file of the memory file system according to the message topic, the method further includes:
completing the initialization of the memory file system in the initialization phase of the message queue server.
According to an embodiment of the present disclosure, completing the initialization of the memory file system includes:
virtualizing part of the operating system memory to generate a memory file system;
binding the message queue message directory with the memory file system; and
binding the hard disk device number with the message topic partition file.
According to the embodiment of the present disclosure, after the disk persistence operation is completed, the method further comprises:
clearing, in batch, the message data in the message log files that have been flushed to disk.
According to the embodiment of the disclosure, message data of different partitions of the same message topic are stored in the same message log file.
A second aspect of the present disclosure provides a message queue message storage device, including:
a message log file generation module, configured to write the message data received by the message queue into a message log file of the memory file system according to the message topic;
a memory message module, configured to write the message log file into a cache in the order of topic partitions and segments; and
a disk flush module, configured to flush the cache according to the file usage rate of the memory file system, so as to complete the disk persistence operation.
According to an embodiment of the present disclosure, the device further comprises:
a memory file initialization module, configured to complete the initialization of the memory file system in the initialization phase of the message queue server; and
a data clearing module, configured to clear, in batch, the message data in the message log files that have been flushed to disk.
According to an embodiment of the present disclosure, the disk flush module includes:
an acquisition submodule, configured to acquire the file usage rate of the memory file system in real time; and
a flush submodule, configured to store the message log file to a disk when the file usage rate is determined to be greater than a preset threshold.
According to an embodiment of the present disclosure, the memory file initialization module includes:
a generation submodule, configured to virtualize part of the operating system memory to generate a memory file system;
a first binding submodule, configured to bind the message queue message directory with the memory file system; and
a second binding submodule, configured to bind the hard disk device number with the message topic partition file.
A third aspect of the present disclosure provides an electronic device, comprising: one or more processors; a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the message queue message storage method described above.
A fourth aspect of the present disclosure also provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the above-mentioned message queue message storage method.
A fifth aspect of the present disclosure also provides a computer program product comprising a computer program which, when executed by a processor, implements the message queue message storage method described above.
According to the message queue message storage method provided by the embodiments of the present disclosure, message data received by the message queue is written into a message log file of a memory file system according to the message topic; the message log file is written into a cache in the order of topic partitions and segments; and the cache is flushed according to the file usage rate of the memory file system, so as to complete the disk persistence operation. By introducing the Linux memory file system tmpfs, the data of the message queue is also placed in memory; during service peaks, the persistence mechanism no longer flushes at fixed time intervals or fixed message counts, but is driven by the usage rate of the memory file system, exploiting its high read/write efficiency.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be apparent from the following description of embodiments of the disclosure, which proceeds with reference to the accompanying drawings, in which:
fig. 1a schematically illustrates a message persistence scheme in the related art;
FIG. 1b schematically illustrates a block diagram of a message queue message storage device according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates an application scenario diagram of a message queue message storage method, apparatus, device, medium, and program product according to embodiments of the disclosure;
fig. 3 schematically shows a flowchart of a message queue message storage method provided in accordance with an embodiment of the present disclosure;
fig. 4 schematically illustrates a flowchart of another message queue message storage method according to an embodiment of the present disclosure;
FIG. 5 is a block diagram schematically illustrating a structure of a message queue message storage device according to an embodiment of the present disclosure; and
fig. 6 schematically illustrates a block diagram of an electronic device adapted to implement a message queue message storage method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that these descriptions are illustrative only and are not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
In those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
The terms appearing in the embodiments of the present disclosure are explained first:
Temporary file system tmpfs: a memory-based Linux file system, typically mounted as the shm directory at /dev/shm. It is a temporary file system with very high read/write speed, and supports fast dynamic expansion and release.
Page cache: the page cache is the in-memory cache of a file. When a file is opened, it is first loaded into the page cache; when a file is written, the data is first written into the page cache and later flushed to disk from there. In a Linux system, when the percentage of dirty pages in total memory exceeds a specific threshold, a background thread starts flushing the dirty pages to disk. This threshold can be lowered appropriately to exploit the speed of memory and accelerate file operations.
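The dirty-page threshold decision described above can be sketched as follows. This is an illustrative example, not code from the patent; the function name and the 10% default (mirroring a typical Linux `vm.dirty_background_ratio`) are assumptions.

```python
# Illustrative sketch (not from the patent): the kernel-style decision of
# when background writeback should start, based on a dirty-page ratio
# threshold such as Linux's vm.dirty_background_ratio (often around 10%).

def should_flush_dirty_pages(dirty_bytes: int, total_bytes: int,
                             background_ratio: float = 0.10) -> bool:
    """Return True when dirty pages exceed the background threshold."""
    return dirty_bytes / total_bytes > background_ratio

# 2 GiB dirty out of 16 GiB total is 12.5%, above the 10% threshold.
print(should_flush_dirty_pages(2 << 30, 16 << 30))  # True
```

Lowering the ratio makes writeback start earlier, which is the tuning the definition above suggests.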
Partition: to achieve scalability and improve concurrency, a very large Topic can be distributed over multiple brokers (that is, servers). A Topic can be divided into multiple partitions, and each partition is generally an ordered queue whose offsets increase continuously from small to large.
Segment File: the kafka message is stored in a Segment file storage mode, each partition is divided into N small files, all the messages in the partition are stored together, and the division rule can be divided according to the size of the file or the number of the messages.
Disk device number: the Linux kernel loads the corresponding driver through the major number, and accesses, through the minor number, one specific piece of hardware among devices of the same type (for example, a particular hard disk, printer or network card). The combination of major and minor number is a 32-bit value of type dev_t: the first 12 bits hold the major number and the last 20 bits the minor number, and Linux selects the specific device through the minor number.
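The 12/20-bit split of dev_t can be illustrated with a small sketch. This is an assumption-labeled example of the layout described above; real code should prefer Python's os.makedev/os.major/os.minor helpers, which handle the kernel's actual encoding.

```python
# Sketch of the dev_t layout described above: in the 32-bit view, the
# top 12 bits hold the major number and the low 20 bits the minor number.

def make_dev(major: int, minor: int) -> int:
    return ((major & 0xFFF) << 20) | (minor & 0xFFFFF)

def split_dev(dev: int) -> tuple[int, int]:
    return (dev >> 20) & 0xFFF, dev & 0xFFFFF

# /dev/sda1 is conventionally major 8, minor 1 on Linux.
assert split_dev(make_dev(8, 1)) == (8, 1)
```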
Fig. 1a schematically illustrates a message persistence scheme in the related art. Fig. 1b schematically shows a block diagram of a message queue message storage device according to an embodiment of the disclosure. As shown in fig. 1a, mainstream message queues currently store messages by topic, partition, replica, segment and index. Queue messages are first classified with the topic (Topic) as the basic unit, while on disk they are actually stored by partition: each Topic is divided into multiple partitions, and each message in each partition is assigned a unique message id, that is, an offset. Each partition is then further divided into multiple log segments (LogSegment), which is equivalent to evenly splitting one huge file into relatively small files, facilitating the lookup, maintenance and cleanup of messages. Partitions are physically stored on disk as folders; each LogSegment under a partition corresponds to one log file and two index files on disk, and possibly other files (such as a snapshot index file with the ".snapshot" suffix).
When Kafka writes messages to the on-disk log, each partition corresponds to a folder named <topic>-<partition>. For example, an order topic named "Topic-order" with 3 partitions is represented in physical storage by 3 partition folders: "Topic-order-0", "Topic-order-1" and "Topic-order-2". Each folder contains the log files of its LogSegments (with ".log" as the file suffix). Each LogSegment has a base offset (baseOffset) representing the offset of the first message in the current LogSegment, and each subsequent segment file is named after the offset that follows the last message of the previous segment. For example, the base offset of the first LogSegment is 0, so its log file is 00000000000000000000.log (the base offset zero-padded to 20 digits), and the second log file is named by its own base offset in the same way.
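The naming convention just described can be sketched with two illustrative helpers (not part of the patent):

```python
# Kafka-style layout sketch: a partition folder "<topic>-<partition>" and
# segment log files named by their 20-digit zero-padded base offset.

def partition_dir(topic: str, partition: int) -> str:
    return f"{topic}-{partition}"

def segment_log_name(base_offset: int) -> str:
    return f"{base_offset:020d}.log"

print(partition_dir("Topic-order", 0))  # Topic-order-0
print(segment_log_name(0))              # 00000000000000000000.log
```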
From the operating-system level, when a log segment message file is flushed to disk, the application first calls (explicitly or implicitly) the standard-library file buffer function (fflush()), copying the message data into the kernel cache (page cache); the system kernel then calls the hard disk driver to write the dirty data in the page cache into the disk data area, completing the write of the message data. However, this scheme is affected by how busy the operating system's disk IO is: once service concurrency is high, the message send rate far exceeds the message consumption rate, and combined with busy disk IO this causes message pile-up and a large number of timeouts and delays in the message queue.
Based on the above technical problem, an embodiment of the present disclosure provides a message queue message storage method, including: writing message data received by the message queue into a message log file of a memory file system according to the message topic; writing the message log file into a cache in the order of topic partitions and segments; and flushing the cache according to the file usage rate of the memory file system, so as to complete the disk persistence operation.
As shown in fig. 1b, the message queue message storage device in the embodiment of the present disclosure includes a memory file initialization unit 10, a front-end message queue unit 20, a memory message file unit 30, a refresh control unit 40, a batch persistence unit 50, and a memory cache cleaning unit 60. The memory file initialization unit 10 is responsible for virtualizing part of the memory into a memory file system (tmpfs) through the virtual file system, and binding it with the message storage directory. The front-end message queue unit 20 is responsible for accessing and acquiring the service transaction messages from the producer, as in the related art. The memory message file unit 30 binds the on-disk message storage to memory files by introducing the Linux memory file system tmpfs, achieving fast reading and writing of message data when handling bursts of transactions. The refresh control unit 40 directly controls the flush timing by monitoring the use of the memory file system and adjusting the usage threshold. The batch persistence unit 50 performs batch asynchronous background flushes to the corresponding disk files in units of partition log files, completing the persistence of the in-memory message data. The memory cache cleaning unit 60 is responsible for identifying the list of log files already flushed to disk and cleaning them in batch.
Fig. 2 schematically illustrates an application scenario diagram of a message queue message storage method, apparatus, device, medium and program product according to an embodiment of the present disclosure.
As shown in fig. 2, the application scenario 100 according to this embodiment may include a message data persistence scenario. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use terminal devices 101, 102, 103 to interact with a server 105 over a network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a message queue server: for messages generated by producers, the message queue server writes the received message data into a message log file of the memory file system, and performs the flush-to-disk operation according to the file usage rate of the memory file system.
It should be noted that the message queue message storage method provided in the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the message queue message storage provided by the embodiments of the present disclosure may be generally disposed in the server 105. The message queue message storage method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Correspondingly, the message queue message storage device provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 2 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
It should be noted that the method and the apparatus for storing message in a message queue determined in the embodiment of the present disclosure may be used in the technical field of cloud computing, may also be used in the technical field of finance, and may also be used in any field other than the financial field.
The message queue message storage method according to the embodiments of the present disclosure will be described in detail below with reference to fig. 3 to 4, based on the scenario described in fig. 2.
Fig. 3 schematically shows a flowchart of a message queue message storage method according to an embodiment of the present disclosure. As shown in fig. 3, the message queue message storage method of this embodiment includes operations S210 to S230, which may be performed by a server or other computing device.
In operation S210, the message data received by the message queue is written into a message log file of the memory file system according to the message topic.
According to an embodiment of the present disclosure, message data of different partitions of the same message topic are stored in the same message log file.
In operation S220, the message log file is written into the cache in the order of topic partitions and segments.
In operation S230, the cache is flushed according to the file usage rate of the memory file system, so as to complete the disk persistence operation.
In one example, data produced by the Producer is transmitted to a Broker and written, according to its topic, into the log files under the different partition file directories in the memory file system (tmpfs); subsequently produced data is continuously appended to the end of the log file, and each record still has its own offset.
In the prior art, the different segment log files of the different partitions under one topic are scattered across different disk directories. To avoid this, the embodiment of the disclosure persists the whole log sequentially in topic-partition units: message data of different partitions of the same message topic are stored in the same message log file. In this Topic-Partition manner, out-of-order sequence numbers are avoided, the ordering of messages is guaranteed, the computation needed to locate and look up messages is reduced, and message read/write efficiency is improved.
Because the message log file is a file of the memory file system, the design of multiple partitions per Topic is changed into one partition per Topic by exploiting the high read/write efficiency of the memory file system. This achieves global ordering of the messages: separate offsets no longer need to be maintained for each partition, and multiple message data files are no longer needed.
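A minimal sketch of this one-topic-one-log design follows. It is an illustration under assumed names, not the patent's implementation; an in-memory list stands in for the tmpfs-backed log file.

```python
# One global offset per topic: messages from all partitions of a topic
# are appended to a single log, so per-partition offsets disappear.

class TopicLog:
    def __init__(self, topic: str):
        self.topic = topic
        self.next_offset = 0
        self.records = []  # stands in for the tmpfs-backed log file

    def append(self, partition: int, payload: bytes) -> int:
        """Append one message; return its globally ordered offset."""
        offset = self.next_offset
        self.records.append((offset, partition, payload))
        self.next_offset += 1
        return offset

log = TopicLog("Topic-order")
assert log.append(0, b"m0") == 0
assert log.append(2, b"m1") == 1  # different partition, same offset sequence
```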
In the memory file system, the cache is mapped under the control of the operating system kernel; message log data can be written directly into the cache in topic + partition + segment order, and the message data is numbered sequentially as the control condition for the subsequent batch flush to disk. To greatly improve the Producer's write throughput, the usage rate of the message files in the tmpfs memory file system is obtained by monitoring, and the flush is triggered actively according to the usage-rate threshold of the memory file system, instead of depending on the dirty-page ratio of the traditional operating system page cache.
According to the message queue message storage method provided by the embodiments of the present disclosure, message data received by the message queue is written into a message log file of a memory file system according to the message topic; the message log file is written into a cache in the order of topic partitions and segments; and the cache is flushed according to the file usage rate of the memory file system, so as to complete the disk persistence operation. By introducing the Linux memory file system tmpfs, the data of the message queue is also placed in memory, and the whole file is persisted in a one-Topic-one-partition-one-log-file form. This avoids the load-balanced incremental appending of segment files under different partition file directories, greatly reduces frequent background IO reads and writes, and optimizes the overall IO throughput.
Fig. 4 schematically illustrates a flowchart of another message queue message storage method according to an embodiment of the present disclosure; as shown in fig. 4, the method includes operations S310 to S360. Before the method of the embodiment of the present disclosure is performed, operation S310 needs to be performed to initialize the memory file system.
In operation S310, in the message queue server initialization phase, the initialization of the memory file system is completed.
According to the embodiment of the disclosure, part of the operating system memory is virtualized to generate a memory file system; the message queue message directory is bound with the memory file system; and the hard disk device number is bound with the message topic partition file.
In one example, in the initialization phase of the message queue server, the Broker virtualizes part of the memory as a memory file system (tmpfs) through the operating system's virtual file system, and then binds the message queue message directory using the mount command. To reduce disk IO switching, the device number of the hard disk is bound with the Topic partition file, that is, the partition file is fixed to the corresponding hard disk device.
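The mount steps can be sketched by building the Linux commands involved. This is a hedged illustration: the mount point, tmpfs size and message directory below are assumptions, not values taken from the patent.

```python
# Hedged sketch: constructing the Linux commands that create a tmpfs of a
# given size and bind it over the message directory. Paths and size are
# illustrative assumptions.

def tmpfs_mount_cmd(size: str, mount_point: str) -> list[str]:
    return ["mount", "-t", "tmpfs", "-o", f"size={size}", "tmpfs", mount_point]

def bind_mount_cmd(src: str, dst: str) -> list[str]:
    return ["mount", "--bind", src, dst]

print(" ".join(tmpfs_mount_cmd("8G", "/mnt/mq-tmpfs")))
print(" ".join(bind_mount_cmd("/mnt/mq-tmpfs", "/var/lib/mq/logs")))
```

In a real broker these command lists would be passed to subprocess.run (with root privileges) during server initialization.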
In operation S320, the message data received by the message queue is written into the message log file of the memory file system according to the message topic.
In operation S330, the message log file is written into the cache in the order of topic partitions and segments.
The technical solutions and principles of operations S320 and S330 may refer to operations S210 and S220, which are not described herein again.
In operation S340, a file usage rate of the memory file system is obtained in real time.
In operation S350, when it is determined that the file usage rate is greater than a preset threshold, the message log file is stored to a disk.
In an example, the file usage rate of the memory file system is obtained in real time. When the file usage rate is greater than a preset threshold, for example 80%, the data currently cached in the memory file system has reached the threshold and a flush to disk can be performed; the message log file is then stored to the disk bound to that file. If the file usage rate is less than or equal to the preset threshold, the operation is not executed: message data produced by the producer continues to be written into the memory file system and into the cache, until the file usage rate reaches the flush condition. The message log files to be persisted are thus submitted according to the usage rate of the memory file system and copied to disk in batch, instead of repeatedly performing IO flushes based on message counts, which improves the storage efficiency of the message data.
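The usage-rate check described above can be sketched as follows. The names and the block-count inputs (as a statvfs-style call would report them) are illustrative assumptions; only the 80% threshold comes from the example in the text.

```python
# Illustrative sketch of the flush trigger: compute the memory file
# system's usage rate from total/free block counts (as os.statvfs would
# report) and flush when it exceeds the 80% threshold.

FLUSH_THRESHOLD = 0.80

def usage_rate(total_blocks: int, free_blocks: int) -> float:
    return (total_blocks - free_blocks) / total_blocks

def should_flush(total_blocks: int, free_blocks: int,
                 threshold: float = FLUSH_THRESHOLD) -> bool:
    return usage_rate(total_blocks, free_blocks) > threshold

assert should_flush(1000, 150) is True    # 85% used: flush to disk
assert should_flush(1000, 500) is False   # 50% used: keep buffering
```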
In operation S360, the message log files that have been flushed to disk are cleared in batch.
In one example, to ensure that the memory file system has enough free space to receive newly produced messages, the batch persistence unit removes the already-flushed memory files promptly after the message data has been written to disk.
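The batch cleanup step can be sketched with a small illustrative helper (an assumption, not the patent's code): files whose segments were already flushed to disk are dropped from the tmpfs file list.

```python
# Sketch of the batch cleanup: remove tmpfs log files that have already
# been flushed to disk, keeping only the ones still pending.

def cleanup(flushed: set[str], tmpfs_files: list[str]) -> list[str]:
    """Return the tmpfs files that remain after removing flushed ones."""
    return [f for f in tmpfs_files if f not in flushed]

remaining = cleanup({"00000000000000000000.log"},
                    ["00000000000000000000.log", "00000000000000001024.log"])
assert remaining == ["00000000000000001024.log"]
```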
Based on the message queue message storage method, the disclosure also provides a message queue message storage device. The apparatus will be described in detail below with reference to fig. 5.
Fig. 5 schematically shows a block diagram of a message queue message storage device according to an embodiment of the present disclosure.
As shown in fig. 5, the message queue message storage device 500 of this embodiment includes a message log file generating module 510, a memory message module 520, and a disk-down refreshing module 530.
The message log file generating module 510 is configured to write message data received by the message queue into a message log file of the memory file system according to a message topic. In an embodiment, the message log file generating module 510 may be configured to perform the operation S210 described above, which is not described herein again.
The memory message module 520 is configured to write the message log file into the cache in order of topic partition and segment. In an embodiment, the memory message module 520 may be configured to perform operation S220 described above, which is not described here again.
The disk-flush refresh module 530 is configured to refresh the cache according to the file usage rate of the memory file system to complete a disk-flush operation. In an embodiment, the disk-flush refresh module 530 may be configured to perform operation S230 described above, which is not described here again.
According to the embodiment of the disclosure, the device further comprises a memory file initialization module and a data clearing module.
The memory file initialization module is used for completing initialization of a memory file system in the initialization stage of the message queue server. In an embodiment, the memory file initialization module may be configured to perform the operation S310 described above, which is not described herein again.
The data clearing module is used for clearing, in batch, the message data in the message log files that have been flushed to disk. In an embodiment, the data clearing module may be configured to perform operation S360 described above, which is not described here again.
According to an embodiment of the present disclosure, the disk-flush refresh module includes an obtaining submodule and a disk-flush submodule.
The obtaining submodule is used for obtaining the file usage rate of the memory file system in real time. In an embodiment, the obtaining submodule may be configured to perform operation S340 described above, which is not described here again.
The disk-flush submodule is used for storing the message log file to a disk when it is determined that the file usage rate is greater than a preset threshold. In an embodiment, the disk-flush submodule may be configured to perform operation S350 described above, which is not described here again.
According to the embodiment of the disclosure, the memory file initialization module comprises a generation submodule, a first binding submodule and a second binding submodule.
The generation submodule is used for virtualizing the memory of the operating system to generate a memory file system. In an embodiment, the generating submodule may be configured to perform the operation S310 described above, and is not described herein again.
The first binding submodule is used for binding the message queue message directory with the memory file system. In an embodiment, the first binding submodule may be configured to perform the operation S310 described above, and details are not described herein again.
The second binding submodule is used for binding the hard disk drive number with the message topic partition file. In an embodiment, the second binding submodule may be configured to perform operation S310 described above, which is not described here again.
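On Linux, the three initialization sub-steps could be realized with a tmpfs mount plus a bind mount. The sketch below only builds the commands rather than executing them (mounting requires root privileges), and all sizes, paths, and the partition-to-drive mapping are hypothetical illustrations, not values from the disclosure.

```python
def init_memory_fs_commands(size, mount_point, message_dir, disk_map):
    """Build the shell commands for initializing the memory file system:
    virtualize RAM as a file system, bind the message queue message
    directory to it, and record which disk each topic partition file
    is bound to for later flushes."""
    cmds = [
        # generate the memory file system by virtualizing OS memory
        f"mount -t tmpfs -o size={size} tmpfs {mount_point}",
        # bind the message queue message directory to the memory file system
        f"mount --bind {mount_point} {message_dir}",
    ]
    # hypothetical mapping of topic partition files to hard disk drives
    bindings = dict(disk_map)
    return cmds, bindings
```

A caller might run the returned commands once at broker start-up, then consult `bindings` when deciding which disk a flushed segment belongs on.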
According to an embodiment of the present disclosure, any plurality of the message log file generating module 510, the memory message module 520, and the disk-flush refresh module 530 may be combined into one module for implementation, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the message log file generating module 510, the memory message module 520, and the disk-flush refresh module 530 may be implemented at least partially as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on substrate, a system on package, or an application specific integrated circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, at least one of the message log file generating module 510, the memory message module 520, and the disk-flush refresh module 530 may be at least partially implemented as a computer program module which, when executed, performs the corresponding function.
Fig. 6 schematically illustrates a block diagram of an electronic device adapted to implement a message queue message storage method according to an embodiment of the present disclosure.
As shown in fig. 6, an electronic apparatus 900 according to an embodiment of the present disclosure includes a processor 901 which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage portion 908 into a Random Access Memory (RAM) 903. Processor 901 may comprise, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 901 may also include on-board memory for caching purposes. The processor 901 may comprise a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 903, various programs and data necessary for the operation of the electronic apparatus 900 are stored. The processor 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. The processor 901 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 902 and/or the RAM 903. Note that the programs may also be stored in one or more memories other than the ROM 902 and the RAM 903. The processor 901 may also perform various operations of the method flows according to the embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 900 may also include an input/output (I/O) interface 905, which is also connected to the bus 904. The electronic device 900 may also include one or more of the following components connected to the I/O interface 905: an input portion 906 including a keyboard, a mouse, and the like; an output portion 907 including a cathode ray tube (CRT) display, a liquid crystal display (LCD), a speaker, and the like; a storage portion 908 including a hard disk and the like; and a communication section 909 including a network interface card such as a LAN card or a modem. The communication section 909 performs communication processing via a network such as the Internet. A drive 910 is also connected to the I/O interface 905 as necessary. A removable medium 911, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 910 as necessary, so that a computer program read therefrom is installed into the storage portion 908 as needed.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement a message queue message storage method according to an embodiment of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 902 and/or the RAM 903 described above and/or one or more memories other than the ROM 902 and the RAM 903.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the method illustrated in the flow chart. When the computer program product runs in a computer system, the program code is used for causing the computer system to realize the message queue message storage method provided by the embodiment of the disclosure.
The computer program performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure when executed by the processor 901. The systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed in the form of a signal over a network medium, and downloaded and installed through the communication section 909 and/or installed from the removable medium 911. The computer program containing the program code may be transmitted using any suitable network medium, including but not limited to wireless and wired media, or any suitable combination of the foregoing.
According to embodiments of the present disclosure, program code for carrying out the computer programs provided by the embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. The programming languages include, but are not limited to, Java, C++, Python, and the "C" language. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments and/or claims of the present disclosure may be combined in various ways, even if such combinations are not expressly recited in the present disclosure. In particular, the features recited in the various embodiments and/or claims of the present disclosure may be combined without departing from the spirit and teachings of the present disclosure. All such combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the disclosure, and these alternatives and modifications are intended to fall within the scope of the disclosure.

Claims (11)

1. A message queue message storage method is characterized by comprising the following steps:
writing message data received by the message queue into a message log file of a memory file system according to a message topic;
writing the message log file into a cache in order of topic partition and segment; and
refreshing the cache according to the file usage rate of the memory file system to complete a disk-flush operation.
2. The method of claim 1, wherein refreshing the cache according to the file usage rate of the memory file system to complete the disk-flush operation comprises:
obtaining the file usage rate of the memory file system in real time; and
storing the message log file to a disk when it is determined that the file usage rate is greater than a preset threshold.
3. The method of claim 1, wherein before writing the message data received by the message queue into the message log file of the memory file system according to the message topic, the method further comprises:
completing initialization of the memory file system in an initialization stage of a message queue server.
4. The method of claim 3, wherein completing initialization of the memory file system comprises:
virtualizing operating system memory to generate the memory file system;
binding a message queue message directory with the memory file system; and
binding a hard disk drive number with a message topic partition file.
5. The method of claim 1, further comprising, after completing the disk-flush operation:
clearing, in batch, the message data in the message log files that have been flushed to disk.
6. The method according to any one of claims 1 to 5, wherein message data for different partitions of the same message topic are stored in the same message log file.
7. An apparatus for storing message in a message queue, the apparatus comprising:
the message log file generation module is used for writing the message data received by the message queue into a message log file of the memory file system according to a message topic;
the memory message module is used for writing the message log file into a cache in order of topic partition and segment; and
the disk-flush refresh module is used for refreshing the cache according to the file usage rate of the memory file system to complete a disk-flush operation.
8. The apparatus of claim 7, further comprising:
the memory file initialization module is used for completing initialization of the memory file system in an initialization stage of a message queue server; and
the data clearing module is used for clearing, in batch, the message data in the message log files that have been flushed to disk.
9. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method recited in any of claims 1-6.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method according to any one of claims 1 to 6.
11. A computer program product comprising a computer program which, when executed by a processor, carries out the method according to any one of claims 1 to 6.
CN202211029122.4A 2022-08-25 2022-08-25 Message queue message storage method, device, equipment, storage medium and program product Pending CN115357552A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211029122.4A CN115357552A (en) 2022-08-25 2022-08-25 Message queue message storage method, device, equipment, storage medium and program product


Publications (1)

Publication Number Publication Date
CN115357552A true CN115357552A (en) 2022-11-18

Family

ID=84005296

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211029122.4A Pending CN115357552A (en) 2022-08-25 2022-08-25 Message queue message storage method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN115357552A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117675720A (en) * 2024-01-31 2024-03-08 井芯微电子技术(天津)有限公司 Message transmission method and device, electronic equipment and storage medium
CN117675720B (en) * 2024-01-31 2024-05-31 井芯微电子技术(天津)有限公司 Message transmission method and device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination