CN111209228A - Method for accelerating storage of multi-path satellite load files - Google Patents

Method for accelerating storage of multi-path satellite load files

Info

Publication number
CN111209228A
CN111209228A
Authority
CN
China
Prior art keywords
cache
data
load data
storage
load
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010008656.3A
Other languages
Chinese (zh)
Other versions
CN111209228B (en)
Inventor
韦杰
刘伟亮
白亮
田文波
滕树鹏
胡浩
双小川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai aerospace computer technology research institute
Original Assignee
Shanghai aerospace computer technology research institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai aerospace computer technology research institute
Priority to CN202010008656.3A
Publication of CN111209228A
Application granted
Publication of CN111209228B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0844 Multiple simultaneous or quasi-simultaneous cache accessing
    • G06F 12/0853 Cache with multiport tag or data arrays
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0893 Caches characterised by their organisation or structure
    • G06F 12/0895 Caches characterised by their organisation or structure of parts of caches, e.g. directory or tag array
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radio Relay Systems (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides a method for accelerating the storage of multi-path satellite payload files, using a two-level cache and a multithreaded pipeline. In the payload data receiving thread, the first-level cache receives each payload data packet from the external interface without distinction, using a circular queue coordinated with counting semaphores and controlled by read and write pointers. In the payload data processing thread, the second-level cache reads and writes a double cache alternately for each path of payload data, with a state machine controlling the empty, receiving, and storing states of each cache. In the payload data storage thread, the payload data in any cache in the storing state is written to a file in memory-page-sized units. The first-level cache rapidly receives external multi-path payload data, and the second-level double-cache ping-pong operation accelerates the storage of each path of payload file data, making full use of processor resources and accelerating the storage of multi-path satellite payload files.

Description

Method for accelerating storage of multi-path satellite load files
Technical Field
The invention relates to a method for accelerating storage of multi-path satellite load files.
Background
With the rapid development of China's aerospace technology, satellite applications have become increasingly broad, and the demands on on-board processing capability correspondingly higher. The types and volume of payload data to be processed on board are also growing rapidly, and large amounts of payload data must be received, processed, and stored on the satellite in a timely manner. The traditional satellite approach writes received data into memory sequentially, then reads it out sequentially and transmits it to the ground for analysis. Retaining this traditional receive-and-store approach would not only multiply development difficulty and sharply increase development cost, but would also prevent flexible on-board processing of the data.
To meet the requirements of on-board data processing and storage, an embedded file system is adopted to manage them; such a file system is an indispensable component of operating-system support. However, managing data through a file system reduces read and write speed: to allow flexible access, the file system no longer reads and writes the Flash memory sequentially as the traditional approach does, but instead applies an algorithm to maintain wear leveling across the memory.
To improve memory read and write speed, the embedded file system adds caching mechanisms at the driver layer so that the memory is read and written in page-sized units, which increases read and write efficiency. Even so, a considerable loss of read and write speed remains once the file system manages the data.
Disclosure of Invention
The invention aims to provide a method for accelerating storage of multi-path satellite load files.
To solve the above problems, the present invention provides a method for accelerating the storage of multi-path satellite payload files, comprising:
performing the payload data storage acceleration process with a two-level cache and a multithreaded pipeline, wherein the first-level cache receives each path of payload data without distinction, using a circular cache queue coordinated with counting semaphores; the second-level cache provides a double cache for each path of payload data, read and written alternately in a ping-pong manner under the control of a cache state machine, with one cache receiving the payload data parsed from the first-level cache while the data in the other cache is written to a file for storage.
Further, in the above method, the multithreaded pipeline comprises:
a payload data receiving thread, responsible for receiving each path of payload data from the external interface without distinction;
a payload data processing thread, responsible for parsing each path of received on-board payload data;
and a payload data storage thread, responsible for writing the processed data into files for storage, forming files that meet the specified requirements.
Further, in the above method, the circular cache queue has read and write pointers, and corresponding read and write counting semaphores are provided; the read and write pointers are initialized to the start of the circular cache queue, the read counting semaphore is initialized to zero, and the write counting semaphore is initialized to the number of payload data packets the circular cache queue can hold.
Further, in the above method, the coordinated use of the circular cache queue and the counting semaphores further comprises:
between the payload data receiving thread and the payload data processing thread, the read counting semaphore tracks the used space of the circular cache queue, the write counting semaphore tracks its unused space, the read pointer marks the address of the payload data in the circular cache waiting to be processed, and the write pointer marks the address at which external data is written into the circular cache queue.
Further, in the above method, the coordinated use of the circular cache queue and the counting semaphores further comprises:
in the payload data receiving thread, after one write counting semaphore is acquired and one packet of payload data is successfully written into the circular cache queue, the write pointer is updated accordingly and one read counting semaphore is released;
similarly, in the payload data processing thread, after one read counting semaphore is acquired and one packet of payload data is successfully read out of the circular cache queue, the read pointer is updated accordingly and one write counting semaphore is released.
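As an illustration, the receive/process hand-off described above can be sketched with a circular queue and two counting semaphores. This is a minimal, hypothetical sketch: the queue capacity and all names are assumptions, not taken from the patent.

```python
import threading

QUEUE_SLOTS = 8  # assumed number of payload packets the circular queue holds

ring = [None] * QUEUE_SLOTS
read_ptr = 0                                  # initialized to the queue start
write_ptr = 0                                 # initialized to the queue start
read_sem = threading.Semaphore(0)             # read count semaphore: used slots
write_sem = threading.Semaphore(QUEUE_SLOTS)  # write count semaphore: free slots

def enqueue(packet):
    """Receiving thread: acquire a write semaphore, store, release a read semaphore."""
    global write_ptr
    write_sem.acquire()
    ring[write_ptr] = packet
    write_ptr = (write_ptr + 1) % QUEUE_SLOTS  # update the write pointer
    read_sem.release()

def dequeue():
    """Processing thread: acquire a read semaphore, take, release a write semaphore."""
    global read_ptr
    read_sem.acquire()
    packet = ring[read_ptr]
    read_ptr = (read_ptr + 1) % QUEUE_SLOTS    # update the read pointer
    write_sem.release()
    return packet
```

The semaphores block the receiving thread when the queue is full and the processing thread when it is empty, which is exactly the back-pressure the read and write counting semaphores provide.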
Further, in the above method, each path of payload data is provided with a double cache, and the method further comprises:
both caches of the double cache are initialized to the empty state; when data begins to be stored into a cache, its state is updated to the receiving state;
when the data written into a cache reaches the cache's upper limit, its state is updated to the storing state;
and when all the data in a cache has been written into a file for storage, its state is updated back to the empty state.
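The three-state life cycle of each half of a double cache might be modeled as follows. This is an illustrative sketch: the state names mirror the text, while the byte limit and class names are assumptions.

```python
from enum import Enum

class CacheState(Enum):
    EMPTY = "empty"
    RECEIVING = "receiving"
    STORING = "storing"

class HalfCache:
    def __init__(self, limit=4096):      # assumed per-cache upper limit (bytes)
        self.buf = bytearray()
        self.limit = limit
        self.state = CacheState.EMPTY    # caches are initialized to empty

    def put(self, chunk: bytes):
        self.state = CacheState.RECEIVING     # data being stored -> receiving
        self.buf += chunk
        if len(self.buf) >= self.limit:       # upper limit reached
            self.state = CacheState.STORING   # -> storing state

    def flush(self, write):
        write(bytes(self.buf))                # all cached data written out
        self.buf.clear()
        self.state = CacheState.EMPTY         # -> back to empty
```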
Further, in the above method, each path of payload data is provided with a double cache, and the method further comprises:
each payload data type is provided with a corresponding double cache combined with state-machine control, wherein,
in the payload data processing thread, valid data of the corresponding type is stored into a cache of that payload's double cache whose state is empty or receiving;
and in the payload data storage thread, the data in any cache in the storing state is written into the specified file for storage.
Further, in the above method, each path of payload data is provided with a double cache, and the method further comprises:
in the payload data processing thread, after a payload data packet is taken from the circular cache queue, the extracted valid payload data is stored, according to the states of the two caches in the corresponding double cache, preferentially into a cache in the receiving state;
if no cache is in the receiving state, it is checked whether cache 1 of the double cache is empty, and if so cache 1 is selected to store the valid data; otherwise it is checked whether cache 2 is empty, and if so cache 2 is selected to store the valid payload data;
and if both caches of the double cache corresponding to the payload data are in the storing state, the payload data processing thread is blocked and the return value is set to an information code indicating that the second-level cache is busy.
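The selection priority above (a receiving cache first, then cache 1 if empty, then cache 2, otherwise busy) can be written as a small function. The `BUSY` code is an assumed stand-in for the "second-level cache busy" information code, which the patent does not specify.

```python
from enum import Enum

class State(Enum):
    EMPTY, RECEIVING, STORING = range(3)

BUSY = -1  # assumed information code: second-level cache is busy

def select_cache(states):
    """Return the index of the half-cache that should receive valid data."""
    if State.RECEIVING in states:        # prefer a cache already receiving
        return states.index(State.RECEIVING)
    if states[0] is State.EMPTY:         # otherwise try cache 1
        return 0
    if states[1] is State.EMPTY:         # then cache 2
        return 1
    return BUSY                          # both storing: block, report busy
```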
Further, in the above method, a two-level cache and a multithreaded pipeline are used to accelerate the storage of payload data, comprising:
step one: in the payload data receiving thread, when payload data arrives, a write counting semaphore is acquired, the data packet is stored in the memory pointed to by the write pointer of the circular cache queue, the write pointer is updated after the data is written, and a read counting semaphore is released;
step two: in the payload data processing thread, when a read counting semaphore is acquired, a data packet is taken from the memory address pointed to by the read pointer of the circular cache queue; once the payload data is successfully processed, the read pointer is updated and a write counting semaphore is released;
step three: in the payload data processing thread, after the payload data packet is taken from the first-level circular cache queue, the valid data is extracted by parsing the data characteristics and stored into whichever cache of the corresponding double cache is in the receiving or empty state;
step four: in the payload data processing thread, when the payload data stored in a receiving-state cache reaches the cache's upper limit, that cache's state is changed to storing, and the payload data type and the number of the full cache are sent to the payload data storage thread over a Unix domain socket;
step five: in the payload data storage thread, on receiving the message sent over the processing thread's Unix domain socket, the storage thread wakes and parses the received message to extract the payload data type and the corresponding cache address;
step six: in the payload data storage thread, it is checked whether a file is already open for that payload; if not, a new file is created according to the payload's attributes, the data obtained from the cache is written into the file in memory-page-sized units, and the cache's state is changed to empty once the write completes;
and step seven: in the payload data storage thread, after the data in the payload's double cache has been written into the file, it is checked whether the file's current attributes meet the conditions for closing it; if so, the open file is closed, forming a file that meets the specified requirements.
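Steps four through six can be sketched end to end: the processing thread announces a full cache over a Unix domain socket, and the storage thread wakes, parses the message, and writes the cache out in page-sized chunks. Everything here (the two-byte message layout, the page size, the in-memory stand-in for the file) is an illustrative assumption, not the patent's actual format.

```python
import socket
import struct
import threading

PAGE_SIZE = 4096  # assumed memory page size used for file writes

# Hypothetical second-level cache: payload type 0, cache 0 holds 5000 bytes.
caches = {0: [bytes(5000), bytes(0)]}

# socketpair() stands in for the Unix domain socket between the two threads.
proc_end, store_end = socket.socketpair()

pages = []  # stand-in for the file: collects page-sized writes

def storage_thread():
    # Step five: wake on the socket message, extract type and cache number.
    ptype, cache_no = struct.unpack("BB", store_end.recv(2))
    buf = caches[ptype][cache_no]
    # Step six: write the cached data out in memory-page-sized chunks.
    for off in range(0, len(buf), PAGE_SIZE):
        pages.append(buf[off:off + PAGE_SIZE])

t = threading.Thread(target=storage_thread)
t.start()
# Step four: the processing thread reports that cache 0 of payload type 0 is full.
proc_end.send(struct.pack("BB", 0, 0))
t.join()
```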
Compared with the prior art, the invention accelerates the storage of multi-path satellite payload files through a two-level cache and a multithreaded pipeline. In the payload data receiving thread, the first-level cache receives each payload data packet from the external interface without distinction, using a circular queue coordinated with counting semaphores and controlled by read and write pointers. In the payload data processing thread, the second-level cache reads and writes a double cache alternately for each path of payload data, with a state machine controlling the empty, receiving, and storing states of each cache. In the payload data storage thread, the payload data in any cache in the storing state is written to a file in memory-page-sized units. The first-level cache rapidly receives external multi-path payload data, the second-level double-cache ping-pong operation accelerates the storage of each path of payload file data, processor resources are fully utilized, and the storage of multi-path satellite payload files is accelerated.
Drawings
FIG. 1 is a schematic flow chart of a method for accelerating storage of a multipath satellite payload file according to an embodiment of the present invention;
FIG. 2 is a flow chart of a dual cache process for obtaining data to be written according to an embodiment of the present invention;
FIG. 3 is a diagram of dual cache state transition, according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in FIGS. 1 to 3, the present invention provides a method for accelerating the storage of multi-path satellite payload files, comprising:
performing the payload data storage acceleration process with a two-level cache and a multithreaded pipeline, wherein the first-level cache receives each path of payload data without distinction, using a circular cache queue coordinated with counting semaphores; the second-level cache provides a double cache for each path of payload data, read and written alternately in a ping-pong manner under the control of a cache state machine, with one cache receiving the payload data parsed from the first-level cache while the data in the other cache is written to a file for storage.
The invention addresses the problem that, once a satellite platform adopts an embedded file system, the payload file storage rate can no longer meet the demands of on-board processing tasks; in application, it accelerates the storage of multi-path satellite payload files through a two-level cache and a multithreaded pipeline.
The method can thus solve the storage-rate problem introduced by the embedded file system, further improve the utilization of the satellite platform's CPU and memory, and enhance the satellite's task processing capability.
In an embodiment of the method for accelerating the storage of multi-path satellite payload files, the multithreaded pipeline comprises:
a payload data receiving thread, which receives each path of payload data from the external interface without distinction;
a payload data processing thread, which parses each path of received on-board payload data;
and a payload data storage thread, which writes the processed data into files for storage, forming files that meet the specified requirements.
The three threads operate as a pipeline, accelerating the storage of the satellite's multi-path payload files.
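A minimal skeleton of that three-stage pipeline can use bounded queues as stand-ins for the caches; all names, the packet contents, and the "parsing" step are illustrative assumptions.

```python
import queue
import threading

raw_q = queue.Queue(maxsize=8)     # hand-off: receiving -> processing
parsed_q = queue.Queue(maxsize=8)  # hand-off: processing -> storage
stored = []                        # stand-in for the file being written

def receive_thread(packets):
    for p in packets:              # receive each packet without distinction
        raw_q.put(p)
    raw_q.put(None)                # sentinel: end of data

def process_thread():
    while (p := raw_q.get()) is not None:
        parsed_q.put(p.upper())    # stand-in for on-board parsing
    parsed_q.put(None)

def store_thread():
    while (p := parsed_q.get()) is not None:
        stored.append(p)           # stand-in for writing to a file

threads = [threading.Thread(target=receive_thread, args=(["pkt_a", "pkt_b"],)),
           threading.Thread(target=process_thread),
           threading.Thread(target=store_thread)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because every stage blocks on its input queue, the three stages overlap in time exactly as a pipeline should: the receiver can accept new packets while earlier ones are being parsed and stored.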
In an embodiment of the method, the first-level cache combines a circular cache queue with counting semaphores. The circular cache queue has read and write pointers, and corresponding read and write counting semaphores are provided; the read and write pointers are initialized to the start of the circular cache queue, the read counting semaphore is initialized to zero, and the write counting semaphore is initialized to the number of payload data packets the circular cache queue can hold.
In an embodiment of the method, the coordinated use of the circular cache queue and the counting semaphores further comprises:
between the payload data receiving thread and the payload data processing thread, the read counting semaphore tracks the used space of the circular cache queue, the write counting semaphore tracks its unused space, the read pointer marks the address of the payload data in the circular cache waiting to be processed, and the write pointer marks the address at which external data is written into the circular cache queue.
In an embodiment of the method, the coordinated use of the circular cache queue and the counting semaphores further comprises:
in the payload data receiving thread, after one write counting semaphore is acquired and one packet of payload data is successfully written into the circular cache queue, the write pointer is updated accordingly and one read counting semaphore is released;
similarly, in the payload data processing thread, after one read counting semaphore is acquired and one packet of payload data is successfully read out of the circular cache queue, the read pointer is updated accordingly and one write counting semaphore is released.
In an embodiment of the method, each path of payload data is provided with a double cache, and the method further comprises:
both caches of the double cache are initialized to the empty state; when data begins to be stored into a cache, its state is updated to the receiving state;
when the data written into a cache reaches the cache's upper limit, its state is updated to the storing state;
and when all the data in a cache has been written into a file for storage, its state is updated back to the empty state.
In an embodiment of the method, each path of payload data is provided with a double cache, and the method further comprises:
each payload data type is provided with a corresponding double cache combined with state-machine control, wherein,
in the payload data processing thread, valid data of the corresponding type is stored into a cache of that payload's double cache whose state is empty or receiving;
and in the payload data storage thread, the data in any cache in the storing state is written into the specified file for storage.
In an embodiment of the method, each path of payload data is provided with a double cache, and the method further comprises:
each path of payload data has a double cache with three states — empty, receiving, and storing — all initialized to empty, wherein,
in the payload data processing thread, after a payload data packet is taken from the circular cache queue, the extracted valid payload data is stored, according to the states of the two caches in the corresponding double cache, preferentially into a cache in the receiving state;
if no cache is in the receiving state, it is checked whether cache 1 of the double cache is empty, and if so cache 1 is selected to store the valid data; otherwise it is checked whether cache 2 is empty, and if so cache 2 is selected to store the valid payload data;
and if both caches of the double cache corresponding to the payload data are in the storing state, the payload data processing thread is blocked and the return value is set to an information code indicating that the second-level cache is busy.
In an embodiment of the method, a two-level cache and a multithreaded pipeline are used to accelerate the storage of payload data, comprising:
step one: in the payload data receiving thread, when payload data arrives, a write counting semaphore is acquired, the data packet is stored in the memory pointed to by the write pointer of the circular cache queue, the write pointer is updated after the data is written, and a read counting semaphore is released;
step two: in the payload data processing thread, when a read counting semaphore is acquired, a data packet is taken from the memory address pointed to by the read pointer of the circular cache queue; once the payload data is successfully processed, the read pointer is updated and a write counting semaphore is released;
step three: in the payload data processing thread, after the payload data packet is taken from the first-level circular cache queue, the valid data is extracted by parsing the data characteristics and stored into whichever cache of the corresponding double cache is in the receiving or empty state;
step four: in the payload data processing thread, when the payload data stored in a receiving-state cache reaches the cache's upper limit, that cache's state is changed to storing, and the payload data type and the number of the full cache are sent to the payload data storage thread over a Unix domain socket;
step five: in the payload data storage thread, on receiving the message sent over the processing thread's Unix domain socket, the storage thread wakes and parses the received message to extract the payload data type and the corresponding cache address;
step six: in the payload data storage thread, it is checked whether a file is already open for that payload; if not, a new file is created according to the payload's attributes, the data obtained from the cache is written into the file in memory-page-sized units, and the cache's state is changed to empty once the write completes;
and step seven: in the payload data storage thread, after the data in the payload's double cache has been written into the file, it is checked whether the file's current attributes meet the conditions for closing it; if so, the open file is closed, forming a file that meets the specified requirements.
The design principle and approach of the invention comprise three main parts:
(1) Multithreaded pipeline mode: the payload data receiving thread receives each path of on-board payload data from the external interface without distinction, the payload data processing thread parses each path of received payload data, and the payload data storage thread writes the processed data into files for storage; the three threads operate as a pipeline, accelerating the storage of on-board payload files.
(2) Coordinated use of the circular cache queue and counting semaphores: the read counting semaphore tracks the used space in the circular cache queue and the write counting semaphore tracks its unused space; the read pointer marks the address of the payload data waiting to be processed, and the write pointer marks the address at which external data is written into the queue.
(3) Double-cache ping-pong operation for each path of payload data: in the payload data processing thread, valid data of the corresponding type is stored into a cache of that payload's double cache whose state is empty or receiving; in the payload data storage thread, the data in any cache in the storing state is written to storage.
In summary, the invention addresses the situation in which, after a satellite platform adopts a file system, the payload file rate no longer allows satellite tasks to be processed in real time, and accelerates the storage of multi-path satellite payload files through a two-level cache and a multithreaded pipeline. Compared with traditional data storage, this approach makes full use of processor resources, improves the storage rate of satellite payload files, and has strong engineering practical value.
The embodiments in this description are described in a progressive manner; each embodiment focuses on its differences from the others, and for the parts that are the same or similar, the embodiments may be referred to one another.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. A method for accelerating the storage of multi-path satellite payload files, characterized by comprising:
performing the payload data storage acceleration process with a two-level cache and a multithreaded pipeline, wherein the first-level cache receives each path of payload data without distinction, using a circular cache queue coordinated with counting semaphores; the second-level cache provides a double cache for each path of payload data, read and written alternately in a ping-pong manner under the control of a cache state machine, with one cache receiving the payload data parsed from the first-level cache while the data in the other cache is written to a file for storage.
2. The method for accelerating the storage of multi-path satellite payload files of claim 1, wherein the multithreaded pipeline comprises:
a payload data receiving thread, responsible for receiving each path of payload data from the external interface without distinction;
a payload data processing thread, responsible for parsing each path of received on-board payload data;
and a payload data storage thread, responsible for writing the processed data into files for storage, forming files that meet the specified requirements.
3. The method for accelerating the storage of multi-path satellite payload files of claim 1, wherein the circular cache queue has read and write pointers and corresponding read and write counting semaphores are provided, the read and write pointers are initialized to the start of the circular cache queue, the read counting semaphore is initialized to zero, and the write counting semaphore is initialized to the number of payload data packets the circular cache queue can hold.
4. The method for accelerating the storage of multi-path satellite payload files of claim 1, wherein the coordinated use of the circular cache queue and the counting semaphores further comprises:
between the payload data receiving thread and the payload data processing thread, the read counting semaphore tracks the used space of the circular cache queue, the write counting semaphore tracks its unused space, the read pointer marks the address of the payload data in the circular cache waiting to be processed, and the write pointer marks the address at which external data is written into the circular cache queue.
5. The method for accelerating the storage of multi-path satellite load files as recited in claim 1, wherein coordinating the circular cache queue with the counting semaphores further comprises:
in the load data receiving thread, after a write counting semaphore is acquired, a packet of load data is written into the circular cache queue, the write pointer is updated, and a read counting semaphore is released correspondingly;
similarly, in the load data processing thread, after a read counting semaphore is acquired, a packet of load data is read out of the circular cache queue, the read pointer is updated, and a write counting semaphore is released correspondingly.
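The acquire-one-side, release-the-other protocol of claims 4 and 5 is the classic counted producer/consumer scheme. A minimal self-contained sketch (class and method names are assumptions, not the patent's):

```python
import threading

class RingQueue:
    """Circular cache queue: the write semaphore counts free slots, the
    read semaphore counts filled slots; acquiring one side releases the
    other after the corresponding pointer has been advanced."""

    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.capacity = capacity
        self.read_ptr = self.write_ptr = 0
        self.read_sem = threading.Semaphore(0)          # filled slots
        self.write_sem = threading.Semaphore(capacity)  # free slots

    def put(self, packet):
        # load data receiving thread side
        self.write_sem.acquire()            # wait for a free slot
        self.slots[self.write_ptr] = packet
        self.write_ptr = (self.write_ptr + 1) % self.capacity
        self.read_sem.release()             # one more packet readable

    def get(self):
        # load data processing thread side
        self.read_sem.acquire()             # wait for a filled slot
        packet = self.slots[self.read_ptr]
        self.read_ptr = (self.read_ptr + 1) % self.capacity
        self.write_sem.release()            # one more slot free
        return packet
```

Because both semaphores bound the pointers, the queue never overruns in either direction even when the two threads run at different rates.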
6. The method for accelerating the storage of multi-path satellite load files as recited in claim 1, wherein providing each path of load data with a double cache further comprises:
both cache states in the double cache are initialized to the empty state, and when data begins to be stored in a cache, its state is updated to the receiving state;
when the data written into the cache reaches the upper limit of the cache, its state is updated to the storage state;
and when all the data in the cache has been written into a file for storage, its state is updated back to the empty state.
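The empty → receiving → storage → empty cycle above is a three-state machine per cache half. A minimal sketch (names and the list-based "file" sink are assumptions for illustration):

```python
from enum import Enum

class BufState(Enum):
    EMPTY = 0       # initialized state; nothing buffered
    RECEIVING = 1   # data is being stored into this cache
    STORING = 2     # cache is full; being written out to file

class PayloadBuffer:
    """One half of a ping-pong double cache with the claim's transitions."""

    def __init__(self, limit):
        self.limit = limit      # upper limit of the cache
        self.data = []
        self.state = BufState.EMPTY

    def append(self, chunk):
        # EMPTY -> RECEIVING on first write; RECEIVING -> STORING when full
        assert self.state in (BufState.EMPTY, BufState.RECEIVING)
        self.state = BufState.RECEIVING
        self.data.append(chunk)
        if len(self.data) >= self.limit:
            self.state = BufState.STORING

    def flush(self, sink):
        # STORING -> EMPTY once everything has been written out
        assert self.state is BufState.STORING
        sink.extend(self.data)
        self.data.clear()
        self.state = BufState.EMPTY
```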
7. The method for accelerating the storage of multi-path satellite load files as recited in claim 1, wherein providing each path of load data with a double cache further comprises:
each type of load data is provided with a corresponding double cache combined with state machine control, wherein,
in the load data processing thread, the valid data of the corresponding type is stored into a cache whose state is empty or receiving;
and in the load data storage thread, the data in a cache whose state is the storage state is written into the specified file for storage.
8. The method for accelerating the storage of multi-path satellite load files as recited in claim 1, wherein providing each path of load data with a double cache further comprises:
in the load data processing thread, after a load data packet is obtained from the circular cache queue, a cache in the receiving state is preferentially selected, according to the state of each cache in the corresponding double cache, to store the extracted valid load data;
if no cache is in the receiving state, whether cache 1 of the double cache is in the empty state is determined: if so, it is selected to store the valid data; otherwise whether cache 2 is in the empty state is determined and, if so, cache 2 is selected to store the valid load data;
and if both caches of the double cache corresponding to the load data are in the storage state, the load data processing thread is blocked and the return value is set to an information code indicating that the second-level cache is busy.
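The selection order in claim 8 (receiving first, then cache 1 if empty, then cache 2 if empty, else busy) reduces to a small pure function. A sketch with assumed names; returning `None` stands in for the "second-level cache busy" information code on which the processing thread would block:

```python
class Buf:
    """Minimal stand-in for one cache half; state is a plain string here."""
    def __init__(self, state):
        self.state = state

def select_buffer(pair):
    # 1) prefer a cache that is already RECEIVING
    for buf in pair:
        if buf.state == "RECEIVING":
            return buf
    # 2) otherwise cache 1 if EMPTY, then cache 2 if EMPTY
    if pair[0].state == "EMPTY":
        return pair[0]
    if pair[1].state == "EMPTY":
        return pair[1]
    # 3) both caches STORING: second-level cache is busy
    return None
```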
9. The method for accelerating the storage of multi-path satellite load files according to claim 1, wherein the process of accelerating load data storage with the two-level cache and multithreaded pipeline processing method comprises the following steps:
step one, in the load data receiving thread, when load data arrives, a write counting semaphore is acquired, the data packet is stored in the memory pointed to by the write pointer of the circular cache queue, the write pointer is updated after the data is written, and a read counting semaphore is released at the same time;
step two, in the load data processing thread, when a read counting semaphore is acquired, a data packet is taken out of the memory address pointed to by the read pointer of the circular cache queue; once the load data is successfully processed, the read pointer is updated and a write counting semaphore is released at the same time;
step three, in the load data processing thread, after the load data packet is taken out of the circular cache queue of the first-level cache, valid data is extracted by parsing the data characteristics and stored into the cache, within the double cache corresponding to the load data, whose state is receiving or empty;
step four, in the load data processing thread, when the load data stored in a cache whose state is receiving reaches the upper limit of the cache, the state of that cache is changed to the storage state, and the load data type and the number of the full cache are sent to the load data storage thread through unix domain socket communication;
step five, in the load data storage thread, after the message sent over the unix domain socket by the load data processing thread is received, the storage thread is awakened and parses the received message to extract the load data type and the corresponding cache address;
step six, in the load data storage thread, whether the corresponding load data already has an open file is determined; if not, a new file is created according to the attributes of the corresponding load data; the data obtained from the cache is then written into the file in units of the memory page size for storage, and the state of the cache is changed to the empty state after the write completes;
and step seven, in the load data storage thread, after the data in the double cache of the corresponding load has been written into the file for storage, whether the current attributes of the file satisfy the conditions for closing the file is determined; if so, the open file is closed, forming a file that meets the specified requirements.
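Steps four and five hinge on the unix domain socket hand-off that wakes the storage thread. A minimal sketch of that wake-up alone, under stated assumptions: the 2-byte message layout (payload-type id, cache number) is hypothetical, since the text does not specify the on-wire format, and `socketpair` with `AF_UNIX` assumes a Unix-like host.

```python
import socket
import struct
import threading

def notify(sock, payload_type, cache_no):
    # processing thread side (step four): announce which cache filled up
    sock.sendall(struct.pack("BB", payload_type, cache_no))

def storage_thread(sock, results):
    # storage thread side (step five): blocks until the notification arrives,
    # then parses out the load data type and cache number
    msg = sock.recv(2)
    payload_type, cache_no = struct.unpack("BB", msg)
    results.append((payload_type, cache_no))

# connected pair of unix domain sockets shared by the two threads
proc_end, store_end = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
results = []
t = threading.Thread(target=storage_thread, args=(store_end, results))
t.start()
notify(proc_end, payload_type=3, cache_no=1)  # "cache 1 of payload type 3 is full"
t.join()
```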
CN202010008656.3A 2020-01-02 2020-01-02 Method for accelerating storage of multi-path on-board load file Active CN111209228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010008656.3A CN111209228B (en) 2020-01-02 2020-01-02 Method for accelerating storage of multi-path on-board load file

Publications (2)

Publication Number Publication Date
CN111209228A true CN111209228A (en) 2020-05-29
CN111209228B CN111209228B (en) 2023-05-26

Family

ID=70788407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010008656.3A Active CN111209228B (en) 2020-01-02 2020-01-02 Method for accelerating storage of multi-path on-board load file

Country Status (1)

Country Link
CN (1) CN111209228B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107172037A * 2017-05-11 2017-09-15 East China Normal University Real-time packet-splitting and parsing method for a multi-path, multi-channel high-speed data stream
CN110347369A * 2019-06-05 2019-10-18 Tianjin University of Technology and Education (China Vocational Training Instructor Training Center) Multi-cache multithreaded data processing method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yao Shun, Ma Xudong: "Design and Implementation of Real-Time Multi-Task Software for an Industrial Robot Controller" *
Dong Zhenxing et al.: "Throughput Bottleneck of Spaceborne Memory and a High-Speed Parallel Caching Mechanism" *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111859458A (en) * 2020-07-31 2020-10-30 北京无线电测量研究所 Data updating recording method, system, medium and equipment
CN112002115A (en) * 2020-08-05 2020-11-27 中车工业研究院有限公司 Data acquisition method and data processor
CN112231246A (en) * 2020-10-31 2021-01-15 王志平 Method for realizing processor cache structure
CN112702418A (en) * 2020-12-21 2021-04-23 潍柴动力股份有限公司 Double-cache data downloading control method and device and vehicle
CN113014308A (en) * 2021-02-23 2021-06-22 湖南斯北图科技有限公司 Satellite communication high-capacity channel parallel Internet of things data receiving method
CN113611102A (en) * 2021-07-30 2021-11-05 中国科学院空天信息创新研究院 Multi-channel radar echo signal transmission method and system based on FPGA
CN113885811A (en) * 2021-10-19 2022-01-04 展讯通信(天津)有限公司 Data receiving method, device, chip and electronic equipment
CN113885811B (en) * 2021-10-19 2023-09-19 展讯通信(天津)有限公司 Data receiving method and device, chip and electronic equipment

Also Published As

Publication number Publication date
CN111209228B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
CN111209228A (en) Method for accelerating storage of multi-path satellite load files
US10693787B2 (en) Throttling for bandwidth imbalanced data transfers
EP1854015B1 (en) Packet processor with wide register set architecture
KR100932038B1 (en) Message Queuing System for Parallel Integrated Circuit Architecture and Its Operation Method
US10235219B2 (en) Backward compatibility by algorithm matching, disabling features, or throttling performance
CN110134439B (en) Lock-free data structure construction method and data writing and reading methods
US20090133023A1 (en) High Performance Queue Implementations in Multiprocessor Systems
US20080320478A1 (en) Age matrix for queue dispatch order
CN111124641B (en) Data processing method and system using multithreading
US20150040140A1 (en) Consuming Ordered Streams of Messages in a Message Oriented Middleware
CN113688099B (en) SPDK-based database storage engine acceleration method and system
US8600990B2 (en) Interacting methods of data extraction
Jeong et al. CASINO core microarchitecture: Generating out-of-order schedules using cascaded in-order scheduling windows
EP4363991A1 (en) Providing atomicity for complex operations using near-memory computing
CN106445472B (en) A kind of character manipulation accelerated method, device, chip, processor
US7130990B2 (en) Efficient instruction scheduling with lossy tracking of scheduling information
CN104049947A (en) Dynamic Rename Based Register Reconfiguration Of A Vector Register File
CN117331714A (en) Multi-thread inter-high concurrency real-time communication method and device based on shared linked list
US10198784B2 (en) Capturing commands in a multi-engine graphics processing unit
CN112948136A (en) Method for implementing asynchronous log record of embedded operating system
US11119766B2 (en) Hardware accelerator with locally stored macros
CN110515659A (en) Atomic instruction execution method and device
US6584518B1 (en) Cycle saving technique for managing linked lists
US20140068173A1 (en) Content addressable memory scheduling
CN111198659B (en) Concurrent I/O stream model identification method and system based on multi-sliding window implementation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant