CN106557358B - Data storage method and device based on dual-core processor - Google Patents


Info

Publication number
CN106557358B
Authority
CN
China
Prior art keywords
core processor
data
data message
utilization rate
resource utilization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510634222.3A
Other languages
Chinese (zh)
Other versions
CN106557358A (en)
Inventor
苏盘社
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Armyfly Technology Co Ltd
Original Assignee
Beijing Armyfly Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Armyfly Technology Co Ltd filed Critical Beijing Armyfly Technology Co Ltd
Priority to CN201510634222.3A priority Critical patent/CN106557358B/en
Publication of CN106557358A publication Critical patent/CN106557358A/en
Application granted granted Critical
Publication of CN106557358B publication Critical patent/CN106557358B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The embodiment of the invention provides a data storage method and device based on a dual-core processor, the method being suitable for a switching device that comprises the dual-core processor. The first core processor receives the data message, generates an interrupt, and stores the data message in a packet receiving cache queue; the second core processor takes the data message out of the packet receiving cache queue, analyzes it, and stores the analyzed data message in a data cache queue, thereby completing storage of the data message. In the embodiment of the invention, data reception and data analysis are handled by different core processors, so the interrupts generated by data reception are effectively prevented from affecting data storage, which improves storage efficiency and ensures the reliability of the system.

Description

Data storage method and device based on dual-core processor
Technical Field
The invention relates to the technical field of data storage, in particular to a data storage method and device based on a dual-core processor.
Background
The conventional storage method works as follows: the application program first starts a DMA transfer; after the driver receives the DMA-completion interrupt, the driver's interrupt handler releases a semaphore to notify the application program; the application program then calls the read method, the driver copies a large amount of data from kernel space to the user-space buffer by calling the copy_to_user function, and finally the application program writes the data to the hard disk and starts the next DMA transfer.
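For orientation, a minimal Linux-driver-style sketch of this conventional read path is shown below. copy_to_user is the kernel function named in the text; the completion and interrupt primitives stand in for the "signal" it describes, and the buffer size, the capture_read and start_next_dma names, and the surrounding module plumbing are illustrative assumptions rather than part of the patent.

#include <linux/completion.h>
#include <linux/interrupt.h>
#include <linux/uaccess.h>
#include <linux/fs.h>

#define DMA_BUF_SIZE 4096                 /* illustrative buffer size            */

static DECLARE_COMPLETION(dma_done);
static char dev_buf[DMA_BUF_SIZE];        /* kernel-space buffer filled by DMA   */

/* Hypothetical helper: re-arm the DMA controller for the next transfer. */
static void start_next_dma(void) { }

/* Driver ISR: on DMA completion, release the "signal" that wakes the app. */
static irqreturn_t dma_irq_handler(int irq, void *dev_id)
{
    complete(&dma_done);
    return IRQ_HANDLED;
}

/* Driver read method: the application blocks here, then the data is copied
 * from kernel space to the user-space buffer with copy_to_user(); the
 * application subsequently writes it to the hard disk in user space. */
static ssize_t capture_read(struct file *f, char __user *ubuf,
                            size_t len, loff_t *off)
{
    if (len > DMA_BUF_SIZE)
        len = DMA_BUF_SIZE;
    wait_for_completion(&dma_done);       /* wait for the DMA-completion signal  */
    if (copy_to_user(ubuf, dev_buf, len)) /* kernel space -> user space copy     */
        return -EFAULT;
    start_next_dma();                     /* then the next DMA transfer begins   */
    return len;
}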
However, the whole process requires cooperation between the application program and the driver, and a large amount of data has to be copied from kernel space to user space. This forces frequent switching between kernel mode and user mode, occupies a large amount of system resources, and reduces the response speed of the operating system. In addition, many operations must be completed in the application program, which inevitably leads to a low and fluctuating storage speed.
With the development of storage technology, storage can now be completed entirely in kernel space, which avoids the excessive consumption of system resources caused by mode switching and improves the storage speed; nevertheless, resource consumption remains high. Dual-core storage technology provides more system resources: when a data message is received, it is distributed to a core for analysis and storage according to a configured load-sharing rule, which further improves storage efficiency.
In the actual data storage process, roughly 20% of the processor's system resources are spent on data reception and the associated interrupts, and roughly 50% on analyzing the data in the cache queue. Consequently, even with the dual-core processing method, when the core to which the data was distributed is analyzing and storing it, the arrival of a new data message interrupts the write operation in progress and degrades storage efficiency.
Therefore, even when a dual-core processor is used for data storage in the prior art, each core locally performs the reception, analysis and storage of the data allocated to it, so an interrupt is generated whenever new data is received, which affects storage efficiency. In particular, with high-throughput traffic such as gigabit-network traffic, frequent interrupts are generated in the cores, preventing the CPU from working normally, reducing data storage efficiency and failing to guarantee the reliability of the system.
Disclosure of Invention
In view of the above, the present invention has been made to provide a data storage method and apparatus based on a dual-core processor that overcome, or at least partially solve, the above-mentioned problems.
The embodiment of the invention provides a data storage method based on a dual-core processor, which is suitable for a switching device comprising the dual-core processor, and comprises the following steps:
receiving a data message through a receiving port, and sending the data message to a first core processor;
the first core processor receives the data message, generates an interrupt, and stores the data message in a packet receiving cache queue;
and the second core processor takes the data message out of the packet receiving cache queue, analyzes it, and stores the analyzed data message in the data cache queue, thereby completing storage of the data message.
In order to reduce the influence of frequent interruption of short data packets on data storage, improve storage efficiency, and further ensure system reliability, before sending the data packets to the first core processor in the embodiment of the present invention, the method further includes:
sending the data message to the FPGA;
the FPGA updates the number of the received data messages and judges whether the number of the updated data messages reaches a set first number threshold value;
when the number of the updated data messages reaches a set first number threshold value, sending the data messages to a first core processor, and clearing the number of the received data messages;
otherwise, the data message is stored locally.
In order to reduce the number of packet receiving interrupts, reduce the influence of frequent interrupts on storage efficiency, and further improve the reliability of the system, in the embodiment of the present invention the step in which the first core processor receives the data packet and generates the interrupt includes:
after the first core processor receives the data messages, updating the number of the received data messages; judging whether the number of the received data messages reaches a set second number threshold value or not; when the number of the received data messages reaches a set second number threshold value, generating interruption, and resetting the number of the received data messages; or
After receiving the data message, the first core processor judges whether other data messages are received within a set time length; and when determining that other data messages are not received within the set time length, generating interruption and resetting the number of the received data messages.
In order to reduce the influence of data analysis on data reception and further improve the reliability of the system, in the embodiment of the present invention, storing the analyzed data packet in the data cache queue to implement the storage of the data packet includes:
the second core processor stores the analyzed data message in the data cache queue and then itself stores the data message; or, alternatively,
and the second core processor stores the analyzed data message into a data cache queue, and the first core processor stores the data message.
In order to fully utilize dual-core system resources, improve data storage efficiency, and further improve system reliability, in the embodiment of the present invention, the storing the parsed data packet in the data cache queue to achieve the storage of the data packet further includes:
the second core processor stores the analyzed data message in the data cache queue, acquires the current resource utilization rate of the first core processor, and judges whether its own current resource utilization rate is greater than that of the first core processor; when its own utilization rate is greater, it notifies the first core processor to store the data message, and otherwise it stores the data message itself; or, alternatively,
the second core processor stores the analyzed data message in the data cache queue, and the first core processor acquires the current resource utilization rate of the second core processor and judges whether its own current resource utilization rate is greater than that of the second core processor; when its own utilization rate is greater, it notifies the second core processor to store the data message, and otherwise the first core processor stores the data message.
The embodiment of the invention provides a data storage device based on a dual-core processor, which comprises:
the receiving module is used for receiving a data message through a receiving port and sending the data message to the first core processor;
the first core processor is used for receiving the data message, generating an interrupt, and storing the data message in a packet receiving cache queue;
and the second core processor is used for taking the data message out of the packet receiving cache queue, analyzing it, and storing the analyzed data message in the data cache queue, thereby completing storage of the data message.
In order to reduce frequent interrupts generated by short data packets, improve storage efficiency, and further ensure reliability of the system, before sending the data packet to the first core processor, the apparatus further includes:
the FPGA is used for receiving the data message; updating the quantity of the received data messages, and judging whether the quantity of the updated data messages reaches a set first quantity threshold value; when the number of the updated data messages reaches a set first number threshold value, sending the data messages to a first core processor, and clearing the number of the received data messages; otherwise, the data message is stored locally.
In order to reduce the packet receiving interruption times, reduce the influence of frequent interruption on the storage efficiency, and further improve the reliability of the system, in the embodiment of the present invention, the first core processor is specifically configured to update the number of received data packets after receiving the data packets; judging whether the number of the received data messages reaches a set second number threshold value or not; when the number of the received data messages reaches a set second number threshold value, generating interruption, and resetting the number of the received data messages; or, after receiving the data message, judging whether other data messages are received within a set time length; and when determining that other data messages are not received within the set time length, generating interruption and resetting the number of the received data messages.
In order to reduce the influence of data analysis on data reception and further improve the reliability of the system, in the embodiment of the present invention, the second core processor is specifically configured to store the analyzed data packet in the data cache queue and store the data packet itself; or, alternatively,
the first core processor is specifically configured to store the data packet after the second core processor stores the parsed data packet in the data cache queue.
In order to fully utilize dual-core system resources, improve data storage efficiency and further improve system reliability, in the embodiment of the present invention, the second core processor is further specifically configured to store the analyzed data packet in the data cache queue, obtain the current resource utilization rate of the first core processor, determine whether its own current resource utilization rate is greater than that of the first core processor, notify the first core processor to store the data packet when its own utilization rate is greater, and otherwise store the data packet itself; or, alternatively,
the first core processor is further specifically configured to, after the second core processor stores the analyzed data packet in the data cache queue, obtain the current resource utilization rate of the second core processor, determine whether its own current resource utilization rate is greater than that of the second core processor, notify the second core processor to store the data packet when its own utilization rate is greater, and otherwise store the data packet itself.
The embodiment of the invention provides a data storage method and device based on a dual-core processor, the method being suitable for a switching device that comprises the dual-core processor. The first core processor receives the data message, generates an interrupt, and stores the data message in a packet receiving cache queue; the second core processor takes the data message out of the packet receiving cache queue, analyzes it, and stores the analyzed data message in the data cache queue, thereby completing storage of the data message. In the embodiment of the invention, data reception and data analysis are handled by different core processors, so the interrupts generated by data reception are effectively prevented from affecting data storage, which improves storage efficiency and ensures the reliability of the system.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a data storage process based on a dual-core processor according to an embodiment of the present invention;
fig. 2 is a specific process of data storage based on a dual-core processor according to an embodiment of the present invention;
fig. 3 is a specific process of data storage based on a dual-core processor according to another embodiment of the present invention;
fig. 4 is a schematic structural diagram of a data storage device based on a dual-core processor according to an embodiment of the present invention.
Detailed Description
In order to prevent interrupts generated by the reception of new data from affecting the data storage operation currently in progress, and thereby to improve data storage efficiency and ensure system reliability, the embodiment of the invention provides a data storage method and a data storage device based on a dual-core processor.
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The embodiments of the present invention will be described with reference to the accompanying drawings.
Fig. 1 is a data storage process based on a dual-core processor according to an embodiment of the present invention, where the process includes the following steps:
s101: the switching equipment receives the data message through the receiving port and sends the data message to the first core processor.
The switching equipment in the embodiment of the invention uses a dual-core processor. When it receives a data message sent from the network side, it generates a packet receiving interrupt for the received data message, stores the data message in the packet receiving cache queue, analyzes the data message in the cache queue, and stores the analyzed data message locally. One core of the dual-core processor is chosen arbitrarily to receive data messages; the chosen core is the first core processor.
S102: the first core processor receives the data message, generates an interrupt, and stores the data message in the packet receiving buffer queue.
The switching equipment designates one of the cores as the first core processor responsible for receiving data messages. After the data message sent by the receiving port arrives, the first core processor receives it through a packet receiving interrupt, stores it in the packet receiving cache queue, and waits for it to be analyzed.
S103: the second core processor takes the data message out of the packet receiving cache queue, analyzes it, and stores the analyzed data message in the data cache queue, thereby completing storage of the data message.
The switching equipment selects the other core, i.e. the one other than the first core processor, as the second core processor responsible for analyzing data messages. The second core processor obtains the data message from the packet receiving cache queue, analyzes it, and stores the analyzed data message in the data cache queue; a processor is then selected, according to the data storage rule, to perform the storage operation for the data message.
In the embodiment of the invention, by using switching equipment with a dual-core processor, data reception and data analysis are handled in different core processors, so the interrupts generated by data reception are effectively prevented from affecting data storage, which improves storage efficiency and ensures the reliability of the system.
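As a rough illustration of the S101 to S103 flow, the following sketch models the packet receiving cache queue as a single-producer/single-consumer ring buffer shared by the two cores: core 1's receive path only enqueues raw messages, while core 2 dequeues, analyzes, and hands the result to the data cache queue. All type and function names, the queue depth, and the lock-free ring itself are assumptions made for illustration; the patent does not prescribe a particular queue implementation.

#include <stdatomic.h>
#include <stddef.h>
#include <string.h>

#define RX_QUEUE_DEPTH 1024             /* illustrative sizes */
#define MAX_MSG_LEN    2048

typedef struct {                        /* raw message as received on core 1 */
    size_t        len;
    unsigned char data[MAX_MSG_LEN];
} raw_msg_t;

typedef struct {                        /* single-producer/single-consumer ring:
                                           core 1 writes, core 2 reads          */
    raw_msg_t     slot[RX_QUEUE_DEPTH];
    atomic_size_t head;                 /* next slot core 1 will fill  */
    atomic_size_t tail;                 /* next slot core 2 will drain */
} rx_queue_t;

/* Assumed helper: analyzes the message and appends the result to the data
 * cache queue that feeds the write operation (details outside this sketch). */
static void analyze_and_enqueue(raw_msg_t *m) { (void)m; }

/* Core 1: called from the packet receiving interrupt; only stores the message. */
int core1_rx_interrupt(rx_queue_t *q, const unsigned char *pkt, size_t len)
{
    size_t head = atomic_load_explicit(&q->head, memory_order_relaxed);
    size_t next = (head + 1) % RX_QUEUE_DEPTH;
    if (len > MAX_MSG_LEN)
        return -1;                      /* oversized message: reject            */
    if (next == atomic_load_explicit(&q->tail, memory_order_acquire))
        return -1;                      /* queue full: drop or apply back-pressure */
    q->slot[head].len = len;
    memcpy(q->slot[head].data, pkt, len);
    atomic_store_explicit(&q->head, next, memory_order_release);
    return 0;
}

/* Core 2: drains the packet receiving queue, analyzes, and stores the result. */
void core2_analyze_loop(rx_queue_t *q)
{
    for (;;) {
        size_t tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
        if (tail == atomic_load_explicit(&q->head, memory_order_acquire))
            continue;                   /* nothing to analyze yet               */
        analyze_and_enqueue(&q->slot[tail]);
        atomic_store_explicit(&q->tail, (tail + 1) % RX_QUEUE_DEPTH,
                              memory_order_release);
    }
}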
In addition, in order to reduce the influence of frequent interruption of short data packets on data storage, improve storage efficiency, and further ensure system reliability, in the embodiment of the present invention, before sending the data packets to the first core processor, the method further includes:
sending the data message to the FPGA;
the FPGA updates the number of the received data messages and judges whether the number of the updated data messages reaches a set first number threshold value;
when the number of the updated data messages reaches a set first number threshold value, sending the data messages to a first core processor, and clearing the number of the received data messages;
otherwise, the data message is stored locally.
Specifically, when a receiving port of the switching device receives a data message from the network side, the data message is sent to the FPGA. The FPGA is responsible for encapsulating data messages into a long packet and judging whether the number of data messages contained in the long packet being encapsulated has reached the set threshold. When the number of contained data messages reaches the set threshold, the encapsulated long packet is sent to the first core processor, the count of received data messages is cleared, and a new round of long-packet encapsulation begins; when the number of contained data messages has not yet reached the set threshold, the data of the message is appended to the long packet. The packet structure used during long-packet encapsulation is: packet magic word (0xaabbcdd), total packet length, number of contained messages, length of the first message, content of the first message, length of the second message, content of the second message, and so on until the end of the packet.
In the embodiment of the present invention, whether the long packet is complete may also be judged by whether the total packet length reaches a set threshold.
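A possible in-memory view of that long packet is sketched below. The field order follows the description (magic word, total packet length, message count, then per-message length/content pairs); the concrete field widths and the helper that tests the two forwarding conditions are assumptions, and the magic-word value is taken as printed in the description.

#include <stdint.h>

/* Illustrative layout of the FPGA "long packet"; field widths are assumptions,
 * the description only fixes the field order. */
#define LONG_PKT_MAGIC 0xaabbcddu       /* packet magic word as printed in the text */

struct long_pkt_header {
    uint32_t magic;                     /* packet magic word                     */
    uint32_t total_len;                 /* total length of the long packet       */
    uint32_t msg_count;                 /* number of data messages contained     */
    /* followed by msg_count repetitions of:
     *   uint16_t msg_len;              -- length of this data message
     *   uint8_t  msg_data[msg_len];    -- content of this data message
     * ...until the end of the packet. */
};

/* The FPGA forwards the long packet to the first core processor when either the
 * contained-message count reaches the first number threshold or, per the variant
 * above, the total packet length reaches a set length threshold. */
static int long_pkt_ready(uint32_t msg_count, uint32_t total_len,
                          uint32_t count_threshold, uint32_t len_threshold)
{
    return msg_count >= count_threshold || total_len >= len_threshold;
}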
In the embodiment of the invention, the FPGA encapsulates data messages into long packets, which reduces the number of packet receiving interrupts on the first core processor, reduces the influence of frequent interrupts on data storage, further improves storage efficiency, and ensures the reliability of the system.
Fig. 2 is a specific process of data storage based on a dual-core processor according to an embodiment of the present invention, where the process includes the following steps:
s201: the switching equipment receives the data message through the receiving port and sends the data message to the FPGA.
S202: the FPGA updates the number of the received data messages, judges whether the number of the updated data messages reaches a set first number threshold value, and if so, performs step S204, otherwise, performs step S203.
S203: and storing the data message locally.
S204: and when the updated number of the data messages reaches a set first number threshold, sending the data messages to a first core processor, and clearing the number of the received data messages.
S205: and the first core processor receives the data message to generate interruption and stores the data message into a packet receiving buffer queue.
S206: and the second core processor takes out the data message from the packet receiving cache queue and analyzes the data message, and stores the analyzed data message into the data cache queue to realize the storage of the data message.
In addition, in order to reduce the number of packet receiving interrupts, reduce the influence of frequent interrupts on storage efficiency, and further improve the reliability of the system, in the embodiment of the present invention the step in which the first core processor receives the data packet and generates the interrupt includes:
after the first core processor receives the data messages, updating the number of the received data messages; judging whether the number of the received data messages reaches a set second number threshold value or not; when the number of the received data messages reaches a set second number threshold value, generating interruption, and resetting the number of the received data messages; or
After receiving the data message, the first core processor judges whether other data messages are received within a set time length; and when determining that other data messages are not received within the set time length, generating interruption and resetting the number of the received data messages.
Specifically, the first core processor of the switching device configures its packet receiving interrupt so that the number of packet receiving interrupts is further reduced according to the set second number threshold. When the first core processor receives a data message, it performs interrupt aggregation: the switching device's counter of received data messages is incremented, and it is judged whether the incremented counter value reaches the set second number threshold. When the counter value reaches the set second number threshold, the interrupt-trigger condition is considered met, an interrupt is generated to perform packet receiving processing, and the counter is cleared.
To prevent received data messages from waiting indefinitely because no new data message arrives for a long time, the first core processor also updates a timer after receiving a data message and judges whether a new data message is received within the specified time. When a new data message is received within the specified time, the timer is restarted; otherwise the interrupt-trigger condition is considered met, an interrupt is generated to perform packet receiving processing, and the timer is restarted.
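The two interrupt-aggregation triggers just described, the count threshold and the idle timeout, can be sketched as follows; the structure and function names are illustrative assumptions.

#include <stdint.h>
#include <stdbool.h>

/* Interrupt-aggregation state on the first core processor (illustrative). */
struct irq_coalesce {
    uint32_t rx_count;          /* messages received since the last interrupt        */
    uint32_t count_threshold;   /* the "second number threshold"                      */
    uint64_t last_rx_time;      /* time the most recent message arrived               */
    uint64_t idle_timeout;      /* "set time length" with no further messages         */
};

/* Called for every data message delivered to the first core processor. */
bool on_message_received(struct irq_coalesce *c, uint64_t now)
{
    c->rx_count++;
    c->last_rx_time = now;
    if (c->rx_count >= c->count_threshold) {   /* trigger 1: count threshold reached  */
        c->rx_count = 0;                       /* clear the counter                   */
        return true;                           /* generate the packet receiving interrupt */
    }
    return false;
}

/* Called periodically (e.g. from a timer) to cover the idle-timeout trigger. */
bool on_timer_tick(struct irq_coalesce *c, uint64_t now)
{
    if (c->rx_count > 0 && now - c->last_rx_time >= c->idle_timeout) {
        c->rx_count = 0;                       /* trigger 2: no new message in time   */
        return true;
    }
    return false;
}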
In addition, in order to reduce the influence of data analysis on data reception and further improve the reliability of the system in the embodiment of the present invention, the storing the analyzed data packet in the data cache queue to realize the storage of the data packet includes:
the second core processor stores the analyzed data message in the data cache queue and stores the data message itself; or, alternatively,
and the second core processor stores the analyzed data message into a data cache queue, and the first core processor stores the data message.
Specifically, the second core processor is responsible for analyzing the received data messages; the analyzed data messages are stored in the data cache queue to await the write operation. The writing of the data messages in the data cache queue can be completed independently by either the first core processor or the second core processor, or shared between the two.
Fig. 3 is a specific process of data storage based on a dual-core processor according to an embodiment of the present invention, where the process includes the following steps:
s301: the switching equipment receives the data message through the receiving port and sends the data message to the FPGA.
S302: the FPGA updates the number of the received data messages, judges whether the number of the updated data messages reaches a set first number threshold value, and if so, performs the step S304, otherwise, performs the step S303.
S303: and storing the data message locally.
S304: and when the updated number of the data messages reaches a set first number threshold, sending the data messages to a first core processor, and clearing the number of the received data messages.
S305: and after the first core processor receives the data messages, updating the number of the received data messages.
S306: and judging whether the number of the received data messages reaches a set second number threshold, if so, performing step S307, otherwise, performing step S305.
S307: and when the number of the received data messages reaches the set second number threshold, generating interruption, storing the data messages into a packet buffer queue, and clearing the number of the received data messages.
S308: and the second core processor takes out the data message from the packet receiving cache queue and analyzes the data message, stores the analyzed data message into a data cache queue and stores the data message, or the first core processor stores the data message.
In the embodiment of the invention, the dual-core processor handles the reception and the storage of data messages in different cores, so packet receiving interrupts no longer affect data storage; in addition, long-packet encapsulation during packet reception and the interrupt aggregation operation further reduce the number of packet receiving interrupts, improving data storage efficiency and ensuring the reliability of the system.
In addition, in order to fully utilize dual-core system resources, improve data storage efficiency, and further improve system reliability in the embodiments of the present invention, the storing the parsed data packet in the data cache queue to implement the storing of the data packet includes:
the second core processor stores the analyzed data message in the data cache queue, acquires the current resource utilization rate of the first core processor, and judges whether its own current resource utilization rate is greater than that of the first core processor; when its own utilization rate is greater, it notifies the first core processor to store the data message, and otherwise it stores the data message itself; or, alternatively,
the second core processor stores the analyzed data message in the data cache queue, and the first core processor acquires the current resource utilization rate of the second core processor and judges whether its own current resource utilization rate is greater than that of the second core processor; when its own utilization rate is greater, it notifies the second core processor to store the data message, and otherwise the first core processor stores the data message.
Specifically, the second core processor stores the analyzed data message in the data cache queue to await the write operation. To make full use of system resources, the current resource utilization rates of the first and second core processors are compared periodically, and the core processor with the lower resource utilization rate is preferentially selected to perform the write operation.
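A minimal sketch of this selection rule, assuming the utilization of each core is available as a percentage (the text does not specify how it is measured):

enum core_id { CORE_FIRST = 0, CORE_SECOND = 1 };

/* Pick the core that performs the write: the one whose current resource
 * utilization is lower. The caller supplies the two utilization values,
 * e.g. in percent; how they are obtained is outside this sketch. */
enum core_id select_writing_core(unsigned util_first, unsigned util_second)
{
    return (util_first <= util_second) ? CORE_FIRST : CORE_SECOND;
}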
To save the system overhead caused by frequently comparing the current resource utilization rate of each core processor, the core processor that stores the data can instead be selected by preset weights: the resource utilization rates of the two core processors are measured over a set time period, data storage weights are assigned to the two core processors according to the measurement result, and the data messages in the data cache queue are distributed to the corresponding core processors for writing according to the assigned storage weights.
Take as an example 100 Mbytes of data stored by the dual-core processor within 1 second, with storage weights of 60% and 40% assigned to the first and second core processors respectively: the first 60 Mbytes of data in the data cache queue are sent to write-hard-disk cache queue 1 of the first core processor, the next 40 Mbytes are sent to write-hard-disk cache queue 2 of the second core processor, and the operation repeats in this manner; each individual data message is stored entirely by one core processor.
To further improve storage efficiency, the set weights can be adjusted dynamically according to the average resource utilization rate of each core processor: the average resource utilization rates are compared periodically, and when the difference between the average utilization rates of the two core processors reaches a set threshold, their storage weights are adjusted proportionally. Continuing the weight-distribution example above, the average resource utilization rates of the first and second core processors are obtained over a 10-second detection period; when the average utilization rate of the first core processor is greater than that of the second core processor and the difference exceeds the set threshold, the storage weights of the first and second core processors are adjusted to 55% and 45% respectively according to a set adjustment factor (5M), thereby achieving dynamic adjustment and improving data storage efficiency.
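The weight-based distribution and its periodic adjustment can be sketched as follows; the function and field names are assumptions, and the numbers in the comments follow the 60%/40% and 55%/45% example given above.

/* Illustrative storage-weight state for the two cores; weights sum to 100. */
struct store_weights {
    unsigned w_first;    /* share of buffered data written by the first core, in %  */
    unsigned w_second;   /* share written by the second core, in %                  */
};

/* Distribute a burst of buffered data (in Mbytes) according to the weights,
 * e.g. 100 Mbytes with 60%/40% -> 60 Mbytes to the first core's write-hard-disk
 * queue and 40 Mbytes to the second core's. Each individual data message still
 * goes wholly to one core. */
void distribute_burst(const struct store_weights *w, unsigned total_mbytes,
                      unsigned *to_first, unsigned *to_second)
{
    *to_first  = total_mbytes * w->w_first / 100;
    *to_second = total_mbytes - *to_first;
}

/* Periodic adjustment: if the average utilization rates differ by at least the
 * set threshold, shift the weights by one adjustment step towards the idler
 * core (e.g. 60/40 -> 55/45 with a 5-point step, matching the example above). */
void adjust_weights(struct store_weights *w, unsigned avg_util_first,
                    unsigned avg_util_second, unsigned diff_threshold,
                    unsigned step)
{
    if (avg_util_first > avg_util_second &&
        avg_util_first - avg_util_second >= diff_threshold &&
        w->w_first >= step) {
        w->w_first  -= step;            /* the busier first core writes less */
        w->w_second += step;
    } else if (avg_util_second > avg_util_first &&
               avg_util_second - avg_util_first >= diff_threshold &&
               w->w_second >= step) {
        w->w_second -= step;            /* the busier second core writes less */
        w->w_first  += step;
    }
}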
Fig. 4 is a data storage device based on a dual-core processor according to an embodiment of the present invention, where the data storage device includes:
a receiving module 41, configured to receive a data packet through a receiving port, and send the data packet to a first core processor;
the first core processor 42 is configured to receive the data packet, generate an interrupt, and store the data packet in a packet receiving buffer queue;
and the second core processor 43 is configured to take out the data packet from the packet receiving buffer queue, analyze the data packet, and store the analyzed data packet in the data buffer queue to implement storage of the data packet.
In order to reduce the influence of frequent interruption of short data packets on data storage, improve storage efficiency, and further ensure system reliability, before sending the data packet to the first core processor, the apparatus further includes:
the FPGA 44 is used for receiving the data message; updating the quantity of the received data messages, and judging whether the quantity of the updated data messages reaches a set first quantity threshold value; when the number of the updated data messages reaches a set first number threshold value, sending the data messages to a first core processor, and clearing the number of the received data messages; otherwise, the data message is stored locally.
In order to reduce the number of packet receiving interrupts, reduce the influence of frequent interrupts on the storage efficiency, and further improve the reliability of the system, in the embodiment of the present invention, the first core processor 42 is specifically configured to update the number of received data packets after receiving the data packets; judging whether the number of the received data messages reaches a set second number threshold value or not; when the number of the received data messages reaches a set second number threshold value, generating interruption, and resetting the number of the received data messages; or, after receiving the data message, judging whether other data messages are received within a set time length; and when determining that other data messages are not received within the set time length, generating interruption and resetting the number of the received data messages.
In order to reduce the influence of data analysis on data reception and further improve the reliability of the system, in the embodiment of the present invention, the second core processor 43 is specifically configured to store the analyzed data packet in the data cache queue and store the data packet itself; or, alternatively,
the first core processor 42 is specifically configured to store the data packet after the second core processor stores the parsed data packet in the data cache queue.
In order to fully utilize dual-core system resources, improve data storage efficiency, and further improve system reliability, in the embodiment of the present invention, the second core processor 43 is further specifically configured to store the analyzed data packet in the data cache queue, obtain the current resource utilization rate of the first core processor, determine whether its own current resource utilization rate is greater than that of the first core processor, notify the first core processor to store the data packet when its own utilization rate is greater, and otherwise store the data packet itself; or, alternatively,
the first core processor 42 is further specifically configured to, after the second core processor stores the analyzed data packet in the data cache queue, obtain the current resource utilization rate of the second core processor, determine whether its own current resource utilization rate is greater than that of the second core processor, notify the second core processor to store the data packet when its own utilization rate is greater, and otherwise store the data packet itself.
The embodiment of the invention provides a data storage method and device based on a dual-core processor, the method being suitable for a switching device that comprises the dual-core processor. The first core processor receives the data message, generates an interrupt, and stores the data message in a packet receiving cache queue; the second core processor takes the data message out of the packet receiving cache queue, analyzes it, and stores the analyzed data message in the data cache queue, thereby completing storage of the data message. In the embodiment of the invention, data reception and data analysis are handled by different core processors, so the interrupts generated by data reception are effectively prevented from affecting data storage, which improves storage efficiency and ensures the reliability of the system.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in a dual-core processor based data storage device according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (4)

1. A data storage method based on a dual-core processor is characterized in that the method is applied to a switching device comprising the dual-core processor, and the method comprises the following steps:
receiving a data message through a receiving port, and sending the data message to a first core processor;
the first core processor receives the data message, generates an interrupt, and stores the data message in a packet receiving cache queue;
the second core processor takes the data message out of the packet receiving cache queue, analyzes it, and stores the analyzed data message in the data cache queue, thereby completing storage of the data message;
wherein, the storing the analyzed data message into the data buffer queue to realize the storage of the data message further comprises:
the second core processor stores the analyzed data message in the data cache queue, acquires the current resource utilization rate of the first core processor, and judges whether its own current resource utilization rate is greater than that of the first core processor; when its own utilization rate is greater, it notifies the first core processor to store the data message, and otherwise it stores the data message itself; or, alternatively,
the second core processor stores the analyzed data message in the data cache queue, and the first core processor acquires the current resource utilization rate of the second core processor and judges whether its own current resource utilization rate is greater than that of the second core processor; when its own utilization rate is greater, it notifies the second core processor to store the data message, and otherwise the first core processor stores the data message;
wherein, the first core processor receiving the data message and generating an interrupt comprises:
after the first core processor receives the data messages, updating the number of the received data messages; judging whether the number of the received data messages reaches a set second number threshold value or not; when the number of the received data messages reaches a set second number threshold value, generating interruption, and resetting the number of the received data messages; or
After receiving the data message, the first core processor judges whether other data messages are received within a set time length; and when determining that other data messages are not received within the set time length, generating interruption and resetting the number of the received data messages.
2. The method of claim 1, wherein prior to sending the datagram to the first core processor, the method further comprises:
sending the data message to the FPGA;
the FPGA updates the number of the received data messages and judges whether the number of the updated data messages reaches a set first number threshold value;
when the number of the updated data messages reaches a set first number threshold value, sending the data messages to a first core processor, and clearing the number of the received data messages;
otherwise, the data message is stored locally.
3. A dual-core processor-based data storage device, the device comprising:
the receiving module is used for receiving a data message through a receiving port and sending the data message to the first core processor;
the first core processor is used for receiving the data message, generating an interrupt, and storing the data message in a packet receiving cache queue;
the second core processor is used for taking the data message out of the packet receiving cache queue, analyzing it, and storing the analyzed data message in the data cache queue, thereby completing storage of the data message;
the second core processor is specifically further configured to store the analyzed data packet in a data cache queue, obtain a current resource utilization rate of the first core processor, determine whether the current resource utilization rate of the second core processor is greater than the current resource utilization rate of the first core processor, notify the first core processor to store the data packet when determining that the current resource utilization rate of the second core processor is greater than the current resource utilization rate of the first core processor, and otherwise, store the data packet by the second core processor; or, alternatively,
the first core processor is specifically further configured to, after the second core processor stores the analyzed data packet in the data cache queue, obtain a current resource utilization rate of the second core processor, determine whether the current resource utilization rate of the first core processor is greater than the current resource utilization rate of the second core processor, notify the second core processor to store the data packet when it is determined that the current resource utilization rate of the first core processor is greater than the current resource utilization rate of the second core processor, and otherwise, store the data packet by the first core processor;
the first core processor is specifically configured to update the number of received data packets after receiving the data packets; judging whether the number of the received data messages reaches a set second number threshold value or not; when the number of the received data messages reaches a set second number threshold value, generating interruption, and resetting the number of the received data messages; or, after receiving the data message, judging whether other data messages are received within a set time length; and when determining that other data messages are not received within the set time length, generating interruption and resetting the number of the received data messages.
4. The apparatus of claim 3, wherein before sending the datagram to the first core processor, the apparatus further comprises:
the FPGA is used for receiving the data message; updating the quantity of the received data messages, and judging whether the quantity of the updated data messages reaches a set first quantity threshold value; when the number of the updated data messages reaches a set first number threshold value, sending the data messages to a first core processor, and clearing the number of the received data messages; otherwise, the data message is stored locally.
CN201510634222.3A 2015-09-29 2015-09-29 Data storage method and device based on dual-core processor Active CN106557358B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510634222.3A CN106557358B (en) 2015-09-29 2015-09-29 Data storage method and device based on dual-core processor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510634222.3A CN106557358B (en) 2015-09-29 2015-09-29 Data storage method and device based on dual-core processor

Publications (2)

Publication Number Publication Date
CN106557358A CN106557358A (en) 2017-04-05
CN106557358B (en) 2020-08-11

Family

ID=58415979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510634222.3A Active CN106557358B (en) 2015-09-29 2015-09-29 Data storage method and device based on dual-core processor

Country Status (1)

Country Link
CN (1) CN106557358B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107680001A (en) * 2017-08-01 2018-02-09 国网陕西省电力公司电力科学研究院 Signal processing system and its double-core exchange method based on double-core embedded type processor
CN107436859A (en) * 2017-08-04 2017-12-05 湖南利能科技股份有限公司 A kind of collecting method based on AMP
CN107704421B (en) * 2017-09-12 2021-04-27 新华三信息安全技术有限公司 Multi-core processor and message processing method
CN110119304B (en) * 2018-02-07 2021-08-31 华为技术有限公司 Interrupt processing method and device and server
CN112015159B (en) * 2019-05-31 2021-11-30 中车株洲电力机车研究所有限公司 Fault record storage method based on dual-core MCU and computer system
CN112104572B (en) * 2020-09-11 2023-02-07 北京天融信网络安全技术有限公司 Data processing method and device, electronic equipment and storage medium
US11888938B2 (en) * 2021-07-29 2024-01-30 Elasticflash, Inc. Systems and methods for optimizing distributed computing systems including server architectures and client drivers
CN115361451B (en) * 2022-10-24 2023-03-24 中国人民解放军国防科技大学 Network communication parallel processing method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101345665A (en) * 2008-08-28 2009-01-14 华为技术有限公司 Method and system for media surface controlling signaling surface
CN102591602A (en) * 2011-12-30 2012-07-18 浙江大学 High-speed digital printing processing system and method on basis of multi-core processor
CN102752198A (en) * 2012-06-21 2012-10-24 北京星网锐捷网络技术有限公司 Multi-core message forwarding method, multi-core processor and network equipment
CN104243363A (en) * 2014-09-30 2014-12-24 杭州华三通信技术有限公司 Message processing method and message processing device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5434919A (en) * 1994-01-11 1995-07-18 Chaum; David Compact endorsement signature systems
CN1221104C (en) * 2002-07-12 2005-09-28 华为技术有限公司 Method for converting packet mode data from high-bit width format to low-bit width format using frame alignment
CN101022405B (en) * 2006-06-23 2010-08-25 华为技术有限公司 Universal framing rule packaging method
CN101616081A (en) * 2009-07-30 2009-12-30 杭州华三通信技术有限公司 A kind of message forwarding method and router based on the tunnel
US8572677B2 (en) * 2010-07-14 2013-10-29 William G. Bartholomay Devices, systems, and methods for enabling reconfiguration of services supported by a network of devices
US8510267B2 (en) * 2011-03-08 2013-08-13 Rackspace Us, Inc. Synchronization of structured information repositories
CN102905311B (en) * 2012-09-29 2015-07-15 北京傲天动联技术股份有限公司 Data-message aggregating device and method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101345665A (en) * 2008-08-28 2009-01-14 华为技术有限公司 Method and system for media surface controlling signaling surface
CN102591602A (en) * 2011-12-30 2012-07-18 浙江大学 High-speed digital printing processing system and method on basis of multi-core processor
CN102752198A (en) * 2012-06-21 2012-10-24 北京星网锐捷网络技术有限公司 Multi-core message forwarding method, multi-core processor and network equipment
CN104243363A (en) * 2014-09-30 2014-12-24 杭州华三通信技术有限公司 Message processing method and message processing device

Also Published As

Publication number Publication date
CN106557358A (en) 2017-04-05

Similar Documents

Publication Publication Date Title
CN106557358B (en) Data storage method and device based on dual-core processor
US7953915B2 (en) Interrupt dispatching method in multi-core environment and multi-core processor
US9218203B2 (en) Packet scheduling in a multiprocessor system using inter-core switchover policy
US8554968B1 (en) Interrupt technique for a nonvolatile memory controller
US20210034560A1 (en) Persistent kernel for graphics processing unit direct memory access network packet processing
CN109697122B (en) Task processing method, device and computer storage medium
US9460032B2 (en) Apparatus and method for processing an interrupt
KR20200011958A (en) Continuous Analysis Tasks for GPU Task Scheduling
CN112650558B (en) Data processing method and device, readable medium and electronic equipment
US10614006B2 (en) Low latency interrupt with existence of interrupt moderation
US9727364B2 (en) Virtual device hypervisor memory limited receive queuing
CN106030559A (en) Syncronization of interrupt processing to reduce power consumption
CN104753711A (en) Method and device for adjusting resource allocation of network equipment
CN105978821B (en) The method and device that network congestion avoids
CN105743690B (en) Techniques for managing power and performance of network devices
WO2016202153A1 (en) Gpu resource allocation method and system
US20180285293A1 (en) Interrupt based on a last interrupt request indicator and a work acknowledgement
US9239804B2 (en) Back-off mechanism for a peripheral page request log
US8990521B2 (en) Information processing device and computer program product
US20140136659A1 (en) Timeout Value Adaptation
US9128771B1 (en) System, method, and computer program product to distribute workload
CN117097679A (en) Aggregation method and device for network interruption and network communication equipment
KR20220113710A (en) GPU Packet Aggregation System
CN110659132B (en) Request processing optimization method and computer-readable storage medium
CN102098215A (en) Priority management method for multi-application packet reception

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant