WO2016011894A1 - Packet processing method and apparatus - Google Patents

Packet processing method and apparatus

Info

Publication number
WO2016011894A1
Authority
WO
WIPO (PCT)
Prior art keywords
processing
queue
cache
packet
prefetch
Prior art date
Application number
PCT/CN2015/083601
Other languages
English (en)
French (fr)
Inventor
洪涛
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2016011894A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00: Network arrangements, protocols or services for addressing or naming

Definitions

  • The embodiments of the present invention relate to computer technologies, and in particular to a packet processing method and apparatus.
  • A central processing unit (English: Central Processing Unit, CPU for short) includes multiple processor cores and can process multiple packets concurrently. For packets belonging to multiple different flows, an order-preserving control unit is generally used to control the order in which the packets are processed.
  • In the prior art, packets of multiple flows are processed sequentially. For example, the task cache contains two cache queues Q0 and Q1: packets T1 and T2 of flow T and packets K1 and K2 of flow K are added to queue Q0 in sequence, and packets G1 and G2 of flow G are added to queue Q1 in sequence.
  • When the order-preserving mode of the order-preserving control unit is configured as an atomic operation (English: ATOMIC), packet T1 in queue Q0 is taken out and given to one of the processor cores for processing, and its processing information is recorded in the order-preserving queue.
  • Until T1 has been fully processed, the order-preserving queue prevents packet T2 from being processed by other processor cores, and consequently the subsequent packets K1 and K2 cannot be processed by other processor cores either.
  • In addition, if a strict priority relationship is configured between queue Q0 and queue Q1, the packets of flow G in queue Q1 cannot be processed until all packets in queue Q0 have been processed.
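The head-of-line blocking described above can be traced with a small scheduling sketch (an illustrative model written for this page, not the patented mechanism; the function name `eligible_packets` and the tuple encoding are assumptions). Under ATOMIC order preservation, only the head packet of a cache queue is eligible, so K1 and K2 wait behind T2 even though they belong to a different flow:

```python
from collections import deque

def eligible_packets(queues, in_flight_flows):
    """Return packets eligible for dispatch under ATOMIC order preservation:
    only the head of each cache queue may run, and a flow with a packet
    still in flight blocks the whole queue behind it."""
    eligible = []
    for name, q in queues.items():
        if not q:
            continue
        head_flow = q[0][0]  # packet encoded as (flow, sequence number)
        if head_flow not in in_flight_flows:
            eligible.append((name, q[0]))
    return eligible

# Q0 holds T1, T2, K1, K2 (strict priority Q0 > Q1 would additionally
# block Q1, modelled here by looking at Q0 alone).
q0 = deque([("T", 1), ("T", 2), ("K", 1), ("K", 2)])
# T1 has been handed to a core and is not finished yet:
blocked = eligible_packets({"Q0": q0}, in_flight_flows={"T"})
print(blocked)  # [] -- T2 blocks the queue, so K1/K2 cannot be scheduled
```

Once flow T completes, the head becomes eligible again; the point of the sketch is that an unrelated flow (K) is stalled purely by queue position.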
  • The embodiments of the invention provide a packet processing method and apparatus, to overcome the problem that packet scheduling is blocked during packet processing, which makes the whole CPU inefficient at processing packets.
  • A first aspect of the embodiments of the present invention provides a packet processing method, including:
  • storing the packets to be processed in a cache queue into different prefetch cache queues according to the flow class of each packet, where a prefetch cache queue is a cache pre-established between the task cache and the order-preserving queue; and
  • allocating the packets in each prefetch cache queue to different processor cores for processing.
  • In a first possible implementation of the first aspect, the method further includes:
  • recording, in the order-preserving queue, the processing information of the packets in each prefetch cache queue.
  • In a second possible implementation of the first aspect, the number of prefetch cache queues is greater than or equal to one.
  • With reference to the first aspect or any one of its first and second possible implementations, in a third possible implementation of the first aspect, the flow classes of the packets to be processed include a first-class flow and a second-class flow, and taking the packets to be processed out of the cache queue and putting them into different prefetch cache queues according to their flow classes includes:
  • putting all packets whose flow class is the first-class flow into one prefetch cache queue, and putting all packets whose flow class is the second-class flow into another prefetch cache queue, different from that of the first-class flow.
  • In a fourth possible implementation of the first aspect, allocating the packets in each prefetch cache queue to different processor cores for processing, and recording in the order-preserving queue the processing information of the packets in each prefetch cache queue, includes:
  • allocating the packets of the first-class flow in one prefetch cache queue to a first processor core for processing, recording the packet processing information of the first-class flow in the order-preserving queue, allocating the packets of the second-class flow in another prefetch cache queue to a second processor core for processing, and recording the packet processing information of the second-class flow in the order-preserving queue.
  • A second aspect of the embodiments of the present invention provides a packet processing apparatus, including:
  • a processing module, configured to store the packets to be processed in a cache queue into different prefetch cache queues according to the flow class of each packet, where a prefetch cache queue is a cache pre-established between the task cache and the order-preserving queue; and
  • an allocating module, configured to allocate the packets in each prefetch cache queue to different processor cores for processing.
  • In a first possible implementation of the second aspect, the apparatus further includes:
  • a recording module, configured to record, in the order-preserving queue, the processing information of the packets in each prefetch cache queue.
  • With reference to the second aspect or any one of its first and second possible implementations, in a third possible implementation of the second aspect, the flow classes of the packets to be processed include a first-class flow and a second-class flow, and the processing module is specifically configured to put the packets of each class into separate prefetch cache queues.
  • In a fourth possible implementation of the second aspect:
  • the allocating module is further configured to allocate the packets of the first-class flow in one prefetch cache queue to a first processor core for processing, and the recording module records the packet processing information of the first-class flow in the order-preserving queue; and
  • the allocating module is further configured to allocate the packets of the second-class flow in another prefetch cache queue to a second processor core for processing, and the recording module records the packet processing information of the second-class flow in the order-preserving queue.
  • A third aspect of the embodiments of the present invention provides a central processing unit (CPU), including the packet processing apparatus provided by the second aspect or any one of its first to fourth implementations, and at least two processor cores.
  • FIG. 2 is a flowchart of Embodiment 2 of the packet processing method provided by the present invention;
  • FIG. 3 is a schematic processing diagram of Embodiment 2 of the packet processing method provided by the present invention;
  • FIG. 4 is a schematic structural diagram of Embodiment 1 of the packet processing apparatus provided by the present invention;
  • FIG. 5 is a schematic structural diagram of Embodiment 2 of the packet processing apparatus provided by the present invention;
  • FIG. 6 is a schematic structural diagram of an embodiment of the order-preserving control unit provided by the present invention;
  • FIG. 7 is a schematic structural diagram of an embodiment of the CPU provided by the present invention.
  • The cache queue that appears in the embodiments of the present invention is a logical unit set in the task cache and used to buffer the packets of tasks to be processed.
  • The order-preserving queue that appears in the embodiments of the present invention is a logical unit, set in the order-preserving control unit, that buffers packets to be processed; the prefetch cache queue is a logical unit established between the cache queue and the order-preserving queue in the embodiments of the present invention.
  • The order-preserving control unit that appears in the embodiments of the present invention is a hardware accelerator inside the multi-core CPU used to implement flow order preservation. The order-preserving control unit can control the order in which multiple processor cores in the CPU process multiple packets of the same flow, ensuring that the packets complete in their order of arrival without reordering.
  • Packets belonging to different flows can be processed concurrently; there is no need to ensure that the completion order of different flows matches their arrival order.
  • FIG. 1 is a flowchart of Embodiment 1 of the packet processing method provided by the present invention. The execution body of this embodiment may be the order-preserving control unit in a CPU. As shown in FIG. 1, the specific steps performed by the order-preserving control unit are as follows:
  • S101: Store the packets to be processed in the cache queue into different prefetch cache queues according to the flow class of each packet, where a prefetch cache queue is a cache pre-established between the task cache and the order-preserving queue.
  • In this embodiment, "packets to be processed" is a collective name for all packets waiting to be processed in the cache queue; they may be multiple packets of different flow classes.
  • The order-preserving control unit pre-establishes the prefetch cache queues between the cache queue and the order-preserving queue; the prefetch cache queues are used to separate the pending packets in the cache queue by flow class.
  • Each prefetch cache queue may hold packets of one flow class or packets of multiple flow classes; the present invention does not limit this.
  • The order-preserving control unit takes the pending packets out of the cache queue one by one, in their current order in the cache queue, and places all of them into the prefetch cache queues according to their flow classes.
  • S102: Allocate the packets in each prefetch cache queue to different processor cores for processing.
  • In this embodiment, the order-preserving control unit calls the corresponding circuits to allocate the packets in each prefetch cache queue, in parallel, to different processor cores for processing.
  • Specifically, the packets in each prefetch cache queue may be allocated to different processor cores at random, or they may be allocated according to the current idle state of the processor cores, with idle cores preferred. The embodiments of the present invention do not specifically limit this.
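Steps S101 and S102 can be sketched as follows (a minimal illustrative model; the function name `prefetch_dispatch` and the round-robin core assignment are assumptions made here, since the patent leaves the allocation policy open): the pending packets are drained from the cache queue in order, partitioned into per-flow prefetch queues, and each prefetch queue is then assigned its own core.

```python
from collections import defaultdict, deque

def prefetch_dispatch(cache_queue, num_cores):
    """S101: split pending packets into per-flow prefetch cache queues.
    S102: assign each prefetch queue to a different processor core."""
    prefetch = defaultdict(deque)           # one prefetch cache queue per flow class
    for flow, seq in cache_queue:           # drain in current queue order
        prefetch[flow].append((flow, seq))  # FIFO within a flow is preserved
    # Round-robin the prefetch queues over the available cores.
    assignment = {flow: core % num_cores
                  for core, flow in enumerate(prefetch)}
    return dict(prefetch), assignment

pending = [("T", 1), ("T", 2), ("K", 1), ("K", 2)]
queues, cores = prefetch_dispatch(pending, num_cores=4)
print(cores)  # {'T': 0, 'K': 1} -- T and K packets now run concurrently
```

With at least two cores, flows T and K land on different cores, so K1 no longer waits behind an unfinished T packet.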
  • In the packet processing method provided by this embodiment, the packets to be processed in the cache queue are stored into different prefetch cache queues according to their flow classes, and the packets in each prefetch cache queue are then allocated to different processor cores for processing.
  • In the prior art, where packets are processed directly from the cache queue, the packets of other flows in the same cache queue cannot be processed until the packet of the previous flow has been processed. Here, by contrast, packets of different flows are separated by the prefetch cache queues and allocated to different processor cores for concurrent processing. This solves the problem that packet scheduling is blocked during packet processing, making the whole CPU inefficient, and effectively improves the efficiency with which the CPU processes packets.
  • FIG. 2 is a flowchart of Embodiment 2 of the packet processing method provided by the present invention. Building on the embodiment shown in FIG. 1, after the packets in each prefetch cache queue are allocated to different processor cores for processing, the method further includes: recording, in the order-preserving queue, the processing information of the packets in each prefetch cache queue.
  • In this embodiment, recording the packet processing information of each prefetch cache queue makes it possible to later query the processing progress of packets and the idle state of the processor cores. This helps when new packets are processed: the new packets can be allocated to different processor cores according to their idle state, reducing core idleness and improving packet processing efficiency.
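Using the recorded processing information to pick a core for a new packet could look like the following sketch (the record format `(packet_id, core, done_flag)` and the function name `pick_idle_core` are assumptions for illustration; the patent does not specify the record layout):

```python
def pick_idle_core(order_records, num_cores):
    """Choose the core with the fewest unfinished packets, based on the
    processing information recorded in the order-preserving queue."""
    load = [0] * num_cores
    for _, core, done in order_records:   # record: (packet_id, core, done_flag)
        if not done:
            load[core] += 1               # still in flight -> core is busier
    return load.index(min(load))          # prefer the most idle core

records = [("T1", 0, False), ("K1", 1, True), ("G1", 2, False)]
print(pick_idle_core(records, num_cores=3))  # 1 -- core 1 has finished its packet
```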
  • In practice, there may be many flow classes among the packets to be processed. This embodiment assumes two classes of flows in the cache queue: if the flow classes of the packets to be processed include a first-class flow and a second-class flow, and the packets of both classes are buffered in the same cache queue, the specific steps are:
  • S201: Take the packets out of the cache queue.
  • S202: Put all packets whose flow class is the first-class flow into one prefetch cache queue, and put all packets whose flow class is the second-class flow into another prefetch cache queue, different from that of the first-class flow.
  • In this embodiment, the number of prefetch cache queues is greater than or equal to one; the specific number depends on the CPU configuration, the main considerations being hardware resources and data concurrency requirements. Optionally, the number of prefetch cache queues is the same as the number of processor cores.
  • At least two prefetch cache queues may be chosen in this embodiment: the packets of the first-class flow are put, in order, into one prefetch cache queue, and the packets of the second-class flow are put, in order, into another prefetch cache queue, to be called by the order-preserving control unit for concurrent processing.
  • S203: Allocate the packets of the first-class flow in one prefetch cache queue to a first processor core for processing, and record the packet processing information of the first-class flow in the order-preserving queue; allocate the packets of the second-class flow in the other prefetch cache queue to a second processor core for processing, and record the packet processing information of the second-class flow in the order-preserving queue.
  • In this embodiment, the pending packets placed in the two prefetch cache queues are allocated to two different processor cores for processing, and the processing information of each packet is recorded, completing the order-preserving control unit's processing of the packets.
  • In Embodiments 1 and 2 above, the order-preserving control unit includes at least one cache queue, at least one prefetch cache queue, and an order-preserving queue, so as to perform order-preserving processing on the packets entering the CPU.
  • In the packet processing method provided by this embodiment, the packets to be processed are taken out of the cache queue, the packets of the first-class flow are put into one prefetch cache queue and the packets of the other class into another, the packets in the two prefetch cache queues are allocated to different processor cores for processing, and the processing information of the packets in each prefetch cache queue is recorded in the order-preserving queue.
  • Separating the packets of different flows and allocating them to different processor cores for concurrent processing effectively solves both the scheduling blockage within a single cache queue during packet processing and the problem of multiple processor cores sitting idle, thereby effectively improving the efficiency with which the CPU processes packets.
  • FIG. 3 is a schematic processing diagram of Embodiment 2 of the packet processing method provided by the present invention. As shown in FIG. 3, the following describes the packet processing method of the invention using an example of the cache queues, prefetch cache queues, and order-preserving queue in the order-preserving control unit.
  • The specific processing procedure is as follows:
  • The task cache includes two cache queues, Q_0 and Q_1, which are scheduled as priority queues (English: Priority Queue, PQ for short).
  • The packets to be processed comprise three classes of flows, assumed to be a T flow, a K flow, and a G flow. It should be understood that T, K, and G are merely illustrative labels used to distinguish packet classes; they may represent any kind of packet in the prior art and do not constitute any limitation on the embodiments of the present invention.
  • Packets T1 and T2 of the T flow and packets K1 and K2 of the K flow are added to cache queue Q_0 in sequence; packets G1 and G2 of the G flow are added to queue Q_1 in sequence. The current order-preserving mode of the order-preserving control unit is configured as an atomic operation.
  • The order-preserving control unit establishes three prefetch cache queues, and the prefetch cache queues may use a round-robin (English: Round-Robin, RR for short) scheduling mechanism. The order-preserving control unit takes all pending packets out of cache queue Q_0 and cache queue Q_1 and, according to the flow class of each pending packet, places the T-flow packets T1 and T2 into the first prefetch cache queue, the K-flow packets K1 and K2 into the second prefetch cache queue, and the G-flow packets G1 and G2 into the third prefetch cache queue.
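The FIG. 3 scenario can be traced end to end with a small simulation (a sketch under stated assumptions: the `simulate` function, the `(flow, seq)` packet encoding, and the fixed flow-to-core map are inventions of this example, and cores are modelled as finishing one packet per round):

```python
from collections import deque

def simulate(cache_queues, core_of_flow):
    """Drain both cache queues, separate packets into per-flow prefetch
    queues, then let each core complete its own queue in FIFO order,
    recording completions in the order-preserving queue."""
    prefetch = {}
    for q in cache_queues:
        for pkt in q:  # pkt is (flow, seq)
            prefetch.setdefault(pkt[0], deque()).append(pkt)
    order_queue = []  # processing-information records: (packet, core)
    # Cores work concurrently; within a flow, packets still finish in arrival order.
    while any(prefetch.values()):
        for flow, q in prefetch.items():
            if q:
                order_queue.append((q.popleft(), core_of_flow[flow]))
    return order_queue

q0 = [("T", 1), ("T", 2), ("K", 1), ("K", 2)]
q1 = [("G", 1), ("G", 2)]
log = simulate([q0, q1], {"T": 0, "K": 1, "G": 2})
# K1 and G1 complete without waiting for T2:
print(log[:3])  # [(('T', 1), 0), (('K', 1), 1), (('G', 1), 2)]
```

Note that T1 still finishes before T2 and G1 before G2: per-flow order is preserved even though the three flows progress in parallel.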
  • The CPU may include at least a first processor core, a second processor core, and a third processor core. The order-preserving control unit may allocate the T-flow packets in the first prefetch cache queue to the first processor core for processing, the K-flow packets in the second prefetch cache queue to the second processor core for processing, and the G-flow packets in the third prefetch cache queue to the third processor core for processing.
  • In this way, even while T1 is being processed, the K-flow packet K1 can be processed concurrently by the second processor core, with the processing information of K1 recorded in the order-preserving queue; this solves the flow-blocking problem within a single cache queue.
  • Likewise, even if the priority of Q_0 is greater than the priority of Q_1, scheduling congestion between the different cache queues is resolved: there is no need to wait until the T-flow and K-flow packets have been taken out before the G-flow packets are taken out. The G-flow packets can be taken out in advance and allocated to an idle processor core for processing.
  • Fully using an idle processor core to process the G flow in the low-priority queue Q_1 improves the CPU's concurrent processing capability without preempting the processor-core resources of the T flow that was blocked in the high-priority queue.
  • The terms "first", "second", and "third" do not denote an order of the processor cores; they are used only to distinguish the processor cores from one another and do not constitute any limitation on the embodiments of the present invention.
  • In the packet processing method provided by this embodiment, the packets pending in cache queues Q_0 and Q_1 are taken out, the T-flow packets are put into the first prefetch cache queue, the K-flow packets into the second, and the G-flow packets into the third, and the packets in the three prefetch cache queues are then allocated to different processor cores for processing.
  • By adding prefetch cache queues between the task cache and the order-preserving queue, the packets of different flows are separated and allocated to different processor cores for concurrent processing. This improves the efficiency of processing multiple flows concurrently and solves flow blocking and scheduling blocking; especially under blocking conditions, it raises task concurrency and the CPU's capability for order-preserving concurrent flows, thereby effectively improving the efficiency with which the CPU processes packets.
  • Note that multiple packets are prefetched from the cache queues of the task cache and divided into different prefetch cache queues according to flow class; they are not taken one at a time from the head of a task-cache queue and placed into the prefetch cache queues in turn. Packets can be fetched in parallel from multiple cache queues, or all packets in a cache queue can be taken out first and then placed into different prefetch cache queues according to flow class.
  • The packets of each flow are then obtained from the prefetch cache queues according to flow class and allocated, either at random or according to the current idle state of the processor cores, to the processor cores for processing, with the processing information recorded in the order-preserving queue. This maintains the flow order while preventing flow blocking and scheduling blocking.
  • Because each prefetch cache queue is divided by flow class and follows first-in-first-out order (English: First Input First Output, FIFO for short), the processing method of the present application does not destroy the packet ordering within a single flow.
  • The number of prefetch cache queues is set in the order-preserving control unit in the CPU, the main consideration being the balance between hardware resources and concurrency requirements. The packets of one flow are not limited to a prefetch cache queue of their own; that is, a single prefetch cache queue may contain packets of several classes of flows. Under otherwise equal conditions, however, the more prefetch cache queues there are, the more dispersed the packets of different flows are, and the better the concurrent processing capability for those flows.
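When there are more flow classes than prefetch cache queues, several flows share a queue. One common way to realize such a mapping, shown below as an assumption (the patent only states that sharing is possible, not how flows are assigned), is a stable hash of the flow class, which keeps all packets of one flow in the same queue and therefore preserves FIFO order within each flow:

```python
from collections import deque
import zlib

def flow_to_queue(flow_id, num_queues):
    """Map a flow class to a prefetch cache queue with a stable hash, so all
    packets of one flow always land in the same queue (per-flow FIFO holds)."""
    return zlib.crc32(flow_id.encode()) % num_queues

def separate(packets, num_queues):
    """Divide pending packets into num_queues prefetch cache queues by flow."""
    queues = [deque() for _ in range(num_queues)]
    for flow, seq in packets:
        queues[flow_to_queue(flow, num_queues)].append((flow, seq))
    return queues

pkts = [("T", 1), ("K", 1), ("G", 1), ("H", 1), ("T", 2)]
qs = separate(pkts, num_queues=3)  # 4 flows share 3 queues; T1 and T2 stay together
```

With three queues and four flows, at least two flows must share a queue, but since the hash is a pure function of the flow class, a flow's packets never straddle two queues.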
  • As a further example, suppose the task cache includes two cache queues Q_0 and Q_1: packets T1 and T2 of the T flow and packets K1 and K2 of the K flow are added to cache queue Q_0 in sequence, packets G1 and G2 of the G flow and packets H1 and H2 of the H flow are added to queue Q_1 in sequence, and the current order-preserving mode of the order-preserving control unit is configured as an atomic operation.
  • The order-preserving control unit establishes three prefetch cache queues, takes all pending packets out of cache queues Q_0 and Q_1 and, according to the flow class of each pending packet, places the T-flow packets T1 and T2 into the first prefetch cache queue, the K-flow packets K1 and K2 into the second prefetch cache queue, and the G-flow packets G1 and G2 together with the H-flow packets H1 and H2 into the third prefetch cache queue; that is, the T flow, the K flow, and the G and H flows are placed into the first, second, and third prefetch cache queues respectively.
  • The order-preserving control unit may then allocate the T-flow packets in the first prefetch cache queue to one processor core for processing, the K-flow packets in the second prefetch cache queue to another processor core for processing, and the G-flow and H-flow packets in the third prefetch cache queue to a third processor core for processing.
  • As shown in FIG. 4, the packet processing apparatus 40 includes a processing module 41 and an allocating module 42.
  • The processing module 41 is configured to store the packets to be processed in the cache queue into different prefetch cache queues according to the flow class of each packet, where a prefetch cache queue is pre-established between the task cache and the order-preserving queue. The allocating module 42 is configured to allocate the packets in each prefetch cache queue to different processor cores for processing.
  • The number of prefetch cache queues established by the processing module 41 is greater than or equal to one.
  • The allocating module 42 is further configured to allocate the packets of the first-class flow in one prefetch cache queue to a first processor core for processing, with the recording module 43 recording the packet processing information of the first-class flow in the order-preserving queue; and
  • the allocating module 42 is further configured to allocate the packets of the second-class flow in another prefetch cache queue to a second processor core for processing, with the recording module 43 recording the packet processing information of the second-class flow in the order-preserving queue.
  • FIG. 6 is a schematic structural diagram of an embodiment of the order-preserving control unit provided by the present invention. The order-preserving control unit 50 includes a processor 51 and a memory 52.
  • The processor 51 is configured to take the packets to be processed out of the cache queue and store them into different prefetch cache queues according to the flow class of each packet, where a prefetch cache queue is a cache between the task cache and the order-preserving queue; the processor 51 is further configured to allocate the packets in each prefetch cache queue to different processor cores for processing.
  • The unit further includes the memory 52, configured to record, in the order-preserving queue, the processing information of the packets in each prefetch cache queue.
  • The order-preserving control unit provided by this embodiment is used to execute the technical solution of the method shown in FIG. 1; its implementation principle and technical effect are similar.
  • The processor takes the packets to be processed out of the cache queue, stores them into different prefetch cache queues according to their flow classes, and allocates the packets in each prefetch cache queue to different processor cores for processing, while the memory records the processing information of the packets in each prefetch cache queue in the order-preserving queue. Separating the packets of different flows through the prefetch cache queues and distributing them to different processor cores effectively solves the problem that packet blocking during processing makes the whole CPU inefficient at packet processing, thereby effectively improving the efficiency with which the CPU processes packets.
  • The number of prefetch cache queues established by the processor 51 is greater than or equal to one.
  • The processor 51 is specifically configured to: take the packets out of the cache queue; put all packets whose flow class is the first-class flow into one prefetch cache queue; and put all packets whose flow class is the second-class flow into another prefetch cache queue, different from that of the first-class flow.
  • The processor 51 is further configured to allocate the packets of the second-class flow in another prefetch cache queue to a second processor core for processing, with the memory 52 recording the packet processing information of the second-class flow in the order-preserving queue.
  • FIG. 7 is a schematic structural diagram of an embodiment of the CPU provided by the present invention. As shown in FIG. 7, the CPU 60 includes the packet processing apparatus 61 provided by any of the embodiments of FIG. 4 and FIG. 5, and at least two processor cores 62.
  • The CPU provided by this embodiment is used to execute the packet processing method provided by any of the embodiments shown in FIG. 1 to FIG. 3; its implementation principle and technical effects are similar, and reference may be made to the other embodiments of the present invention, so the details are not repeated here.
  • The packet processing apparatus 61 takes the packets to be processed out of the cache queue, stores them into different prefetch cache queues according to the flow class of each packet, and then allocates the packets in each prefetch cache queue to different processor cores 62 for processing.
  • Compared with the prior art, separating the packets of different flows through the prefetch cache queues and processing them concurrently on different processor cores solves the problem that packet scheduling is blocked during packet processing, which makes the whole CPU inefficient at processing packets, thereby effectively improving the efficiency with which the CPU processes packets.
  • The aforementioned program can be stored in a computer-readable storage medium. When executed, the program performs the steps of the foregoing method embodiments; the storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.


Abstract

The embodiments of the present invention provide a packet processing method and apparatus. The packet processing method includes: storing the packets to be processed in a cache queue into different prefetch cache queues according to the flow class of each packet, where a prefetch cache queue is a cache pre-established between the task cache and the order-preserving queue; and allocating the packets in each prefetch cache queue to different processor cores for processing. Separating the packets of different flows through the prefetch cache queues and processing them concurrently on different processor cores effectively solves the problems of scheduling blockage during packet processing and multiple processor cores sitting idle, which make the whole CPU inefficient at processing packets, thereby effectively improving the efficiency with which the CPU processes packets.

Description

Packet processing method and apparatus
This application claims priority to Chinese invention application No. 201410360849.X, filed on July 25, 2014, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present invention relate to computer technologies, and in particular to a packet processing method and apparatus.
Background
A central processing unit (English: Central Processing Unit, CPU for short) includes multiple processor cores and can process multiple packets concurrently. For packets belonging to multiple different flows, an order-preserving control unit is generally used to control the order in which the packets are processed.
In the prior art, packets of multiple flows are processed sequentially. For example, the task cache includes two cache queues Q0 and Q1: packets T1 and T2 of flow T and packets K1 and K2 of flow K are added to queue Q0 in sequence, and packets G1 and G2 of flow G are added to queue Q1 in sequence. When the order-preserving mode of the current order-preserving control unit is configured as an atomic operation (English: ATOMIC), packet T1 in queue Q0 is taken out and given to one of the processor cores for processing, and its processing information is recorded in the order-preserving queue. Until T1 has been fully processed, the order-preserving queue prevents packet T2 from being processed by other processor cores, so the subsequent packets K1 and K2 cannot be processed by other cores either. In addition, if a strict priority relationship is configured between queue Q0 and queue Q1, the packets of flow G in queue Q1 cannot be processed until all packets in queue Q0 have been processed.
In the above processing method, however, when a processor core has not finished processing packet T1, the packets of other flows in the same queue cannot be processed; and when a priority relationship exists between queues, the packets in the other queues cannot be processed either. This blocks scheduling and makes the whole CPU inefficient at processing packets.
Summary
The embodiments of the present invention provide a packet processing method and apparatus, to overcome the problem that packet scheduling is blocked during packet processing, which makes the whole CPU inefficient at processing packets.
A first aspect of the embodiments of the present invention provides a packet processing method, including:
storing the packets to be processed in a cache queue into different prefetch cache queues according to the flow class of each packet, where a prefetch cache queue is a cache pre-established between the task cache and the order-preserving queue; and
allocating the packets in each prefetch cache queue to different processor cores for processing.
With reference to the first aspect, in a first possible implementation of the first aspect, after the packets in each prefetch cache queue are allocated to different processor cores for processing, the method further includes:
recording, in the order-preserving queue, the processing information of the packets in each prefetch cache queue.
With reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the number of prefetch cache queues is greater than or equal to one.
With reference to the first aspect or any one of its first and second possible implementations, in a third possible implementation of the first aspect, the flow classes of the packets to be processed include a first-class flow and a second-class flow, and taking the packets to be processed out of the cache queue and putting them into different prefetch cache queues according to their flow classes includes:
taking the packets out of the cache queue; and
putting all packets whose flow class is the first-class flow into one prefetch cache queue, and putting all packets whose flow class is the second-class flow into another prefetch cache queue, different from that of the first-class flow.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, allocating the packets in each prefetch cache queue to different processor cores for processing, and recording in the order-preserving queue the processing information of the packets in each prefetch cache queue, includes:
allocating the packets of the first-class flow in one prefetch cache queue to a first processor core for processing, recording the packet processing information of the first-class flow in the order-preserving queue, allocating the packets of the second-class flow in another prefetch cache queue to a second processor core for processing, and recording the packet processing information of the second-class flow in the order-preserving queue.
A second aspect of the embodiments of the present invention provides a packet processing apparatus, including:
a processing module, configured to store the packets to be processed in a cache queue into different prefetch cache queues according to the flow class of each packet, where a prefetch cache queue is a cache pre-established between the task cache and the order-preserving queue; and
an allocating module, configured to allocate the packets in each prefetch cache queue to different processor cores for processing.
With reference to the second aspect, in a first possible implementation of the second aspect, the apparatus further includes:
a recording module, configured to record, in the order-preserving queue, the processing information of the packets in each prefetch cache queue.
With reference to the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the number of prefetch cache queues established by the processing module is greater than or equal to one.
With reference to the second aspect or any one of its first and second possible implementations, in a third possible implementation of the second aspect, the flow classes of the packets to be processed include a first-class flow and a second-class flow, and the processing module is specifically configured to:
take the packets out of the cache queue; and
put all packets whose flow class is the first-class flow into one prefetch cache queue, and put all packets whose flow class is the second-class flow into another prefetch cache queue, different from that of the first-class flow.
With reference to the third possible implementation of the second aspect, in a fourth possible implementation of the second aspect:
the allocating module is further configured to allocate the packets of the first-class flow in one prefetch cache queue to a first processor core for processing, and the recording module records the packet processing information of the first-class flow in the order-preserving queue; and
the allocating module is further configured to allocate the packets of the second-class flow in another prefetch cache queue to a second processor core for processing, and the recording module records the packet processing information of the second-class flow in the order-preserving queue.
A third aspect of the embodiments of the present invention provides a central processing unit (CPU), including the packet processing apparatus provided by the second aspect or any one of its first to fourth implementations, and at least two processor cores.
In the packet processing method and apparatus provided by the embodiments of the present invention, the packets to be processed are taken out of the cache queue and stored into different prefetch cache queues according to their flow classes, and the packets in each prefetch cache queue are then allocated to different processor cores for processing. Compared with the prior art, in which packets are processed directly from the cache queue and the packets of other flows in the same cache queue cannot be processed until the packet of the previous flow is finished, separating the packets of different flows through the prefetch cache queues and processing them concurrently on different processor cores solves the problem that packet scheduling is blocked during packet processing, which makes the whole CPU inefficient, thereby effectively improving the efficiency with which the CPU processes packets.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the following briefly introduces the accompanying drawings needed for describing the embodiments or the prior art. Evidently, the drawings described below show some embodiments of the present invention, and persons of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of Embodiment 1 of the packet processing method provided by the present invention;
FIG. 2 is a flowchart of Embodiment 2 of the packet processing method provided by the present invention;
FIG. 3 is a schematic processing diagram of Embodiment 2 of the packet processing method provided by the present invention;
FIG. 4 is a schematic structural diagram of Embodiment 1 of the packet processing apparatus provided by the present invention;
FIG. 5 is a schematic structural diagram of Embodiment 2 of the packet processing apparatus provided by the present invention;
FIG. 6 is a schematic structural diagram of an embodiment of the order-preserving control unit provided by the present invention;
FIG. 7 is a schematic structural diagram of an embodiment of the CPU provided by the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. Evidently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The packet processing method provided by the embodiments of the present invention is applicable to any communication system that needs to process or exchange packets, and to the various devices in such a system, for example all kinds of terminal devices and network-element nodes such as base stations and base station controllers; the method may specifically be executed by the CPU in such a device. The CPU appearing in the embodiments of the present invention is a multi-core CPU.
The task cache appearing in the embodiments of the present invention is a temporary store in the CPU. Its capacity is smaller than that of main memory, but its access speed is much faster. The data stored in the task cache is a small portion of the data in memory, namely the portion the CPU is about to access in the near term; when the CPU needs a large amount of data, it can fetch it directly from the task cache, speeding up reads.
The cache queue appearing in the embodiments of the present invention is a logical unit set in the task cache and used to buffer the packets of tasks to be processed. The order-preserving queue appearing in the embodiments of the present invention is a logical unit, set in the order-preserving control unit, that buffers packets to be processed; the prefetch cache queue is a logical unit established between the cache queue and the order-preserving queue in the embodiments of the present invention.
The order-preserving control unit appearing in the embodiments of the present invention is a hardware accelerator inside a multi-core CPU used to implement flow order preservation. It can control the order in which multiple processor cores in the CPU process multiple packets of the same flow, ensuring that the packets complete in their order of arrival without reordering; packets belonging to different flows can be processed concurrently, and there is no need to ensure that the completion order of different flows matches their arrival order.
图1为本发明提供的报文处理方法实施例一的流程图,本实施例的执行主体可以是CPU中的保序控制单元,如图1所示,该保序控制单元执行该报文处理方法的具体步骤为:
S101: Store the to-be-processed packets in a cache queue into different prefetch cache queues according to the flow categories of the packets, where a prefetch cache queue is a cache pre-established between the task cache and the order-preserving queue.
In this embodiment, the to-be-processed packets are a collective term for all packets waiting to be processed in the cache queue, and may be multiple packets of different flow categories. The order-preserving control unit pre-establishes prefetch cache queues between the cache queue and the order-preserving queue; the prefetch cache queues are used to place the to-be-processed packets in the cache queue separately by flow category, where each prefetch cache queue may hold packets of one flow category or of multiple flow categories, which is not limited by the present invention.
The order-preserving control unit takes the to-be-processed packets out of the cache queue in the current packet order of the cache queue, and places all the to-be-processed packets into the prefetch cache queues by flow category.
S102: Allocate the packets in each prefetch cache queue to different processor cores for processing.
In this embodiment, the order-preserving control unit calls the corresponding circuits to allocate the packets in each prefetch cache queue to different processor cores for processing in parallel. Specifically, the packets in each prefetch cache queue may be allocated to different processor cores randomly, or allocated according to the current idle state of the processor cores, with idle cores preferred; this is not specially limited in the embodiments of the present invention.
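The idle-core-first allocation described above can be sketched as follows. This is a minimal illustration only: the patent performs this step in hardware, and the queue/core model here (prefetch queues as Python deques keyed by flow category, idle cores as a list of core IDs) is an assumption made for the example.

```python
from collections import deque

def dispatch(prefetch_queues, idle_cores):
    """Pop the head packet of each non-empty prefetch cache queue and
    assign it to the next idle processor core (idle cores preferred)."""
    assignments = {}
    for flow, queue in prefetch_queues.items():
        if queue and idle_cores:
            core = idle_cores.pop(0)             # take the next idle core
            assignments[core] = queue.popleft()  # head of this flow's queue
    return assignments

queues = {"T": deque(["T1", "T2"]), "K": deque(["K1", "K2"])}
cores = ["core0", "core1"]
work = dispatch(queues, cores)
# the head packet of each flow goes to a different core;
# T2 and K2 remain queued until a core becomes idle again
```

A real implementation would repeat this step as cores report completion; the sketch only shows a single allocation round.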
According to the packet processing method provided by this embodiment, the to-be-processed packets in a cache queue are stored into different prefetch cache queues according to the flow categories of the packets, and the packets in each prefetch cache queue are then allocated to different processor cores for processing. In the prior art, packets are taken directly out of the cache queue for processing, and packets of other flows in the same cache queue cannot be processed until the packets of the previous flow have finished. By separating packets of different flows through the prefetch cache queues and allocating them to different processor cores for concurrent processing, the problem that scheduling blockage during packet processing lowers the packet processing efficiency of the entire CPU is solved, and the packet processing efficiency of the CPU is effectively improved.
FIG. 2 is a flowchart of Embodiment 2 of the packet processing method provided by the present invention. On the basis of the embodiment shown in FIG. 1, after the packets in each prefetch cache queue are allocated to different processor cores for processing, the method further includes: separately recording, in the order-preserving queue, the processing information of the packets in each prefetch cache queue. In this embodiment, recording the processing information of the packets in each prefetch cache queue allows the processing progress of the packets, as well as the idle state of the processor cores, to be queried later, so that when new packets are processed they can be allocated to different processor cores according to the idle state of the cores, reducing core idleness and improving packet processing efficiency.
As shown in FIG. 2, in practice the to-be-processed packets may belong to many flow categories. In this embodiment it is assumed that flows of two categories are in the cache queue. If the flow categories of the to-be-processed packets include a first-type flow and a second-type flow, and the packets of these two flows are buffered in the same cache queue, the specific steps of the packet processing method are as follows:
S201: Take the packets out of the cache queue.
S202: Put all packets whose flow category is the first-type flow into one prefetch cache queue, and put all packets whose flow category is the second-type flow into another prefetch cache queue different from that of the first-type flow.
In this embodiment, the number of prefetch cache queues is greater than or equal to one; the specific number depends on the configuration of the CPU, mainly weighing hardware resources against data concurrency requirements. Optionally, the number of prefetch cache queues is the same as the number of processor cores.
In this embodiment, at least two prefetch cache queues may be chosen: the packets of the first-type flow are put in order into one of the prefetch cache queues, and the packets of the second-type flow are put in order into another prefetch cache queue different from that of the first-type flow, to be called by the order-preserving control unit for concurrent packet processing.
S203: Allocate the packets of the first-type flow in the prefetch cache queue to a first processor core for processing, record packet processing information of the first-type flow in the order-preserving queue, allocate the packets of the second-type flow in another prefetch cache queue to a second processor core for processing, and record packet processing information of the second-type flow in the order-preserving queue.
In this embodiment, the to-be-processed packets placed separately in the two prefetch cache queues are allocated to two different processor cores for processing, and the processing information of each packet is recorded, completing the processing of the packets by the order-preserving control unit.
In Embodiment 1 and Embodiment 2 above, the order-preserving control unit includes at least: at least one cache queue, at least one prefetch cache queue, and the order-preserving queue, so as to perform order-preserving processing on packets entering the CPU.
According to the packet processing method provided by this embodiment, the to-be-processed packets in a cache queue are taken out, the packets of the first-type flow are put into one prefetch cache queue, the packets of the other-type flow are put into another prefetch cache queue, the packets in the two prefetch cache queues are then allocated to different processor cores for processing, and the processing information of the packets in each prefetch cache queue is separately recorded in the order-preserving queue. In the prior art, packets are taken directly out of the cache queue for processing, and packets of other flows in the same cache queue cannot be processed until the packets of the previous flow have finished. By separating packets of different flows through the prefetch cache queues and allocating them to different processor cores for concurrent processing, scheduling blockage within the same cache queue during packet processing and the idling of multiple processor cores are effectively resolved, and the packet processing efficiency of the CPU is effectively improved.
On the basis of the above embodiments, an example is given below to describe the packet processing method provided by the present invention in detail.
FIG. 3 is a processing schematic diagram of Embodiment 2 of the packet processing method provided by the present invention. As shown in FIG. 3, the packet processing method of the present invention is described below using an example of the cache queues, prefetch cache queues, and order-preserving queue in the order-preserving control unit. The specific processing is as follows:
The task cache includes two cache queues Q_0 and Q_1, where scheduling between Q_0 and Q_1 is set to a priority queue (PQ) mechanism. The to-be-processed packets include flows of three categories, assumed to be T-flow, K-flow, and G-flow packets. It can be understood that the T, K, and G flows are merely exemplary names used to distinguish packet types; they may represent any types of packets in the prior art and do not constitute any limitation on the embodiments of the present invention. T-flow packets T1 and T2 and K-flow packets K1 and K2 are added in sequence to cache queue Q_0; G-flow packets G1 and G2 are added in sequence to queue Q_1, and the order-preserving mode of the current order-preserving control unit is configured as an atomic operation.
In this embodiment, the order-preserving control unit establishes three prefetch cache queues, and a round-robin (RR) mechanism may be used between the prefetch cache queues. The order-preserving control unit takes all the to-be-processed packets out of cache queues Q_0 and Q_1, and according to the flow categories of the to-be-processed packets puts T-flow packets T1 and T2 into the first prefetch cache queue, K-flow packets K1 and K2 into the second prefetch cache queue, and G-flow packets G1 and G2 into the third prefetch cache queue.
After the above operations are completed, all the to-be-processed packets in the task cache are classified by flow category into different prefetch cache queues, that is, the T, K, and G flows are put into the corresponding first, second, and third prefetch cache queues respectively.
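The draining-and-classification step just described can be illustrated with a short sketch. The model is an assumption for illustration only (the patent's order-preserving control unit does this in hardware): cache queues are deques, and the flow category is read from the packet label's first character.

```python
from collections import deque

def classify(cache_queues, flow_of):
    """Take all packets out of the cache queues and store them into
    per-flow prefetch cache queues, preserving arrival order."""
    prefetch = {}
    for q in cache_queues:               # drain Q_0, Q_1, ...
        while q:
            pkt = q.popleft()
            prefetch.setdefault(flow_of(pkt), deque()).append(pkt)
    return prefetch

q0 = deque(["T1", "T2", "K1", "K2"])
q1 = deque(["G1", "G2"])
prefetch = classify([q0, q1], flow_of=lambda p: p[0])
# the T, K, and G packets now sit in three separate prefetch cache queues
```

The `flow_of` callback stands in for whatever flow-identification the hardware applies; any deterministic classifier yields the same per-queue structure.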
Optionally, the CPU may include at least a first processor core, a second processor core, and a third processor core. The order-preserving control unit may then allocate the T-flow packets in the first prefetch cache queue to the first processor core for processing, the K-flow packets in the second prefetch cache queue to the second processor core for processing, and the G-flow packets in the third prefetch cache queue to the third processor core for processing.
That is, during processing, if packet T1 is blocked, K-flow packet K1 can be processed concurrently by the second processor core, and the processing information of packet K1 is recorded in the order-preserving queue, which solves the flow blockage problem within the same cache queue.
If T1 is blocked, G-flow packet G1 can be processed concurrently by the third processor core, and the processing information of packet G1 is recorded in the order-preserving queue, which solves the scheduling blockage problem between different cache queues.
In particular, when strict priority is set between the cache queues (the priority of Q_0 is higher than that of Q_1), to resolve scheduling blockage between different cache queues it is not necessary to wait until all T-flow and K-flow packets have been taken out before taking out the G-flow packets; the G-flow packets can be taken out in advance and allocated to an idle processor core for processing. Making full use of idle processor cores to process the G-flow in the low-priority queue Q_1 improves the concurrent processing capability of the CPU without preempting the processor core resources of the T-flow blocked in the high-priority queue.
It can be understood that the terms first, second, and third above do not indicate an order of the processor cores; they are only used to distinguish the processor cores and do not constitute any limitation on the embodiments of the present invention.
According to the packet processing method provided by this embodiment, the to-be-processed packets in cache queues Q_0 and Q_1 are taken out, the T-flow packets are put into the first prefetch cache queue, the K-flow packets into the second prefetch cache queue, and the G-flow packets into the third prefetch cache queue, and the packets in the three prefetch cache queues are then allocated to different processor cores for processing. By adding prefetch cache queues between the task cache and the order-preserving queue, packets of different flows are separated and allocated to different processor cores for concurrent processing, improving the efficiency of concurrent processing of multiple flows and solving the flow blockage and scheduling blockage problems. Especially when blockage occurs, task concurrency can be increased, the CPU's capability of processing order-preserved concurrent flows is enhanced, and the packet processing efficiency of the CPU is effectively improved.
In particular, in all of the above embodiments, during processing, several packets are prefetched from the cache queues of the task cache and put into different prefetch cache queues according to their flow categories. Packets are not necessarily taken out one by one starting from the head of a single cache queue of the task cache and put into the prefetch cache queues; instead, packets may be taken out of multiple cache queues in parallel, or all packets in all cache queues may be taken out first and then put into different prefetch cache queues according to their flow categories.
Further, the packets of each flow are obtained from the multiple prefetch cache queues by flow category, and the packets of each flow are allocated to the processor cores for processing either randomly or according to the current idle state of the processor cores, and are then recorded in the order-preserving queue, which both continues to preserve flow order and prevents flow blockage and scheduling blockage.
In a prefetch cache queue, packets are both divided by flow category and order-preserved on the first-in-first-out (FIFO) principle; therefore, the processing method of this application does not break the packet ordering within a single flow.
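This intra-flow FIFO property can be stated as a small executable check — a sketch under the illustrative assumption that each packet carries its flow label as a prefix:

```python
from collections import deque

def split_by_flow(arrivals, flow_of):
    """FIFO split: within each flow, packets keep their arrival order."""
    queues = {}
    for pkt in arrivals:
        queues.setdefault(flow_of(pkt), deque()).append(pkt)
    return queues

arrivals = ["T1", "K1", "T2", "K2", "T3"]
queues = split_by_flow(arrivals, flow_of=lambda p: p[0])

# per-flow order in each prefetch queue equals the per-flow arrival order
for flow, q in queues.items():
    assert list(q) == [p for p in arrivals if p[0] == flow]
```

The check holds for any interleaving of flows, because appending in arrival order can never reorder two packets of the same flow.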
The number of prefetch cache queues established in the order-preserving control unit of the CPU depends on the configuration of the CPU, mainly weighing hardware resources against concurrency requirements. It is not limited to one prefetch cache queue per flow; that is, the same prefetch cache queue may contain packets of several types of flows. Under otherwise equal conditions, however, the more prefetch cache queues there are, the more dispersed the packets of different flows become, and the better the concurrent processing capability for the flows' packets.
Optionally, as another embodiment of the present invention, the task cache includes two cache queues Q_0 and Q_1, where T-flow packets T1 and T2 and K-flow packets K1 and K2 are added in sequence to cache queue Q_0; G-flow packets G1 and G2 and H-flow packets H1 and H2 are added in sequence to queue Q_1, and the order-preserving mode of the current order-preserving control unit is configured as an atomic operation.
Optionally, the order-preserving control unit establishes three prefetch cache queues. The order-preserving control unit takes all the to-be-processed packets out of cache queues Q_0 and Q_1, and according to the flow categories of the to-be-processed packets puts T-flow packets T1 and T2 into the first prefetch cache queue, K-flow packets K1 and K2 into the second prefetch cache queue, and G-flow packets G1 and G2 and H-flow packets H1 and H2 into the third prefetch cache queue; that is, the T, K, G, and H flows are put into the corresponding first, second, and third prefetch cache queues.
If the CPU includes at least three processor cores, the order-preserving control unit may allocate the T-flow packets in the first prefetch cache queue to one processor core for processing, the K-flow packets in the second prefetch cache queue to another processor core for processing, and the G-flow and H-flow packets in the third prefetch cache queue to a third processor core for processing.
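When there are fewer prefetch cache queues than flow categories, several flows share one queue, as the G and H flows do above. One common way to pick the shared queue — an illustrative choice, not one specified by the patent — is a deterministic hash of the flow ID modulo the queue count, which guarantees all packets of one flow land in the same queue:

```python
import zlib

def queue_for(flow_id: str, n_queues: int) -> int:
    """Map a flow to one of n prefetch cache queues. The mapping is
    deterministic, so every packet of a flow maps to the same queue,
    and distinct flows may share a queue when flows outnumber queues."""
    return zlib.crc32(flow_id.encode()) % n_queues

# the same flow always maps to the same queue index
assert queue_for("G", 3) == queue_for("G", 3)
# every flow maps to a valid queue index
assert all(0 <= queue_for(f, 3) < 3 for f in ("T", "K", "G", "H"))
```

A deterministic checksum such as CRC-32 is used here instead of Python's built-in `hash`, because the built-in string hash is randomized per process and would not give a stable flow-to-queue mapping.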
FIG. 4 is a schematic structural diagram of Embodiment 1 of the packet processing apparatus provided by the present invention. As shown in FIG. 4, the packet processing apparatus 40 includes a processing module 41 and an allocation module 42, where the processing module 41 is configured to store the to-be-processed packets in a cache queue into different prefetch cache queues according to the flow categories of the packets, where a prefetch cache queue is a cache pre-established between the task cache and the order-preserving queue; and the allocation module 42 is configured to allocate the packets in each prefetch cache queue to different processor cores for processing.
The packet processing apparatus provided by this embodiment is configured to execute the technical solution of the method embodiment shown in FIG. 1. The processing module takes the to-be-processed packets out of the cache queue and stores them into different prefetch cache queues according to the flow categories of the packets, and the allocation module allocates the packets in each prefetch cache queue to different processor cores for processing. In the prior art, packets are taken directly out of the cache queue for processing, and packets of other flows in the same cache queue cannot be processed until the packets of the previous flow have finished. By separating packets of different flows through the prefetch cache queues and allocating them to different processor cores for concurrent processing, the problem that scheduling blockage during packet processing lowers the packet processing efficiency of the entire CPU is solved, and the packet processing efficiency of the CPU is effectively improved.
FIG. 5 is a schematic structural diagram of Embodiment 2 of the packet processing apparatus provided by the present invention. As shown in FIG. 5, in Embodiment 2 of the packet processing apparatus of the present invention, on the basis of the above embodiment, the apparatus 40 further includes a recording module 43, configured to separately record, in the order-preserving queue, the processing information of the packets in each prefetch cache queue.
The number of prefetch cache queues established by the processing module 41 is greater than or equal to one.
Further, the flow categories of the to-be-processed packets include a first-type flow and a second-type flow, and the processing module 41 is specifically configured to: take the packets out of the cache queue; put all packets whose flow category is the first-type flow into one prefetch cache queue; and put all packets whose flow category is the second-type flow into another prefetch cache queue different from that of the first-type flow.
The allocation module 42 is further configured to allocate the packets of the first-type flow in the prefetch cache queue to a first processor core for processing, and the recording module 43 records packet processing information of the first-type flow in the order-preserving queue.
The allocation module 42 is further configured to allocate the packets of the second-type flow in another prefetch cache queue to a second processor core for processing, and the recording module 43 records packet processing information of the second-type flow in the order-preserving queue.
The packet processing apparatus provided by this embodiment is configured to execute the technical solution of any one of the method embodiments shown in FIG. 1 to FIG. 3; its execution principle and technical effects are similar and are not described again here.
FIG. 6 is a schematic structural diagram of an embodiment of the order-preserving control unit provided by the present invention. As shown in FIG. 6, the order-preserving control unit 50 includes a processor 51 and a memory 52. The processor 51 is configured to take the to-be-processed packets out of a cache queue and store them into different prefetch cache queues according to the flow categories of the packets, where a prefetch cache queue is a cache established by the processor 51 between the task cache and the order-preserving queue; the processor 51 is further configured to allocate the packets in each prefetch cache queue to different processor cores for processing.
Optionally, the unit further includes the memory 52, configured to separately record, in the order-preserving queue, the processing information of the packets in each prefetch cache queue.
The order-preserving control unit provided by this embodiment is configured to execute the technical solution of the method shown in FIG. 1; its implementation principle and technical effects are similar. The processor takes the to-be-processed packets out of the cache queue, stores them into different prefetch cache queues according to the flow categories of the packets, and allocates the packets in each prefetch cache queue to different processor cores for processing, while the memory separately records, in the order-preserving queue, the processing information of the packets in each prefetch cache queue. By separating packets of different flows through the prefetch cache queues and allocating them to different processor cores for concurrent processing, the problem that scheduling blockage during packet processing lowers the packet processing efficiency of the entire CPU is effectively solved, and the packet processing efficiency of the CPU is effectively improved.
On the basis of the above embodiment, the number of prefetch cache queues established by the processor 51 is greater than or equal to one.
Optionally, if the flow categories of the to-be-processed packets include a first-type flow and a second-type flow, the processor 51 is specifically configured to: take the packets out of the cache queue; put all packets whose flow category is the first-type flow into one prefetch cache queue; and put all packets whose flow category is the second-type flow into another prefetch cache queue different from that of the first-type flow.
The processor 51 is further configured to allocate the packets of the first-type flow in the prefetch cache queue to a first processor core for processing, and the memory 52 records packet processing information of the first-type flow in the order-preserving queue.
The processor 51 is further configured to allocate the packets of the second-type flow in another prefetch cache queue to a second processor core for processing, and the memory 52 records packet processing information of the second-type flow in the order-preserving queue.
The order-preserving control unit provided by this embodiment is configured to execute the technical solution of any one of the method embodiments shown in FIG. 1 to FIG. 3; its implementation principle and technical effects are similar and are not described again here.
FIG. 7 is a schematic structural diagram of an embodiment of the CPU provided by the present invention. As shown in FIG. 7, the CPU 60 includes the packet processing apparatus 61 provided by either of the embodiments in FIG. 4 and FIG. 5 and at least two processor cores 62.
The CPU provided by this embodiment is configured to execute the packet processing method provided by any one of the embodiments shown in FIG. 1 to FIG. 3; its implementation principle and technical effects are similar, reference may be made to the other embodiments of the present invention, and details are not described again here.
With the CPU provided by the embodiments of the present invention, the packet processing apparatus 61 takes the to-be-processed packets out of the cache queue, stores them into different prefetch cache queues according to the flow categories of the packets, and then allocates the packets in each prefetch cache queue to different processor cores 62 for processing. Compared with the prior art, in which packets are taken directly out of the cache queue for processing, the prefetch cache queues separate packets of different flows and allocate them to different processor cores for concurrent processing, solving the problem that scheduling blockage during packet processing lowers the packet processing efficiency of the entire CPU, and effectively improving the packet processing efficiency of the CPU.
A person of ordinary skill in the art can understand that all or part of the steps of the above method embodiments may be implemented by program instructions and related hardware. The aforementioned program may be stored in a computer-readable storage medium. When executed, the program performs the steps of the above method embodiments; the aforementioned storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are merely intended to describe the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some or all of the technical features thereof, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (11)

  1. A packet processing method, comprising:
    storing to-be-processed packets in a cache queue into different prefetch cache queues according to flow categories of the packets, wherein a prefetch cache queue is a cache pre-established between a task cache and an order-preserving queue; and
    allocating the packets in each prefetch cache queue to different processor cores for processing.
  2. The method according to claim 1, wherein after the allocating the packets in each prefetch cache queue to different processor cores for processing, the method further comprises:
    separately recording, in the order-preserving queue, processing information of the packets in each prefetch cache queue.
  3. The method according to claim 2, wherein the number of prefetch cache queues is greater than or equal to one.
  4. The method according to any one of claims 1 to 3, wherein the flow categories of the to-be-processed packets comprise a first-type flow and a second-type flow, and the taking the to-be-processed packets out of the cache queue and putting the packets into different prefetch cache queues according to the flow categories of the packets comprises:
    taking the packets out of the cache queue; and
    putting all packets whose flow category is the first-type flow into one prefetch cache queue, and putting all packets whose flow category is the second-type flow into another prefetch cache queue different from that of the first-type flow.
  5. The method according to claim 4, wherein the allocating the packets in each prefetch cache queue to different processor cores for processing and separately recording, in the order-preserving queue, the processing information of the packets in each prefetch cache queue comprises:
    allocating the packets of the first-type flow in the prefetch cache queue to a first processor core for processing, recording packet processing information of the first-type flow in the order-preserving queue, allocating the packets of the second-type flow in another prefetch cache queue to a second processor core for processing, and recording packet processing information of the second-type flow in the order-preserving queue.
  6. A packet processing apparatus, comprising:
    a processing module, configured to store to-be-processed packets in a cache queue into different prefetch cache queues according to flow categories of the packets, wherein a prefetch cache queue is a cache pre-established between a task cache and an order-preserving queue; and
    an allocation module, configured to allocate the packets in each prefetch cache queue to different processor cores for processing.
  7. The apparatus according to claim 6, further comprising:
    a recording module, configured to separately record, in the order-preserving queue, processing information of the packets in each prefetch cache queue.
  8. The apparatus according to claim 7, wherein the number of prefetch cache queues established by the processing module is greater than or equal to one.
  9. The apparatus according to any one of claims 6 to 8, wherein the flow categories of the to-be-processed packets comprise a first-type flow and a second-type flow, and the processing module is specifically configured to:
    take the packets out of the cache queue; and
    put all packets whose flow category is the first-type flow into one prefetch cache queue, and put all packets whose flow category is the second-type flow into another prefetch cache queue different from that of the first-type flow.
  10. The apparatus according to claim 9, wherein:
    the allocation module is further configured to allocate the packets of the first-type flow in the prefetch cache queue to a first processor core for processing, and the recording module records packet processing information of the first-type flow in the order-preserving queue; and
    the allocation module is further configured to allocate the packets of the second-type flow in another prefetch cache queue to a second processor core for processing, and the recording module records packet processing information of the second-type flow in the order-preserving queue.
  11. A central processing unit (CPU), comprising the packet processing apparatus according to any one of claims 6 to 10 and at least two processor cores.
PCT/CN2015/083601 2014-07-25 2015-07-08 Packet processing method and apparatus WO2016011894A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410360849.X 2014-07-25
CN201410360849.XA CN105337896A (zh) Packet processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2016011894A1 true WO2016011894A1 (zh) 2016-01-28






Also Published As

Publication number Publication date
CN105337896A (zh) 2016-02-17


Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 15824596; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 15824596; Country of ref document: EP; Kind code of ref document: A1)