WO2021120374A1 - Message processing method, processing unit, and virtual private network server (报文处理方法、处理单元及虚拟专用网络服务器) - Google Patents

Info

Publication number
WO2021120374A1
WO2021120374A1 (PCT/CN2020/074953)
Authority
WO
WIPO (PCT)
Prior art keywords
message
thread
preset
messages
processing
Prior art date
Application number
PCT/CN2020/074953
Other languages
English (en)
French (fr)
Inventor
***达
曹志文
Original Assignee
厦门网宿有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 厦门网宿有限公司
Priority to EP20829297.9A (published as EP3860062A4)
Priority to US17/153,814 (published as US20210306302A1)
Publication of WO2021120374A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/02 Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L63/0272 Virtual private networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/546 Message passing systems or structures, e.g. queues
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/622 Queue service order
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L47/6255 Queue scheduling characterised by scheduling criteria for service slots or service orders queue load conditions, e.g. longest queue first
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/02 Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L63/0227 Filtering policies
    • H04L63/0245 Filtering by information in the payload
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/04 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428 Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload

Definitions

  • The embodiments of the present application relate to the field of network communication technology, and in particular to a message processing method, a processing unit, and a virtual private network server.
  • VPN: virtual private network.
  • When the VPN server receives a user request, it needs to process the request packet sent by the user and forward it to the internal network of the government or enterprise organization; likewise, internal data obtained from that internal network must undergo certain processing by the VPN server before being returned to the user terminal.
  • In the related art, the VPN server queues multiple received data packets, and a single processing thread processes each packet in turn until all packets are processed, after which the processed packets are forwarded. In this way, the received data packets are processed and transmitted.
  • The purpose of the embodiments of the present application is to provide a message processing method, processing unit, and virtual private network server that improve the efficiency of message transmission while ensuring that messages are transmitted in order.
  • The embodiments of the present application provide a message processing method applied to a virtual private network server.
  • The server includes a packet receiving thread, multiple processing threads, and a packet sending thread.
  • The method includes: the packet receiving thread receives messages and sequentially distributes them to the multiple processing threads in a preset order; the multiple processing threads process the received messages in parallel; and the packet sending thread sequentially obtains the processed messages from the multiple processing threads in the same preset order.
  • The embodiments of the present application also provide a message processing unit, including a receiving module, a distribution module, multiple processing modules, and an acquisition module. The receiving module is used to receive messages; the distribution module is used to sequentially distribute messages to the multiple processing modules in a preset order; the multiple processing modules are used to process the received messages in parallel; and the acquisition module is used to sequentially obtain the processed messages from the multiple processing modules in the preset order.
  • An embodiment of the present application also provides a virtual private network server, including: a plurality of the foregoing message processing units.
  • An embodiment of the present application also provides a virtual private network server, including: at least one processor; and a memory communicatively connected with the at least one processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the foregoing message processing method.
  • An embodiment of the present application also provides a storage medium storing a computer program which, when executed by a processor, implements the foregoing message processing method.
  • Compared with the related art, the server in the virtual private network includes a packet receiving thread, a plurality of processing threads, and a packet sending thread.
  • The packet receiving thread receives messages and distributes them to the multiple processing threads in a preset order; after the processing threads process the received messages, the packet sending thread obtains the processed messages from the processing threads one at a time in the same preset order.
  • This ensures that the sequence of the processed messages is consistent with the sequence in which the packet receiving thread received them, avoiding message disorder.
  • Meanwhile, the multiple processing threads process the received messages in parallel, which improves processing efficiency, shortens processing time, and thus improves the efficiency of message transmission.
  • Sequentially distributing messages to multiple processing threads in a preset order includes: the packet receiving thread sequentially distributes messages, in the preset order, to preset first message queues respectively corresponding to the multiple processing threads. Processing the received messages in parallel includes: the multiple processing threads each obtain messages from their corresponding preset first message queue, process them in parallel, and store the processed messages in their corresponding preset second message queue. Sequentially obtaining the processed messages includes: the packet sending thread obtains the processed messages from the preset second message queues, one queue at a time, in the preset order.
  • In this way, the efficiency with which the packet receiving thread distributes messages to the processing threads is improved.
  • The messages in each first message queue are kept in distribution order, so no disorder is introduced.
  • A second message queue is established between the packet sending thread and each processing thread to temporarily store processed messages, which prevents processed messages from occupying the processing thread and improves the processing thread's efficiency.
  • Before distribution, the method further includes: determining that the number of messages in the preset first message queue to be distributed to has not reached that queue's preset maximum number of stored messages.
  • Alternatively, the packet receiving thread may send enqueue requests to the target preset first message queue repeatedly, until the message to be distributed is accepted by the queue.
  • After the packet sending thread obtains the processed packets from the preset second message queues in the preset order, the method also includes: adding the address of the memory space occupied by each processed packet to a preset queue; when the packet receiving thread next receives a message, it obtains from the preset queue the address of a memory space in which to store the message. In this way, the packet receiving thread first obtains memory space from the preset queue to hold the received message, which reduces the need to apply for new memory space on every received message and improves the efficiency of the packet receiving thread.
  • the preset first message queue and the preset second message queue are both lock-free queues. Compared with using a locked queue, the use of a lock-free queue eliminates the need for locking and unlocking operations, which improves the efficiency of message transmission.
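To make the lock-free idea concrete, the following is a minimal single-producer/single-consumer ring buffer in Python. The class name, capacity, and message labels are ours, not the patent's; with exactly one producer and one consumer, `tail` is written only by the producer and `head` only by the consumer, which is why such a queue can dispense with locks.

```python
class SPSCQueue:
    """Single-producer/single-consumer ring buffer (illustrative sketch)."""

    def __init__(self, capacity):
        self.buf = [None] * (capacity + 1)   # one slot kept empty to tell full from empty
        self.head = 0                        # advanced only by the consumer
        self.tail = 0                        # advanced only by the producer

    def push(self, item):
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:
            return False                     # queue full; caller decides what to do
        self.buf[self.tail] = item
        self.tail = nxt
        return True

    def pop(self):
        if self.head == self.tail:
            return None                      # queue empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item

q = SPSCQueue(2)
q.push("m1")
q.push("m2")
# a third push would return False: the capacity of 2 is reached
```

In a real server this structure would typically be written in C with atomic or memory-barrier operations on `head` and `tail`; the Python version only shows the index discipline.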
  • After the packet sending thread obtains a processed message, the method also includes: judging whether the processed message is abnormal; if it is not abnormal, forwarding it; if it is abnormal, discarding it. In this way, abnormal messages are eliminated and improperly processed messages are never forwarded.
  • The packet receiving thread, the multiple processing threads, and the packet sending thread are bound as a group of working threads, and each working thread group is bound to a user channel. Before the packet receiving thread receives a message, the method also includes: determining the user channel corresponding to the message, and determining the working thread group bound to that user channel. "The packet receiving thread receives the message" is then replaced with: the packet receiving thread in the working thread group bound to the message's user channel receives the message.
  • Fig. 1 is a flowchart of a message processing method according to the first embodiment of the present application;
  • Fig. 2 is a schematic diagram of packet transmission among the packet receiving thread, the multiple processing threads, and the packet sending thread in the first embodiment of the present application;
  • Fig. 3 is a flowchart of a message processing method according to the second embodiment of the present application;
  • Fig. 4 is a schematic diagram of a user connection bound to a working thread group in the second embodiment of the present application;
  • Fig. 5 is a schematic structural diagram of a message processing unit in the third embodiment of the present application;
  • Fig. 6 is a schematic structural diagram of a virtual private network server in the fourth embodiment of the present application.
  • The first embodiment of the present application relates to a message processing method applied to a virtual private network server.
  • The server includes a packet receiving thread, multiple processing threads, and a packet sending thread.
  • The method includes: the packet receiving thread receives messages and sequentially distributes them to the multiple processing threads in a preset order; the multiple processing threads process the received messages in parallel; the packet sending thread sequentially obtains the processed messages from the multiple processing threads in the preset order.
  • In this way, the efficiency of message transmission is improved.
  • the first embodiment relates to a message processing method, including:
  • Step 101 The packet receiving thread receives the message, and sequentially distributes the message to multiple processing threads in a preset order.
  • Specifically, the packet receiving thread receives messages forwarded by the kernel-mode protocol stack.
  • A received message may have been sent by the internal network or by the external network; there is no restriction on the direction of the message.
  • The packet receiving thread distributes packets to the multiple processing threads in a preset order. For example, suppose the preset order is: processing thread 1, processing thread 2.
  • The packet receiving thread then distributes the first message to processing thread 1, the second message to processing thread 2, the third message to processing thread 1, the fourth message to processing thread 2, and so on.
  • The above takes two processing threads as an example; in actual applications the number of processing threads is set as required, and messages are still distributed in the preset order in the same manner, so repeated examples are omitted.
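The round-robin choice described above can be sketched in a few lines of Python. Packet labels and the function name are illustrative; the point is that the target thread is determined purely by the packet's sequence number, which is what makes the "preset order" reproducible on the sending side.

```python
from collections import deque

def dispatch(packets, num_workers):
    """Distribute packets to per-worker queues in strict round-robin order."""
    queues = [deque() for _ in range(num_workers)]
    for seq, pkt in enumerate(packets):
        queues[seq % num_workers].append(pkt)   # worker chosen by sequence number
    return queues

queues = dispatch(["p0", "p1", "p2", "p3", "p4"], 2)
# queues[0] now holds p0, p2, p4 and queues[1] holds p1, p3
```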
  • A first message queue can be established between the packet receiving thread and each processing thread.
  • As shown in Fig. 2, queue 0 is established between the packet receiving thread and processing thread 1, and queue 1 is established between the packet receiving thread and processing thread 2.
  • Queue 0 and queue 1 temporarily store the messages distributed by the packet receiving thread: after receiving packets, the packet receiving thread distributes them, in the preset order, to the first message queues corresponding to the processing threads.
  • Taking the first message queues established in Fig. 2 as an example, the packet receiving thread distributes the first message to queue 0 (corresponding to processing thread 1), the second message to queue 1 (corresponding to processing thread 2), the third message again to queue 0, the fourth message to queue 1, and so on.
  • The above again takes two processing threads as an example; in actual applications the number of processing threads is set as required.
  • The number of first message queues established equals the number of processing threads; neither the number of processing threads nor the number of first message queues is limited here.
  • Before the packet receiving thread distributes a packet to a preset first message queue, it determines whether the number of packets in that queue has reached the queue's preset maximum number of stored messages.
  • For example, if queue 0 between the packet receiving thread and processing thread 1 stores at most 6 messages, then before distributing a message to queue 0 the packet receiving thread checks the current number of messages in it; if that number has reached the maximum of 6, queue 0 cannot store a new packet.
  • If the maximum has not been reached, the message can be distributed to the first message queue; if the number of messages in the target queue has reached the maximum, the message to be distributed fails to enqueue.
  • In that case, the packet receiving thread may discard the message to be distributed and receive a new message for distribution.
  • The newly received message is still distributed to the first message queue whose enqueue just failed, which ensures that the packet receiving thread continues to distribute messages to the processing threads in the preset order.
  • The following example illustrates how the packet receiving thread discards packets and receives new ones:
  • Suppose the packet receiving thread wants to distribute the fifth packet to queue 0 (processing thread 1) and detects that the number of packets in queue 0 has reached the storage maximum. The packet receiving thread discards the fifth packet and receives the sixth, which is likewise distributed to queue 0.
  • If queue 0 is still full, the sixth packet is also discarded and the seventh is received, and so on, until the number of packets in queue 0 falls below the maximum and the current packet is distributed to queue 0 successfully; only after that enqueue succeeds is the next packet distributed to queue 1 (processing thread 2).
  • Alternatively, when the number of messages in the target first message queue has reached the maximum, the packet receiving thread may neither discard nor distribute the message, but wait until the queue can accept it.
  • The message is then distributed to the target preset first message queue, which likewise ensures that distribution from the packet receiving thread to the processing threads stays in order.
  • While receiving a packet, the packet receiving thread may also mark it with a queue id indicating the queue it should be distributed to, so that the distribution order can be checked later and errors in the distribution process that would put messages out of order can be detected.
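The drop-on-full policy above amounts to a non-blocking enqueue attempt. The sketch below uses Python's bounded `queue.Queue` as a stand-in for the first message queue; the maximum of 6 matches the example, and `try_enqueue` is our name for the operation, not the patent's.

```python
import queue

q0 = queue.Queue(maxsize=6)   # queue 0 with the example maximum of 6 messages

def try_enqueue(q, pkt):
    """Enqueue pkt unless the queue is full; the caller drops it on failure."""
    try:
        q.put_nowait(pkt)
        return True
    except queue.Full:
        return False

results = [try_enqueue(q0, f"pkt{i}") for i in range(8)]
# the first six enqueues succeed; pkt6 and pkt7 are dropped by the caller
```

The "wait instead of drop" variant mentioned below would simply call `q.put(pkt)` (blocking) in place of `put_nowait`.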
  • Step 102 After receiving the message, multiple processing threads perform parallel processing on the received message.
  • Specifically, when a processing thread receives a message transmitted from the external network to the internal network, it decrypts and decapsulates the message; when it receives a message transmitted from the internal network to the external network, it encrypts and encapsulates the message.
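The direction-dependent processing can be sketched as follows. The XOR transform and the key are toy placeholders standing in for the patent's encryption/decryption and encapsulation steps (the patent does not specify a cipher); XOR is used only because it is its own inverse, so the same routine illustrates both directions.

```python
KEY = 0x5A  # illustrative key, not from the patent

def process(payload: bytes, inbound: bool) -> bytes:
    """Toy per-direction processing: 'decrypt' inbound, 'encrypt' outbound.

    XOR is self-inverse, so applying it twice recovers the original bytes.
    """
    return bytes(b ^ KEY for b in payload)

cipher = process(b"hello", inbound=False)   # outbound: "encrypt and encapsulate"
plain = process(cipher, inbound=True)       # inbound: "decrypt and decapsulate"
# plain == b"hello"
```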
  • Step 103 The packet sending thread sequentially obtains processed packets from multiple processing threads in a preset order.
  • Specifically, the packet sending thread obtains the processed packets from the multiple processing threads in the same preset order in which the packet receiving thread distributed them.
  • Continuing the example, the packet receiving thread distributes in the preset order: processing thread 1, processing thread 2.
  • When the packet sending thread obtains the processed messages, it obtains a message from processing thread 1 first, from processing thread 2 second, from processing thread 1 third, from processing thread 2 fourth, and so on. After obtaining a complete data packet, it forwards the processed message to the kernel-mode protocol stack, which forwards it onward. This guarantees that the sequence of the processed messages matches the sequence of distribution by the packet receiving thread, keeping the messages in order.
  • A second message queue can be established between each processing thread and the packet sending thread.
  • As shown in Fig. 2, queue 0 is established between processing thread 1 and the packet sending thread, and queue 1 is established between processing thread 2 and the packet sending thread.
  • The established queue 0 and queue 1 temporarily store processed messages: processing thread 1 enqueues its processed messages to queue 0, and processing thread 2 enqueues its processed messages to queue 1.
  • If a second message queue has reached its maximum number of stored messages, the processing thread waits until the number of processed messages in the queue falls below the maximum, and then puts the processed message into the second message queue.
  • When the packet sending thread obtains the processed messages from the multiple processing threads in the preset order, it specifically obtains them from the second message queues corresponding to the processing threads.
  • Taking the second message queues established in Fig. 2 as an example, the packet sending thread obtains a processed message from queue 0 (corresponding to processing thread 1) first, from queue 1 (corresponding to processing thread 2) second, from queue 0 third, from queue 1 fourth, and so on, until a complete packet of the message has been obtained; the obtained processed message is then forwarded to the kernel-mode protocol stack.
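Putting the pieces together, the following synchronous sketch shows why collecting in the same round-robin order as distribution preserves the global message order. `.upper()` stands in for the per-thread processing, and all names are illustrative; in the real server the sending thread would wait on the next queue rather than spin.

```python
from collections import deque

def pipeline(packets, num_workers):
    """Round-robin fan-out, stand-in processing, round-robin fan-in."""
    out_queues = [deque() for _ in range(num_workers)]
    for seq, pkt in enumerate(packets):
        # fan-out and "processing" done together in this synchronous sketch
        out_queues[seq % num_workers].append(pkt.upper())
    # The sender drains the queues in the same preset order the receiver used.
    result, seq = [], 0
    while any(out_queues):
        q = out_queues[seq % num_workers]
        if q:
            result.append(q.popleft())
            seq += 1
    return result

sent = pipeline(["a", "b", "c", "d", "e"], 2)
# sent == ["A", "B", "C", "D", "E"]: output order matches arrival order
```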
  • After the packet sending thread obtains a processed message in the preset order, it may also determine whether the message is abnormal; when the processed message is not abnormal, it is forwarded, and when it is abnormal, it is discarded.
  • When the message was transmitted from the external network to the internal network, an abnormal message may be one that was not decrypted and decapsulated; when the message was transmitted from the internal network to the external network, an abnormal message may be one that was not encrypted and encapsulated.
  • In this way, abnormal messages are eliminated and unprocessed messages are never forwarded.
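The sender-side check can be sketched as a simple filter. The `processed` flag is a hypothetical marker for "decryption/encryption completed"; the patent describes the check abstractly and does not name such a field.

```python
def filter_packets(packets):
    """Split packets into those to forward and abnormal ones to discard."""
    forwarded, dropped = [], []
    for pkt in packets:
        # an "abnormal" packet is one whose processing did not complete
        (forwarded if pkt.get("processed") else dropped).append(pkt)
    return forwarded, dropped

fwd, drop = filter_packets([
    {"id": 1, "processed": True},
    {"id": 2, "processed": False},   # e.g. decryption failed
    {"id": 3, "processed": True},
])
# fwd contains ids 1 and 3; drop contains id 2
```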
  • In summary, the server in the virtual private network includes a packet receiving thread, a plurality of processing threads, and a packet sending thread.
  • The packet receiving thread receives messages and distributes them to the multiple processing threads in a preset order; after the processing threads process the received messages, the packet sending thread obtains the processed messages from the processing threads one at a time in the same preset order.
  • This ensures that the sequence of the processed messages is consistent with the sequence in which the packet receiving thread received them, avoiding message disorder.
  • Meanwhile, the multiple processing threads process the received messages in parallel, which improves processing efficiency, shortens processing time, and thus improves the efficiency of message transmission.
  • The second embodiment of the present application relates to a message processing method.
  • It further includes: after the packet sending thread obtains the processed packets from the preset second message queues in the preset order, the memory space occupied by each processed packet is added to a preset queue; when the packet receiving thread next receives a message, it obtains from the preset queue a memory space in which to store it.
  • the second embodiment relates to a message processing method, including:
  • Step 301 The packet receiving thread receives the message, and sequentially distributes the message to multiple processing threads in a preset order.
  • Step 302 After receiving the message, multiple processing threads perform parallel processing on the received message.
  • Step 303 The packet sending thread sequentially obtains processed packets from multiple processing threads in a preset order.
  • Steps 301 to 303 correspond to steps 101 to 103 in the first embodiment respectively, and are not repeated here.
  • Step 304 Add the address of the memory space occupied by the processed message to the preset queue.
  • Step 305 After receiving the message again, the packet receiving thread obtains the address of the memory space for storing the message in the preset queue.
  • Specifically, when the packet receiving thread receives a message, it allocates a memory space to store it; this memory space enters the first message queue together with the stored message and passes into the processing thread. After processing, the same memory space holds the processed message and enters the corresponding second message queue, until the packet sending thread obtains and forwards the message. The memory space is then idle and can provide storage for other messages, so after the processed message is forwarded, the memory space to be reclaimed is added to the preset queue.
  • The preset queue is shown in Fig. 2 as the recovery queue, established between the packet sending thread and the packet receiving thread. A data structure composed of the address and size of each memory space to be reclaimed is stored in the preset queue; this is how memory space is added to the queue. When the packet receiving thread next receives a message, it first obtains such a data structure from the preset queue, uses the recorded address and size to locate the memory space, and stores the received message there. If the preset queue holds no memory space, the thread applies for a new memory space to store the message. This reduces the memory operations of the packet receiving thread and improves the efficiency with which it receives messages.
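The recovery queue behaves like a free list of buffers. The sketch below uses `queue.Queue` and `bytearray` as stand-ins for the lock-free recovery queue and the raw memory spaces; the names and the 2048-byte size are illustrative.

```python
import queue

recycle_q = queue.Queue()   # stand-in for the recovery queue of Fig. 2

def get_buffer(size=2048):
    """Reuse a recycled buffer when one is available, else allocate a new one."""
    try:
        return recycle_q.get_nowait()
    except queue.Empty:
        return bytearray(size)      # recovery queue empty: apply for new memory

buf = get_buffer()          # first call allocates, since nothing was recycled yet
recycle_q.put(buf)          # the sending side returns the buffer after forwarding
reused = get_buffer()       # the receiving side gets the same buffer back
# reused is buf: the second receive needed no new allocation
```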
  • The packet receiving thread, the multiple processing threads, and the packet sending thread described above can be bound as one group of working threads, and multiple such groups can then provide service externally.
  • The traffic of a given user is received entirely by one working thread group; the traffic of one user is never received by multiple groups.
  • As shown in Fig. 4, working thread group 0 processes the traffic of user A and user B, while working thread group 1 receives the traffic of user C and user D and processes and forwards their messages. This improves the parallel processing capability for multi-user virtual private network (VPN) traffic and the system's processing bandwidth. Meanwhile, the traffic of user A and user B is received only by working thread group 0 and never by group 1, ensuring that user traffic is not put out of order across multiple groups of working threads.
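One common way to realize such a binding is to hash the user (or channel) identifier to a group index, so the mapping is deterministic and every packet of the same user lands in the same group. The patent does not prescribe hashing; this is an illustrative sketch, and `NUM_GROUPS` and the function name are ours.

```python
import hashlib

NUM_GROUPS = 2   # illustrative; the patent does not fix a group count

def group_for(user_id: str) -> int:
    """Map a user channel to one working thread group by stable hashing."""
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    return digest[0] % NUM_GROUPS

# Deterministic mapping: the same user always reaches the same group,
# so one user's traffic is never split (or reordered) across groups.
g = group_for("userA")
```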
  • The first message queues, the second message queues, and the preset queue (recovery queue) described above can all be lock-free queues. Passing messages through independent lock-free queues further improves the message delivery efficiency within a working thread group.
  • In this embodiment, the memory space occupied by a processed packet is added to the preset queue after forwarding, and when the packet receiving thread next receives a message it obtains memory space for storing it from the preset queue, which reduces the memory operations of the packet receiving thread and improves its receiving efficiency.
  • The third embodiment of the present application relates to a message processing unit, as shown in Fig. 5, including a receiving module 51, a distribution module 52, multiple processing modules 53, and an acquisition module 54. The receiving module 51 is used to receive messages; the distribution module 52 is used to sequentially distribute messages to the multiple processing modules 53 in a preset order; the multiple processing modules 53 are used to process the received messages in parallel; and the acquisition module 54 is used to sequentially obtain the processed messages from the multiple processing modules 53 in the preset order.
  • this embodiment is a system embodiment corresponding to the first embodiment, and this embodiment can be implemented in cooperation with the first embodiment.
  • the related technical details mentioned in the first embodiment are still valid in this embodiment, and in order to reduce repetition, they will not be repeated here.
  • the related technical details mentioned in this embodiment can also be applied in the first embodiment.
  • Furthermore, the distribution module 52 is configured to sequentially distribute messages, in a preset order, to first storage modules respectively corresponding to the multiple processing modules; the multiple processing modules 53 are used to obtain messages from their corresponding first storage modules, process them in parallel, and store the processed messages in their corresponding second storage modules; and the acquisition module 54 is used to sequentially obtain the processed messages from the second storage modules corresponding to the processing modules in the preset order.
  • The distribution module 52 is specifically configured to distribute a message to a first storage module, following the preset order, only after determining that the number of messages in that module has not reached its storage maximum.
  • the third storage module is used to store the memory space occupied by the processed message after the processed message is forwarded, and provide for the distribution module 52 when an application request is received
  • the memory space is used to store messages.
  • The acquisition module 54 is further configured to determine whether a processed message is an abnormal message; when the processed message is not an abnormal message, the processed message is forwarded; when the processed message is an abnormal message, it is discarded.
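As a rough sketch of this forward-or-discard step (the dictionary message representation, the `is_abnormal` flag, and the callback interface are assumptions for illustration; the patent does not specify them):

```python
# Hypothetical sketch of the acquisition module's abnormal-message check:
# normal messages are forwarded, abnormal ones (e.g. not properly
# decrypted/decapsulated) are discarded.

def drain(processed_messages, forward, discard):
    """Forward normal messages and discard abnormal ones."""
    for msg in processed_messages:
        if msg.get("is_abnormal"):
            discard(msg)      # abnormal: drop instead of forwarding
        else:
            forward(msg)      # normal: pass on, e.g. to the protocol stack

forwarded, discarded = [], []
drain(
    [{"id": 1, "is_abnormal": False}, {"id": 2, "is_abnormal": True}],
    forwarded.append,
    discarded.append,
)
```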
  • The modules involved in this embodiment are all logical modules. In practical applications, a logical unit may be a physical unit, a part of a physical unit, or a combination of multiple physical units. In addition, this embodiment does not introduce units that are not closely related to solving the technical problem proposed by the present application, but this does not mean that no other units exist in this embodiment.
  • the fourth embodiment of the present application relates to a virtual private network server, which includes a plurality of the foregoing message processing units.
  • The fifth embodiment of the present application relates to a virtual private network server, as shown in FIG. 6, including at least one processor 601 and a memory 602 communicatively connected to the at least one processor 601; the memory 602 stores instructions executable by the at least one processor 601, and the instructions are executed by the at least one processor 601 to enable the at least one processor 601 to perform the foregoing message processing method.
  • The memory 602 and the processor 601 are connected by a bus.
  • The bus may include any number of interconnected buses and bridges, which connect the various circuits of the one or more processors 601 and the memory 602 together.
  • the bus can also connect various other circuits such as peripheral devices, voltage regulators, power management circuits, etc., which are all well-known in the art, and therefore, no further description will be given herein.
  • the bus interface provides an interface between the bus and the transceiver.
  • the transceiver may be one element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other devices on the transmission medium.
  • Data processed by the processor 601 is transmitted over the wireless medium through the antenna; furthermore, the antenna also receives data and passes the data to the processor 601.
  • the processor 601 is responsible for managing the bus and general processing, and can also provide various functions, including timing, peripheral interfaces, voltage regulation, power management, and other control functions.
  • the memory 602 may be used to store data used by the processor 601 when performing operations.
  • The sixth embodiment of the present application relates to a computer-readable storage medium that stores a computer program. When the computer program is executed by a processor, the foregoing method embodiments are implemented.
  • Those skilled in the art can understand that all or some of the steps of the methods in the above embodiments can be completed by a program instructing the relevant hardware. The program is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or some of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the present application relate to the field of network communication technologies, and disclose a message processing method, a processing unit, and a virtual private network server. The present application is applied to a virtual private network server, where the server includes a packet-receiving thread, a plurality of processing threads, and a packet-sending thread. The method includes: the packet-receiving thread receives messages and distributes the messages to the plurality of processing threads sequentially in a preset order (101); after receiving the messages, the plurality of processing threads process the received messages in parallel (102); the packet-sending thread obtains the processed messages from the plurality of processing threads sequentially in the preset order (103), thereby improving message transmission efficiency while guaranteeing message transmission order.

Description

Message processing method, processing unit and virtual private network server
Cross-reference
This application claims priority to Chinese Patent Application No. 201911315330.9, filed on December 19, 2019 and entitled "Message processing method, processing unit and virtual private network server", which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present application relate to the field of network communication technologies, and in particular to a message processing method, a processing unit, and a virtual private network server.
Background
With the rapid development of networks, more and more government and enterprise organizations have begun to handle work affairs over public networks. To ensure the security of an organization's internal data, virtual private network (VPN) technology has become the way for user devices to access internal data from an external network. VPN technology establishes a private network on top of a public network; when a user needs to access an organization's internal data, the user sends a request to the VPN server of the private network established on the public network to achieve remote access.
When receiving a user request, the VPN server needs to process the request packets sent by the user before forwarding them to the organization's internal network; likewise, internal data obtained from the organization's internal network must be processed by the VPN server before being returned to the user terminal. When there are many request packets sent by users or many packets of internal data returned by the organization, the VPN server has to queue the multiple received packets, and a processing thread processes each packet in turn until all packets have been processed, after which the processed packets are forwarded, thereby processing and transmitting the received packets.
The inventors found at least the following problems in the related art: since all other packets are in a waiting state while each VPN packet is being forwarded, when the number of packets to be processed is large, processing them takes a long time, resulting in high latency during data transmission; users cannot obtain the required data in time, which degrades user experience.
Summary
The purpose of embodiments of the present application is to provide a message processing method, a processing unit, and a virtual private network server that improve message transmission efficiency while guaranteeing message transmission order.
To solve the above technical problem, embodiments of the present application provide a message processing method applied to a virtual private network server, where the server includes a packet-receiving thread, a plurality of processing threads, and a packet-sending thread. The method includes: the packet-receiving thread receives messages and distributes the messages to the plurality of processing threads sequentially in a preset order; after receiving the messages, the plurality of processing threads process the received messages in parallel; the packet-sending thread obtains the processed messages from the plurality of processing threads sequentially in the preset order.
Embodiments of the present application also provide a message processing unit, including: a receiving module, a distribution module, a plurality of processing modules, and an acquisition module. The receiving module is configured to receive messages; the distribution module is configured to distribute messages to the plurality of processing modules sequentially in a preset order; the plurality of processing modules are configured to process the received messages in parallel after receiving them; the acquisition module is configured to obtain the processed messages from the plurality of processing modules sequentially in the preset order.
Embodiments of the present application also provide a virtual private network server, including a plurality of the above message processing units.
Embodiments of the present application also provide a virtual private network server, including: at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the above message processing method.
Embodiments of the present application also provide a storage medium storing a computer program that, when executed by a processor, implements the above message processing method.
Compared with the prior art, in the embodiments of the present application the virtual private network server includes a packet-receiving thread, a plurality of processing threads, and a packet-sending thread. The packet-receiving thread in the server receives messages and distributes them to the plurality of processing threads sequentially in a preset order; after the processing threads process the received messages, the packet-sending thread obtains the processed messages from the plurality of processing threads sequentially in the same preset order, ensuring that the order of the processed messages is consistent with the order in which the packet-receiving thread received them and avoiding out-of-order messages. In addition, the plurality of processing threads can process received messages in parallel, which improves message processing efficiency, shortens message processing time, and improves message transmission efficiency.
In addition, distributing the messages to the plurality of processing threads in a preset order includes: the packet-receiving thread distributes messages, sequentially in the preset order, to preset first message queues respectively corresponding to the plurality of processing threads. Processing the received messages in parallel after receiving them includes: each of the plurality of processing threads obtains messages from its corresponding preset first message queue and processes them in parallel, storing the processed messages in its corresponding preset second message queue. Obtaining the processed messages from the plurality of processing threads sequentially in the preset order includes: the packet-sending thread obtains the processed messages, sequentially in the preset order, from the preset second message queues respectively corresponding to the plurality of processing threads. By establishing a first message queue between the packet-receiving thread and each processing thread and using it to hold the messages distributed to that processing thread, the efficiency with which the packet-receiving thread distributes messages to the processing threads is improved, and because the messages in each first message queue are ordered by distribution order, no out-of-order messages result. Similarly, a second message queue established between each processing thread and the packet-sending thread temporarily stores the processed messages, preventing processed messages from occupying the processing thread and improving the processing thread's efficiency.
In addition, before the packet-receiving thread distributes a message, sequentially in the preset order, to the preset first message queue corresponding to a processing thread, the method further includes: determining that the number of messages in the target preset first message queue has not reached the maximum number the queue can store. When the number of messages in a preset first message queue reaches the maximum, the message to be distributed may be discarded, and newly received messages continue to be distributed to this same preset first message queue until a distributed message can be accepted by it; alternatively, when the maximum is reached, the packet-receiving thread may repeatedly send enqueue requests to the target preset first message queue until the message to be distributed can be accepted by it.
In addition, after the packet-sending thread obtains the processed messages, sequentially in the preset order, from the preset second message queues respectively corresponding to the plurality of processing threads, the method further includes: adding the address of the memory space occupied by the processed messages to a preset queue; after the packet-receiving thread receives a message again, it obtains from the preset queue the address of memory space for storing the message. In this way, when receiving messages, the packet-receiving thread preferentially obtains memory space from the preset queue to hold the received messages, reducing the operations of allocating memory space every time a message is received and improving the efficiency of the packet-receiving thread.
In addition, the preset first message queues and the preset second message queues are all lock-free queues. Compared with locked queues, lock-free queues need no locking and unlocking operations, which improves message transfer efficiency.
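As a rough illustration of why these queues can be lock-free: each queue described above links exactly one producer (for example, the packet-receiving thread) with one consumer (a processing thread), so head and tail indices are each advanced by only one side. The sketch below shows that index discipline for a bounded single-producer/single-consumer ring buffer; a real lock-free implementation additionally relies on atomic loads/stores and memory barriers, which Python cannot express, so this is a structural sketch only.

```python
# Single-producer/single-consumer ring buffer sketch: `tail` is written
# only by the producer and `head` only by the consumer, so no lock is
# needed. One slot is kept empty to distinguish "full" from "empty".

class SpscRing:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0   # advanced only by the consumer
        self.tail = 0   # advanced only by the producer

    def push(self, item):
        nxt = (self.tail + 1) % self.capacity
        if nxt == self.head:          # full: caller may drop or retry
            return False
        self.buf[self.tail] = item
        self.tail = nxt
        return True

    def pop(self):
        if self.head == self.tail:    # empty
            return None
        item = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity
        return item

ring = SpscRing(capacity=3)           # holds at most 2 items
ok = [ring.push(x) for x in ("m1", "m2", "m3")]   # third push fails: full
first = ring.pop()                                 # pops in FIFO order
```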
In addition, after the packet-sending thread obtains the processed messages from the plurality of processing threads sequentially in the preset order, the method further includes: determining whether a processed message is an abnormal message; when the processed message is not an abnormal message, forwarding the processed message; when the processed message is an abnormal message, discarding the processed message. This removes abnormally processed messages and avoids forwarding unprocessed messages.
In addition, the packet-receiving thread, the plurality of processing threads, and the packet-sending thread are bound as one group of worker threads, and the worker threads are bound to user channels. Before the packet-receiving thread receives a message, the method further includes: determining the user channel corresponding to the message, and determining, according to the determined user channel, the worker threads bound to that user channel. "The packet-receiving thread receives the message" is replaced with: the packet-receiving thread in the group of worker threads bound to the user channel corresponding to the message receives the message. This ensures that a single user's traffic is processed by only a single group of worker threads, so that user traffic never becomes out of order across multiple groups of worker threads.
Brief Description of the Drawings
One or more embodiments are exemplarily described with reference to the corresponding figures; these exemplary descriptions do not limit the embodiments. Elements with the same reference numerals in the figures denote similar elements, and unless otherwise stated, the figures are not drawn to scale.
FIG. 1 is a flowchart of a message processing method according to the first embodiment of the present application;
FIG. 2 is a schematic diagram of message transmission among the packet-receiving thread, the multiple processing threads, and the packet-sending thread according to the first embodiment of the present application;
FIG. 3 is a flowchart of a message processing method according to the second embodiment of the present application;
FIG. 4 is a schematic diagram of user connections bound to worker thread groups according to the second embodiment of the present application;
FIG. 5 is a schematic structural diagram of a message processing unit according to the third embodiment of the present application;
FIG. 6 is a schematic structural diagram of a virtual private network server according to the fourth embodiment of the present application.
Detailed Description of the Embodiments
To make the purposes, technical solutions, and advantages of the embodiments of the present application clearer, the embodiments of the present application are described in detail below with reference to the accompanying drawings. However, those of ordinary skill in the art can understand that many technical details are set forth in the embodiments of the present application so that the reader can better understand the present application; the technical solutions claimed in the present application can nevertheless be implemented without these technical details and with various changes and modifications based on the following embodiments.
The following division into embodiments is for convenience of description and should not constitute any limitation on the specific implementation of the present application; the embodiments can be combined with and refer to each other on the premise of no contradiction.
The first embodiment of the present application relates to a message processing method applied to a virtual private network server, where the server includes a packet-receiving thread, a plurality of processing threads, and a packet-sending thread. The method includes: the packet-receiving thread receives messages and distributes the messages to the plurality of processing threads sequentially in a preset order; after receiving the messages, the plurality of processing threads process the received messages in parallel; the packet-sending thread obtains the processed messages from the plurality of processing threads sequentially in the preset order, thereby improving message transmission efficiency while guaranteeing message transmission order. The implementation details of the message processing method of this embodiment are specifically described below; the following content consists of implementation details provided only for ease of understanding and is not necessary for implementing this solution.
The specific flow is shown in FIG. 1. The first embodiment relates to a message processing method, including:
Step 101: the packet-receiving thread receives messages and distributes the messages to the plurality of processing threads sequentially in a preset order.
The packet-receiving thread receives messages forwarded by the kernel-mode protocol stack; the received messages may have been sent from the internal network or from the external network, and no restriction is placed on the message direction here. After receiving messages, the packet-receiving thread distributes them to the plurality of processing threads sequentially in a preset order. For example, if the preset order is processing thread 1, processing thread 2, then after receiving messages the packet-receiving thread distributes the first message to processing thread 1, the second message to processing thread 2, the third message again to processing thread 1, the fourth message again to processing thread 2, and so on. The above description uses two processing threads; in practical applications the number of processing threads is set as needed, and messages are likewise distributed in the preset order, so further examples are omitted to avoid repetition.
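The round-robin distribution just described can be sketched as follows (the thread numbering and message names are illustrative only):

```python
# Preset-order (round-robin) distribution: each message, in arrival
# order, is assigned to processing threads 1..n cyclically.

import itertools

def distribute(messages, n_threads):
    """Assign each message, in arrival order, to threads 1..n cyclically."""
    order = itertools.cycle(range(1, n_threads + 1))
    return [(next(order), msg) for msg in messages]

assignments = distribute(["m1", "m2", "m3", "m4"], 2)
# m1 and m3 go to thread 1; m2 and m4 go to thread 2
```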
In addition, first message queues may be established between the packet-receiving thread and the processing threads. As shown in FIG. 2, queue 0 is established between the packet-receiving thread and processing thread 1, and queue 1 between the packet-receiving thread and processing thread 2; queue 0 and queue 1 temporarily store the messages distributed by the packet-receiving thread. After receiving messages, the packet-receiving thread distributes them, in the preset order, to the preset first message queues respectively corresponding to the plurality of processing threads. Taking the first message queues established in FIG. 2 as an example, after receiving messages the packet-receiving thread distributes the first message to queue 0 corresponding to processing thread 1, the second message to queue 1 corresponding to processing thread 2, the third message again to queue 0 corresponding to processing thread 1, the fourth message to queue 1 corresponding to processing thread 2, and so on. The above uses two processing threads; in practical applications the number of processing threads is set as needed, the number of first message queues established equals the number of processing threads, and neither the number of processing threads nor the number of first message queues is restricted here.
In addition, before the packet-receiving thread distributes a message, in the preset order, to the preset first message queue corresponding to a processing thread, it determines whether the number of messages in the target preset first message queue has reached the maximum number the queue can store. For example, suppose queue 0 established between the packet-receiving thread and processing thread 1 stores at most 6 messages; before distributing a message to queue 0 corresponding to processing thread 1, the packet-receiving thread checks, based on the current number of messages in queue 0, whether the limit has been reached. If the maximum of 6 has been reached, queue 0 cannot store new messages. If the number of messages in the target preset first message queue has not reached the limit, a message can be distributed to that first message queue; if it has reached the limit, enqueuing the message to be distributed fails, and the packet-receiving thread may discard the message to be distributed and receive a new message for distribution, with the target queue remaining the preset first message queue whose enqueue failed, thereby ensuring that the packet-receiving thread distributes messages to the processing threads in the preset order. The following example illustrates discarding and re-receiving: suppose the packet-receiving thread intends to distribute the fifth message to queue 0 corresponding to the first processing thread and detects that the number of messages in queue 0 has reached the maximum. The packet-receiving thread then discards the fifth message and continues by receiving the sixth message, which it likewise distributes to queue 0; if the number of messages in queue 0 is still at the maximum, the sixth message is also discarded and the seventh is received, and so on until the number of messages in queue 0 falls below the maximum and the currently distributed message is enqueued to queue 0. After a message distributed to queue 0 is successfully enqueued, the next message is distributed to queue 1 corresponding to processing thread 2. Alternatively, upon detecting that the target preset first message queue is full, the packet-receiving thread may choose neither to discard nor to distribute the message, waiting until the number of messages in the target queue falls below the maximum before distributing the message to it; this likewise ensures that the messages the packet-receiving thread distributes to the processing threads are in order.
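A minimal sketch of this bounded-queue admission check (the class name, the capacity of 2, and the try/drop interface are assumptions for illustration):

```python
# Bounded first-message-queue check: a message is enqueued only if the
# queue has not reached its maximum size; otherwise enqueuing fails and
# the caller may drop the message or retry against the same queue.

from collections import deque

class BoundedQueue:
    def __init__(self, max_size):
        self.max_size = max_size
        self.items = deque()

    def try_enqueue(self, msg):
        """Return True on success, False when the queue is full."""
        if len(self.items) >= self.max_size:
            return False          # full: caller drops or retries
        self.items.append(msg)
        return True

q = BoundedQueue(max_size=2)
results = [q.try_enqueue(m) for m in ["m1", "m2", "m3"]]  # m3 is rejected
```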
In practical applications, the packet-receiving thread may also, while receiving a message, mark a queue id on the received message to indicate the queue to which the message should be distributed, so that the distribution order of messages can be verified later, preventing out-of-order messages caused by errors in the packet-receiving thread's distribution process.
Step 102: after receiving the messages, the plurality of processing threads process the received messages in parallel.
Specifically, when a processing thread receives a message transmitted from the external network to the internal network, it decrypts and decapsulates the message; when a processing thread receives a message transmitted from the internal network to the external network, it encrypts and encapsulates the message.
Step 103: the packet-sending thread obtains the processed messages from the plurality of processing threads sequentially in the preset order.
Specifically, the packet-sending thread obtains the messages processed by the processing threads in the same preset order used by the packet-receiving thread for distribution. For example, if the preset distribution order is processing thread 1, processing thread 2, then when obtaining processed messages the packet-sending thread obtains a message from processing thread 1 the first time, from processing thread 2 the second time, from processing thread 1 the third time, from processing thread 2 the fourth time, and so on. After obtaining the messages of a complete data packet, it forwards the obtained processed messages to the kernel-mode protocol stack, which forwards them onward. This ensures that the order of the processed messages is the same as the order in which the packet-receiving thread distributed them, keeping the messages in order.
In addition, second message queues may also be established between the processing threads and the packet-sending thread. As shown in FIG. 2, queue 0 is established between processing thread 1 and the packet-sending thread, and queue 1 between processing thread 2 and the packet-sending thread; the established queue 0 and queue 1 temporarily store the processed messages. After processing thread 1 processes a message, it enqueues the processed message to queue 0; likewise, after processing thread 2 processes a message, it enqueues the processed message to queue 1. When placing a processed message into a queue, if the number of processed messages in the second message queue has reached the maximum, the processing thread must wait until the number of processed messages in the second message queue falls below the maximum before enqueuing the processed message to it. When obtaining the processed messages from the plurality of processing threads sequentially in the preset order, the packet-sending thread specifically obtains them from the second message queues corresponding to the processing threads. Taking the second message queues established in FIG. 2 as an example, the packet-sending thread obtains a processed message from queue 0 corresponding to processing thread 1 the first time, from queue 1 corresponding to processing thread 2 the second time, from queue 0 corresponding to processing thread 1 the third time, from queue 1 corresponding to processing thread 2 the fourth time, and so on, until the messages of a complete data packet have been obtained, after which it forwards the obtained processed messages to the kernel-mode protocol stack.
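The whole scheme can be sketched end to end with ordinary blocking queues standing in for the message queues (the worker count, the upper-casing "processing" step, and the sentinel shutdown are illustrative stand-ins, not part of the patent): the receive side distributes round-robin into per-worker input queues, the workers process in parallel, and the send side drains the output queues in the same round-robin order, so the output order matches the input order.

```python
# Order-preserving parallel pipeline: round-robin in, parallel
# processing, round-robin out in the same order.

import queue
import threading

N_WORKERS = 2
in_queues = [queue.Queue() for _ in range(N_WORKERS)]
out_queues = [queue.Queue() for _ in range(N_WORKERS)]

def worker(i):
    # stand-in for a processing thread: "processing" is just upper-casing
    while True:
        msg = in_queues[i].get()
        if msg is None:                  # sentinel: no more messages
            break
        out_queues[i].put(msg.upper())

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_WORKERS)]
for t in threads:
    t.start()

messages = ["a", "b", "c", "d", "e", "f"]
for i, msg in enumerate(messages):       # receive side: round-robin in
    in_queues[i % N_WORKERS].put(msg)
for iq in in_queues:
    iq.put(None)

# send side: drain the output queues in the same round-robin order;
# get() blocks until the worker has delivered, so global order is kept
sent = [out_queues[i % N_WORKERS].get() for i in range(len(messages))]
for t in threads:
    t.join()
```

Even though the two workers run concurrently, the blocking round-robin drain guarantees `sent` preserves the arrival order of `messages`.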
In addition, after the packet-sending thread obtains the processed messages from the plurality of processing threads sequentially in the preset order, it may also determine whether a processed message is an abnormal message; when the processed message is not an abnormal message, the processed message is forwarded; when the processed message is an abnormal message, the processed message is discarded. For a message transmitted from the external network to the internal network, an abnormal message may be one that has not been decrypted and decapsulated; for a message transmitted from the internal network to the external network, an abnormal message may be one that has not been encrypted and encapsulated. This removes abnormally processed messages and avoids forwarding unprocessed messages.
Compared with the prior art, in the embodiments of the present application the virtual private network server includes a packet-receiving thread, a plurality of processing threads, and a packet-sending thread. The packet-receiving thread in the server receives messages and distributes them to the plurality of processing threads sequentially in a preset order; after the processing threads process the received messages, the packet-sending thread obtains the processed messages from the plurality of processing threads sequentially in the same preset order, ensuring that the order of the processed messages is consistent with the order in which the packet-receiving thread received them and avoiding out-of-order messages. In addition, the plurality of processing threads can process received messages in parallel, which improves message processing efficiency, shortens message processing time, and improves message transmission efficiency.
The second embodiment of the present application relates to a message processing method. The second embodiment further includes: after the packet-sending thread obtains the processed messages, sequentially in the preset order, from the preset second message queues respectively corresponding to the plurality of processing threads, the memory space occupied by the processed messages is added to a preset queue; after the packet-receiving thread receives a message again, the memory space for storing the message is obtained from the preset queue.
The specific flow is shown in FIG. 3. The second embodiment relates to a message processing method, including:
Step 301: the packet-receiving thread receives messages and distributes the messages to the plurality of processing threads sequentially in a preset order.
Step 302: after receiving the messages, the plurality of processing threads process the received messages in parallel.
Step 303: the packet-sending thread obtains the processed messages from the plurality of processing threads sequentially in the preset order.
Steps 301 to 303 correspond one-to-one to steps 101 to 103 of the first embodiment, respectively, and are not repeated here.
Step 304: the address of the memory space occupied by the processed messages is added to a preset queue.
Step 305: after the packet-receiving thread receives a message again, the address of the memory space for storing the message is obtained from the preset queue.
Specifically, when a message is received by the packet-receiving thread, the packet-receiving thread allocates a memory space for the received message to store it. This memory space, together with the stored message, is enqueued to the first message queue, and the message enters the processing thread to be processed; after processing, the same memory space is used for storage and is enqueued to the corresponding second message queue. Only after the processed message has been obtained and forwarded by the packet-sending thread does this memory space become idle and able to provide storage for other messages. After a processed message is forwarded by the packet-sending thread, the memory space to be recycled for that message is added to a preset queue. The preset queue is established as the recycle queue shown in FIG. 2, between the packet-sending thread and the packet-receiving thread; the preset queue stores a data structure composed of the address and size of the memory space to be recycled, thereby adding the memory space to the preset queue. After the packet-receiving thread receives a message again, it preferentially obtains a data structure from the preset queue, uses the address and size in the obtained data structure to reference the corresponding memory space, and stores the received message in that memory space. If no memory space exists in the preset queue, new memory space is allocated to store the message. This reduces the packet-receiving thread's memory allocation operations and improves the efficiency with which the packet-receiving thread receives messages.
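The recycle-queue mechanism can be sketched as a simple buffer pool (the class and method names are hypothetical, and Python bytearrays stand in for the address-and-size structures a real implementation would pass between threads):

```python
# Buffer recycling: after a processed message is forwarded, its buffer
# goes onto a free list (the "preset/recycle queue"); the receive side
# reuses a recycled buffer when one is available and allocates a fresh
# one only otherwise.

from collections import deque

class BufferPool:
    def __init__(self, buf_size):
        self.buf_size = buf_size
        self.free = deque()          # the recycle queue
        self.allocations = 0         # counts fresh allocations

    def acquire(self):
        if self.free:
            return self.free.popleft()   # reuse a recycled buffer
        self.allocations += 1            # fall back to a new allocation
        return bytearray(self.buf_size)

    def release(self, buf):
        self.free.append(buf)            # called after the message is sent

pool = BufferPool(buf_size=2048)
b1 = pool.acquire()      # fresh allocation
pool.release(b1)         # message forwarded, buffer recycled
b2 = pool.acquire()      # reuses b1: no second allocation needed
```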
In practical applications, the packet-receiving thread, the plurality of processing threads, and the packet-sending thread described above may be bound as one group of worker threads, and multiple groups of worker threads then provide services externally, where the traffic of a given user is received by only one group of worker threads; the traffic of the same user is never received by multiple groups of worker threads. As shown in FIG. 4, worker thread group 0 handles the traffic of user A and user B: if a message from user A or user B is received, the message is processed and forwarded by worker thread group 0. Similarly, worker thread group 1 can receive the traffic of user C and user D, processing and forwarding the messages of user C and user D through worker thread group 1. This improves the parallel processing capability for multi-user virtual private network (VPN) traffic and increases the *** processing bandwidth; moreover, the traffic of user A and user B is received only by worker thread group 0 and never by worker thread group 1, ensuring that user traffic never becomes out of order across multiple groups of worker threads. In addition, the first message queues, second message queues, and preset queue (recycle queue) described above may all be lock-free queues; passing messages through independent lock-free queues can further improve the message transfer efficiency within a worker thread group.
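The binding of user channels to worker thread groups can be sketched with a stable hash (the CRC32 choice and the string channel identifier are assumptions for illustration; the patent only requires that each user channel maps to exactly one worker thread group):

```python
# Deterministic channel-to-group binding: the same user channel always
# maps to the same worker thread group, so one user's traffic is handled
# by a single group and stays in order.

import zlib

def group_for_channel(channel_id, n_groups):
    """Deterministically map a user channel to one worker thread group."""
    return zlib.crc32(channel_id.encode()) % n_groups

# The same channel always maps to the same group:
g1 = group_for_channel("user-A", 2)
g2 = group_for_channel("user-A", 2)
```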
In this embodiment, after the packet-sending thread obtains the processed messages, sequentially in the preset order, from the preset second message queues respectively corresponding to the plurality of processing threads, the memory space occupied by the processed messages is added to a preset queue; after the packet-receiving thread receives a message again, it obtains from the preset queue the memory space for storing the message, thereby reducing the packet-receiving thread's memory allocation operations and improving the efficiency with which the packet-receiving thread receives messages.
The division of steps in the above methods is only for clarity of description. During implementation, steps may be combined into one step, or a step may be split and decomposed into multiple steps; as long as the same logical relationship is included, they are all within the protection scope of this patent. Adding insignificant modifications to an algorithm or flow, or introducing insignificant designs, without changing the core design of the algorithm and flow, is also within the protection scope of this patent.
The third embodiment of the present application relates to a message processing unit, as shown in FIG. 5, including: a receiving module 51, a distribution module 52, a plurality of processing modules 53, and an acquisition module 54. The receiving module 51 is configured to receive messages; the distribution module 52 is configured to distribute messages to the plurality of processing modules 53 sequentially in a preset order; the plurality of processing modules 53 are configured to process received messages in parallel after receiving them; the acquisition module 54 is configured to obtain the processed messages from the plurality of processing modules 53 sequentially in the preset order.
It is not difficult to find that this embodiment is a system embodiment corresponding to the first embodiment, and this embodiment can be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment are still valid in this embodiment and, to reduce repetition, are not repeated here. Correspondingly, the related technical details mentioned in this embodiment can also be applied in the first embodiment.
The unit further includes a plurality of first storage modules and a plurality of second storage modules. The distribution module 52 is configured to distribute messages, sequentially in the preset order, to the first storage modules respectively corresponding to the plurality of processing modules; the plurality of processing modules 53 are configured to obtain messages from their corresponding first storage modules, process them in parallel, and store the processed messages in the corresponding second storage modules; the acquisition module 54 is configured to obtain the processed messages, sequentially in the preset order, from the second storage modules respectively corresponding to the plurality of processing modules.
In addition, the distribution module 52 is specifically configured to distribute a message to the first storage modules respectively corresponding to the plurality of processing modules, in the preset order, only after determining that the number of messages in the target first storage module has not reached its maximum storage capacity.
In addition, the unit further includes a third storage module, which is configured to store, after a processed message has been forwarded, the memory space occupied by that message, and to provide the memory space to the distribution module 52 for storing messages when an allocation request is received.
In addition, the acquisition module 54 is further configured to determine whether a processed message is an abnormal message; when the processed message is not an abnormal message, the processed message is forwarded; when the processed message is an abnormal message, the processed message is discarded.
It is worth mentioning that the modules involved in this embodiment are all logical modules. In practical applications, a logical unit may be a physical unit, a part of a physical unit, or a combination of multiple physical units. In addition, to highlight the innovative part of the present application, this embodiment does not introduce units that are not closely related to solving the technical problem proposed by the present application, but this does not mean that no other units exist in this embodiment.
The fourth embodiment of the present application relates to a virtual private network server, including a plurality of the above message processing units.
The fifth embodiment of the present application relates to a virtual private network server, as shown in FIG. 6, including at least one processor 601 and a memory 602 communicatively connected to the at least one processor 601; the memory 602 stores instructions executable by the at least one processor 601, and the instructions are executed by the at least one processor 601 to enable the at least one processor 601 to perform the above message processing method.
The memory 602 and the processor 601 are connected by a bus. The bus may include any number of interconnected buses and bridges, which connect the various circuits of the one or more processors 601 and the memory 602 together. The bus may also connect various other circuits, such as peripheral devices, voltage regulators, and power management circuits, which are well known in the art and therefore are not further described herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be one element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other apparatuses over a transmission medium. Data processed by the processor is transmitted over the wireless medium through an antenna; furthermore, the antenna also receives data and passes the data to the processor 601.
The processor 601 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 602 may be used to store data used by the processor 601 when performing operations.
The sixth embodiment of the present application relates to a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the above method embodiments are implemented.
That is, those skilled in the art can understand that all or some of the steps of the methods in the above embodiments can be completed by a program instructing the relevant hardware. The program is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Those of ordinary skill in the art can understand that the above embodiments are specific embodiments for implementing the present application, and that in practical applications various changes in form and detail may be made to them without departing from the spirit and scope of the present application.

Claims (11)

  1. A message processing method, applied to a virtual private network server, wherein the server comprises a packet-receiving thread, a plurality of processing threads and a packet-sending thread, and the method comprises:
    receiving, by the packet-receiving thread, messages, and distributing the messages to the plurality of processing threads sequentially in a preset order;
    after receiving the messages, processing, by the plurality of processing threads, the received messages in parallel;
    obtaining, by the packet-sending thread, the processed messages from the plurality of processing threads sequentially in the preset order.
  2. The message processing method according to claim 1, wherein the distributing the messages to the plurality of processing threads sequentially in a preset order comprises:
    distributing, by the packet-receiving thread, the messages sequentially in the preset order to preset first message queues respectively corresponding to the plurality of processing threads;
    the processing, by the plurality of processing threads, of the received messages in parallel after receiving the messages comprises:
    obtaining, by each of the plurality of processing threads, messages from its corresponding preset first message queue, processing them in parallel, and storing the processed messages in the corresponding preset second message queues;
    the obtaining, by the packet-sending thread, of the processed messages from the plurality of processing threads sequentially in the preset order comprises:
    obtaining, by the packet-sending thread, the processed messages sequentially in the preset order from the preset second message queues respectively corresponding to the plurality of processing threads.
  3. The message processing method according to claim 2, wherein before the packet-receiving thread distributes the messages sequentially in the preset order to the preset first message queues respectively corresponding to the plurality of processing threads, the method further comprises:
    determining that the number of messages in the target preset first message queue has not reached the maximum number of messages the preset first message queue can store.
  4. The message processing method according to claim 2, wherein after the packet-sending thread obtains the processed messages sequentially in the preset order from the preset second message queues respectively corresponding to the plurality of processing threads, the method further comprises:
    adding the address of the memory space occupied by the processed messages to a preset queue;
    after the packet-receiving thread receives a message again, obtaining, from the preset queue, the address of the memory space for storing the message.
  5. The message processing method according to any one of claims 2 to 4, wherein the preset first message queues and the preset second message queues are all lock-free queues.
  6. The message processing method according to claim 1, wherein after the packet-sending thread obtains the processed messages from the plurality of processing threads sequentially in the preset order, the method further comprises:
    determining whether a processed message is an abnormal message;
    when the processed message is not an abnormal message, forwarding the processed message;
    when the processed message is an abnormal message, discarding the processed message.
  7. The message processing method according to claim 1, comprising:
    binding the packet-receiving thread, the plurality of processing threads and the packet-sending thread as a group of worker threads, and binding the worker threads to user channels;
    before the packet-receiving thread receives a message, the method further comprises:
    determining the user channel corresponding to the message, and determining, according to the determined user channel, the worker threads bound to the user channel;
    the receiving of the message by the packet-receiving thread is replaced with:
    receiving the message by the packet-receiving thread in the group of worker threads bound to the user channel corresponding to the message.
  8. A message processing unit, comprising: a receiving module, a distribution module, a plurality of processing modules, and an acquisition module;
    the receiving module is configured to receive messages;
    the distribution module is configured to distribute messages to the plurality of processing modules sequentially in a preset order;
    the plurality of processing modules are configured to process received messages in parallel after receiving them;
    the acquisition module is configured to obtain the processed messages from the plurality of processing modules sequentially in the preset order.
  9. A virtual private network server, comprising: a plurality of message processing units according to claim 8.
  10. A virtual private network server, comprising:
    at least one processor; and,
    a memory communicatively connected to the at least one processor; wherein,
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the message processing method according to any one of claims 1 to 7.
  11. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the message processing method according to any one of claims 1 to 7.
PCT/CN2020/074953 2019-12-19 2020-02-12 Message processing method, processing unit and virtual private network server WO2021120374A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20829297.9A EP3860062A4 (en) 2019-12-19 2020-02-12 DATA PROCESSING METHOD, PROCESSING UNIT AND VPN SERVER
US17/153,814 US20210306302A1 (en) 2019-12-19 2021-01-20 Datagram processing method, processing unit and vpn server

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911315330.9 2019-12-19
CN201911315330.9A CN113014528B (zh) 2019-12-19 2019-12-19 Message processing method, processing unit and virtual private network server

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/153,814 Continuation US20210306302A1 (en) 2019-12-19 2021-01-20 Datagram processing method, processing unit and vpn server

Publications (1)

Publication Number Publication Date
WO2021120374A1 true WO2021120374A1 (zh) 2021-06-24

Family

ID=74859164

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/074953 WO2021120374A1 (zh) 2019-12-19 2020-02-12 报文处理方法、处理单元及虚拟专用网络服务器

Country Status (4)

Country Link
US (1) US20210306302A1 (zh)
EP (1) EP3860062A4 (zh)
CN (1) CN113014528B (zh)
WO (1) WO2021120374A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113641614A (zh) * 2021-07-07 2021-11-12 北京智芯微电子科技有限公司 SPI-based single-channel multi-service parallel processing method and chip
CN114338830A (zh) * 2022-01-05 2022-04-12 腾讯科技(深圳)有限公司 Data transmission method and apparatus, computer-readable storage medium and computer device

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN113676421B (zh) * 2021-10-25 2022-01-28 之江实验室 PCIe-based multi-port network message transceiving method
CN114189462B (zh) * 2021-12-08 2024-01-23 北京天融信网络安全技术有限公司 Traffic collection method and apparatus, electronic device and storage medium
CN114448573B (zh) * 2022-03-02 2024-02-23 新华三半导体技术有限公司 Message processing method and apparatus
CN114900805A (zh) * 2022-05-07 2022-08-12 武汉星辰北斗科技有限公司 High-concurrency BeiDou-3 short message transceiving method, *** and apparatus

Citations (5)

Publication number Priority date Publication date Assignee Title
CN1842781A (zh) * 2003-07-25 2006-10-04 拉莎微电子有限公司 Advanced processor
US20090135844A1 (en) * 2007-11-27 2009-05-28 Shrijeet Mukherjee Transmit-side scaler and method for processing outgoing information packets using thread-based queues
CN103336684A (zh) * 2013-07-18 2013-10-02 上海寰创通信科技股份有限公司 AC for concurrently processing AP messages and processing method thereof
CN108647104A (zh) * 2018-05-15 2018-10-12 北京五八信息技术有限公司 Request processing method, server and computer-readable storage medium
US20190109791A1 (en) * 2014-12-22 2019-04-11 Telefonaktiebolaget Lm Ericsson (Publ) Adaptive load balancing in packet processing

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
JP3231571B2 (ja) * 1994-12-20 2001-11-26 日本電気株式会社 Ordered multithread execution method and execution apparatus therefor
US7415540B2 (en) * 2002-12-31 2008-08-19 Intel Corporation Scheduling processing threads
CN102075427A (zh) * 2011-01-18 2011-05-25 中兴通讯股份有限公司 Security-association-based IPSec message processing method and apparatus
CN102204183A (zh) * 2011-05-09 2011-09-28 华为技术有限公司 Message order-preserving processing method, order-preserving coprocessor and network device
CN102789394B (zh) * 2011-05-19 2014-12-24 阿里巴巴集团控股有限公司 Method, apparatus, node and server cluster for processing messages in parallel
CN102780625B (zh) * 2012-07-30 2014-12-17 成都卫士通信息产业股份有限公司 Method and apparatus for implementing IPSec VPN encryption and decryption processing
CN106357554A (zh) * 2015-07-13 2017-01-25 中兴通讯股份有限公司 Packet receiving method and apparatus for a network port of a processor inside a device
CN106899516B (zh) * 2017-02-28 2020-07-28 华为技术有限公司 Queue flushing method and related device
CN109688069A (zh) * 2018-12-29 2019-04-26 杭州迪普科技股份有限公司 Method, apparatus, device and storage medium for processing network traffic


Non-Patent Citations (1)

Title
See also references of EP3860062A4 *

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN113641614A (zh) * 2021-07-07 2021-11-12 北京智芯微电子科技有限公司 SPI-based single-channel multi-service parallel processing method and chip
CN114338830A (zh) * 2022-01-05 2022-04-12 腾讯科技(深圳)有限公司 Data transmission method and apparatus, computer-readable storage medium and computer device
CN114338830B (zh) * 2022-01-05 2024-02-27 腾讯科技(深圳)有限公司 Data transmission method and apparatus, computer-readable storage medium and computer device

Also Published As

Publication number Publication date
CN113014528A (zh) 2021-06-22
CN113014528B (zh) 2022-12-09
EP3860062A4 (en) 2021-10-20
US20210306302A1 (en) 2021-09-30
EP3860062A1 (en) 2021-08-04

Similar Documents

Publication Publication Date Title
WO2021120374A1 (zh) Message processing method, processing unit and virtual private network server
CN107046542B (zh) Method for implementing consensus verification in hardware at the network level
US10897712B2 (en) Cyber security management system, method, and apparatus
US10419531B2 (en) Method for setting gateway device identity, and management gateway device
US7930446B2 (en) Methods and apparatuses for wireless network communication wherein a universal serial bus request block (URB) is generated that will vary parameters that controls wireless transmission commands between devices
US9537786B2 (en) Method, device, and system for information processing based on distributed buses
US20140157365A1 (en) Enhanced serialization mechanism
US20030058878A1 (en) Method and apparatus for minimizing spinlocks and retaining packet order in systems utilizing multiple transmit queues
US8539089B2 (en) System and method for vertical perimeter protection
WO2014005442A1 (zh) Network service access method and ***
CN113507483B (zh) Instant messaging method and apparatus, server and storage medium
WO2016050109A1 (zh) Communication method, cloud management server and virtual switch
US11316916B2 (en) Packet processing method, related device, and computer storage medium
CN110213320B (zh) 通信连接的方法、装置、电子设备及计算机可读存储介质
CN104038505A (zh) IPSec anti-replay method and apparatus
CN106464596A (zh) OpenFlow communication method, ***, controller and service gateway
US20170295019A1 (en) Communication device and packet transmission/reception program
US20080118065A1 (en) Hardware acceleration for large volumes of channels
WO2015027931A1 (en) Method and system for realizing cross-domain remote command
US8559423B2 (en) Method and apparatus for packet processing
WO2019015487A1 (zh) Data retransmission processing method, RLC entity and MAC entity
WO2012094901A1 (zh) Method, apparatus and *** for sending/receiving extra-long short messages
CN113141390B (zh) Netconf channel management method and apparatus
US20210203604A1 (en) Load balancing method, device and system
CN109257227B (zh) Coupling management method, apparatus and *** in data transmission

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2020829297

Country of ref document: EP

Effective date: 20210111

NENP Non-entry into the national phase

Ref country code: DE