CN114422367B - Message processing method and device

Info

Publication number: CN114422367B (application CN202210312192.4A)
Authority: CN (China)
Prior art keywords: processing, flow table, message, processed, cloud host
Legal status: Active (granted)
Application number: CN202210312192.4A
Other languages: Chinese (zh)
Other versions: CN114422367A
Inventors: 吕怡龙, 陈子康, 祝顺民, 李星, 宗志刚
Current Assignee: Alibaba Cloud Computing Ltd
Original Assignee: Alibaba Cloud Computing Ltd
Priority date: 2022-03-28
Filing date: 2022-03-28
Application filed by Alibaba Cloud Computing Ltd
Publication of application CN114422367A; granted and published as CN114422367B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0803 Configuration setting
    • H04L41/0823 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H04L41/083 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability for increasing network speed
    • H04L49/00 Packet switching elements
    • H04L49/25 Routing or path finding in a switch fabric
    • H04L49/70 Virtual switches
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

One or more embodiments of the present specification provide a message processing method and apparatus applied to a DPU assembled on a physical machine. The DPU includes a control chip and a processing chip, and virtual DPUs corresponding one to one to the cloud hosts in the physical machine are deployed on the DPU. Each virtual DPU includes a control end deployed in the control chip and a processing flow table established in the processing chip and corresponding to that control end. The control end is configured to receive configuration information issued by a management user of the corresponding cloud host and to issue flow table entries to the processing flow table of that cloud host according to the configuration information. The method includes: the processing chip receives a message to be processed for a target cloud host; the processing chip searches a target processing flow table corresponding to the target cloud host for a flow table entry matching the message to be processed; and the processing chip processes the message to be processed according to the found flow table entry.

Description

Message processing method and device
Technical Field
One or more embodiments of the present disclosure relate to the field of intelligent network cards, and in particular, to a method and an apparatus for processing a message.
Background
The development of intelligent network card technology allows a cloud service provider to improve the efficiency of the overall computing system and reduce its computing cost by assembling intelligent network cards on physical machines. For a user of the cloud service, however, that is, a user of a cloud host, applications on the cloud host cannot directly enjoy the acceleration brought by the intelligent network card: the cloud host has no accessory with intelligent network card functionality through which its applications could obtain the corresponding hardware acceleration. In addition, if the acceleration function of the intelligent network card on the physical machine were used directly in a cloud scenario, the cloud hosts could not be isolated from one another, which reduces their security.
Disclosure of Invention
In view of this, one or more embodiments of the present disclosure provide a method and a device for processing a message.
To achieve the above object, one or more embodiments of the present disclosure provide the following technical solutions:
According to a first aspect of one or more embodiments of the present specification, a message processing method is provided, applied to a DPU assembled on a physical machine, where the DPU includes a control chip and a processing chip, and virtual DPUs corresponding one to one to the cloud hosts in the physical machine are deployed on the DPU. Each virtual DPU includes: a control end deployed in the control chip and corresponding to one of the cloud hosts, and a processing flow table established in the processing chip and corresponding to that control end. The control end is configured to receive configuration information issued by a management user of the corresponding cloud host, and to issue flow table entries to the processing flow table of that cloud host according to the configuration information. The method includes:
the processing chip receives a message to be processed aiming at a target cloud host;
the processing chip searches whether a flow table entry matched with the message to be processed exists in a target processing flow table corresponding to the target cloud host or not;
and the processing chip processes the message to be processed according to the searched flow table entry.
According to a second aspect of one or more embodiments of the present specification, a message processing method is provided, applied to a client corresponding to a management user of a target cloud host, where the physical machine on which the target cloud host runs is equipped with a DPU, the DPU includes a control chip and a processing chip, and virtual DPUs corresponding one to one to the cloud hosts in the physical machine are deployed on the DPU. Each virtual DPU includes: control ends deployed in the control chip and corresponding one to one to the cloud hosts, and processing flow tables established in the processing chip and respectively corresponding to the control ends. The method includes:
responding to a configuration instruction sent by the management user, and generating configuration information corresponding to the configuration instruction;
and issuing the configuration information to a control end corresponding to the target cloud host, so that the control end issues flow table entries to a processing flow table corresponding to the target cloud host according to the configuration information, and the processing chip processes the message to be processed for the target cloud host according to the flow table entries.
According to a third aspect of one or more embodiments of the present description, there is provided a computer readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the steps of the method according to the first aspect.
According to a fourth aspect of one or more embodiments of the present description, there is provided an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method according to the first aspect when executing the program.
In the technical solution provided in this specification, by deploying virtual DPUs corresponding one to one to the cloud hosts, dedicated resources belonging to a target cloud host are allocated within the DPU assembled on the physical machine, so as to provide an acceleration function for message processing for the target cloud host. For different cloud hosts running on the same physical machine, allocating dedicated resources in this way isolates them from one another and thereby improves the security of the cloud hosts. In addition, the management user of a cloud host can manage and configure the dedicated resources responsible for accelerating that cloud host's message processing, which improves configuration efficiency.
Drawings
Fig. 1 is a schematic diagram of an architecture for implementing packet processing by a virtualized DPU according to an exemplary embodiment of the present specification;
fig. 2 is a schematic flowchart of a message processing method according to an exemplary embodiment of the present disclosure;
Fig. 3 is a schematic structural diagram of a DPU according to an exemplary embodiment of the present disclosure;
fig. 4 is a schematic flowchart of another message processing method according to an exemplary embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a DPU applied in an NFV scenario according to an exemplary embodiment of the present specification;
fig. 6 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure;
fig. 7 is a schematic diagram of a message processing apparatus according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of one or more embodiments of the specification, as detailed in the claims which follow.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described herein. In some other embodiments, the method may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
In order to reduce the occupation of the cloud host's computing resources, free the computing capacity of the CPU for application-layer tasks, and improve the computing efficiency of the cloud host, the present specification provides a message processing method applied to a DPU assembled on a physical machine. Hardware and software resources of the DPU are virtualized into vDPUs (virtual DPUs) that provide a dedicated acceleration function for the cloud hosts. Each vDPU includes at least a control end deployed in the DPU and corresponding one to one to a cloud host, and a processing flow table corresponding to that control end, so that dedicated data processing resources are allocated to the cloud host. Data-layer tasks related to the cloud host that would otherwise be completed by the CPU on the physical machine are taken over by the vDPU, the computing capacity of the CPU is freed for application-layer tasks, and the computing efficiency of the cloud host is improved.
Fig. 1 is a schematic diagram of an architecture for implementing message processing by a virtualized DPU shown in this specification. As shown in fig. 1, the architecture may include cloud host 11, cloud host 12, a DPU with vDPU 13 and vDPU 14 deployed on it, network 15, and electronic devices 16 and 17.
The cloud host 11 or the cloud host 12 is a virtual server carried by an independent host or a host cluster in which the DPU is deployed, and the DPU carries vDPU 13 and vDPU 14 corresponding one to one to the cloud hosts. During operation, a message processing apparatus may be configured on the DPU; the apparatus may be implemented in software and/or hardware to process messages for the cloud host corresponding to it.
The electronic device 16 or 17 is a type of electronic device that a user can use. The user may also use electronic devices such as a mobile phone, a tablet device, a notebook computer, a PDA (Personal Digital Assistant), or a wearable device (such as smart glasses or a smart watch), which is not limited by one or more embodiments of the present disclosure. During operation, the electronic device 16 may allow its user to log in to the corresponding cloud host 11 to issue configuration information to the vDPU 13, and the electronic device 17 may similarly allow its user to log in to the corresponding cloud host 12 to issue configuration information to the vDPU 14.
The network 15 used for interaction between the electronic device 16 and the cloud host 11 and between the electronic device 17 and the cloud host 12 may include various types of wired or wireless networks. In one embodiment, the network 15 may include the public switched telephone network (PSTN) and the Internet.
Next, a message processing method in this specification will be described with reference to fig. 2. Fig. 2 is a schematic flowchart of a message processing method according to an exemplary embodiment.
The method in fig. 2 is mainly applied to a DPU (Data Processing Unit) assembled on a physical machine, i.e. an intelligent network card. The DPU is a data-centric special-purpose processor that follows a software-defined technical route to support virtualization of infrastructure-layer resources and infrastructure-layer functions such as storage, security, and quality-of-service management. Its most direct role is to serve as an offload engine for the CPU, taking over infrastructure-layer services such as network virtualization and hardware resource pooling, and freeing the computing power of the CPU for upper-layer applications. An existing DPU typically includes a software part and a hardware part: the software part is a control chip, usually implemented with a standard CPU architecture, and is mainly used to offload control-layer tasks as well as flexible and complex data-layer tasks that the hardware part cannot handle; the hardware part is a processing chip, usually implemented as an FPGA (Field Programmable Gate Array) or ASIC (Application Specific Integrated Circuit), and is used to offload basic data-layer tasks. The control chip and the processing chip cooperate to free the computing power of the CPU, improving computing efficiency and reducing operating cost. To improve the message processing efficiency of a cloud host supported by a physical machine on which a DPU is deployed, and thereby improve the computing efficiency of the cloud host, the present specification provides a message processing method applied to a DPU deployed in a physical machine.
In an exemplary embodiment of the present description, the method shown in fig. 2 is applied to a DPU 311 as shown in fig. 3, and the DPU 311 is mounted on a physical machine 31.
The DPU 311 includes a control chip 3111 and a processing chip 3112, and vDPUs corresponding one to one to the cloud hosts in the physical machine are deployed on the DPU. For example, the vDPU 1 corresponding to the cloud host 1 includes: a control end 1 deployed in the control chip 3111, and a processing flow table 1 corresponding to the cloud host 1 and established in the processing chip 3112; the vDPU 2 corresponding to the cloud host 2 includes: a control end 2 deployed in the control chip 3111, and a processing flow table 2 corresponding to the cloud host 2 and established in the processing chip 3112. The control end 1 is configured to receive configuration information issued by the management user 1 of the cloud host 1, and to issue flow table entries to the processing flow table 1 corresponding to the cloud host 1 according to the configuration information; the cloud host 2 issues flow table entries to the processing flow table 2 through the control end 2 in the same way. Of course, there may be more cloud hosts, in which case the numbers of control ends in the control chip and of processing flow tables in the processing chip correspond to the number of cloud hosts, which is not specifically limited in this application. Different cloud hosts correspond to different control ends and processing flow tables, which realizes configuration isolation between the cloud hosts.
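For illustration only, the following Python sketch models the per-cloud-host vDPU structure described above; the class and field names (FlowEntry, ProcessingFlowTable, ControlEnd, apply_config) are assumptions of this sketch and are not taken from the disclosed implementation:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FlowEntry:
    match: Dict[str, str]          # e.g. {"dst_ip": "10.0.0.2"}
    actions: List[str]             # e.g. ["meter:meter_1", "forward:port1"]

@dataclass
class ProcessingFlowTable:         # lives in the processing chip
    entries: List[FlowEntry] = field(default_factory=list)

@dataclass
class ControlEnd:                  # lightweight virtual machine in the control chip
    cloud_host_id: str
    flow_table: ProcessingFlowTable

    def apply_config(self, config: Dict) -> None:
        # Translate configuration information from the cloud host's managing user
        # into a flow table entry and install it in the corresponding flow table.
        self.flow_table.entries.append(
            FlowEntry(match=config["match"], actions=config["actions"]))

# One vDPU (control end + processing flow table) per cloud host on the DPU.
vdpus = {
    "cloud_host_1": ControlEnd("cloud_host_1", ProcessingFlowTable()),
    "cloud_host_2": ControlEnd("cloud_host_2", ProcessingFlowTable()),
}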
In an exemplary embodiment of the present specification, the control end 1 or 2 may be a virtual machine running in the control chip 3111, and the virtual machine may be implemented with a lightweight virtualization technology such as Kata, Docker, or Multipass, which is not limited in this specification. The lightweight virtual machines correspond one to one to the user cloud hosts.
The message processing method described in fig. 2 mainly includes the following steps:
s201, a processing chip receives a message to be processed aiming at a target cloud host.
In an exemplary embodiment of the present specification, it is assumed that, as shown in fig. 3, the target cloud host is the cloud host 1, and the processing chip 3112 receives a message to be processed for the cloud host 1.
In an exemplary embodiment of the present specification, the message to be processed may come from the target cloud host itself, or from another object different from the target cloud host. For example, assume the target cloud host is the cloud host 1 and information is exchanged between the cloud host 1 and a certain terminal. If the message to be processed is sent from the cloud host 1 toward the terminal, the processing chip 3112 receives the message to be processed from the cloud host 1; if the transmission direction is from the terminal to the cloud host 1, the processing chip 3112 receives the message to be processed from the terminal. Alternatively, assume information is exchanged between the cloud host 1 and the cloud host 2, with the message to be processed sent from the cloud host 1 to the cloud host 2. When the target cloud host is the cloud host 1, the processing chip 3112 receives the message to be processed from the cloud host 1 (the target cloud host) according to its transmission direction; when the target cloud host is the cloud host 2, the processing chip 3112 receives the message to be processed from the cloud host 1 (a non-target cloud host) according to its transmission direction.
S202, the processing chip searches whether a flow table entry matched with the message to be processed exists in a target processing flow table corresponding to the target cloud host.
The configuration information is issued by a management user of the cloud host and defines the rules by which messages of that cloud host are processed. Once flow table entries generated from the configuration information have been issued to the processing flow table corresponding to the cloud host, the processing chip processes received messages according to those rules, so that message processing meets the requirements of the management user. The processing flow table thus records, in the form of flow table entries, the processing rules defined by the management user through the configuration information. After receiving a message to be processed, the processing chip determines, according to the target cloud host for which the message is intended, the processing flow table corresponding to that target cloud host, and searches that flow table for a flow table entry matching the message, thereby determining the rule according to which the message is to be processed.
In an exemplary embodiment of the present specification, as shown in fig. 3, assume the target cloud host is the cloud host 1. The flow table entries generated by the control end 1 according to the configuration information issued by the management user 1 of that cloud host are issued to the processing flow table 1, and the processing flow table 1 thus determines the processing rules for messages of the cloud host 1.
In an exemplary embodiment of this specification, a port table is deployed in the processing chip 3112. The port table records the port numbers of the cloud hosts corresponding to all processing flow tables maintained by the processing chip 3112, and the port number of each cloud host is distinct. For example, assume the processing chip 3112 maintains processing flow table 1 and processing flow table 2 corresponding to the cloud host 1 and the cloud host 2. When the processing chip 3112 receives a message to be processed for the cloud host 1, it looks up the port number of the cloud host 1 in the port table, and then determines, according to the predefined mapping between port numbers and processing flow tables, that the target processing flow table corresponding to that port number is the processing flow table 1. The processing chip then searches the processing flow table 1 for a flow table entry matching the message to be processed, that is, checks whether a processing rule for that message is stored in the processing flow table 1. If so, the message to be processed is processed according to that rule and forwarded to the corresponding target object.
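A minimal sketch of the lookup in S202, reusing the simplified structures from the earlier sketch; the port-table keys (here virtual network card MAC addresses) and the helper name are illustrative assumptions:

def lookup_flow_entry(port_table, flow_tables, packet):
    """Return the flow table entry matching `packet`, or None on a miss."""
    port_no = port_table.get(packet["target_cloud_host"])  # e.g. the virtual NIC MAC address
    if port_no is None:
        return None                                         # unknown cloud host
    target_table = flow_tables[port_no]                     # predefined port-number -> flow-table mapping
    for entry in target_table.entries:
        if all(packet.get(k) == v for k, v in entry.match.items()):
            return entry                                     # hit: a processing rule exists
    return None                                              # miss: no matching entry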
In an exemplary embodiment of the present specification, the port number may be a MAC address of a virtual network card of the cloud host.
In an exemplary embodiment of the present specification, the control ends correspond one to one to the network cards of the cloud hosts, and each control end is in the same network segment as the IP of the network card of its corresponding cloud host. A management user of a cloud host can therefore log in to the corresponding control end directly from the cloud host via SSH (Secure Shell protocol) or the like, and control and manage the acceleration resources occupied by the cloud host through the control end.
S203, the processing chip processes the message to be processed according to the found flow table entry.
In an exemplary embodiment of the present specification, depending on the source of the message to be processed, the direction in which the processing chip forwards it after processing also differs. As described above, the message to be processed may come from the target cloud host or from another object different from the target cloud host. If the message comes from the target cloud host, the processing chip processes it according to the found flow table entry and forwards it to the target object indicated by the message. If the message comes from another object, the processing chip processes it according to the found flow table entry and forwards it to the corresponding target cloud host.
In an exemplary embodiment of the present specification, the hardware processing resources of the processing chip are maintained in a hardware resource pool as hardware processing modules that implement preset processing functions, and a flow table entry in the processing flow table records a mapping relationship with at least one hardware processing module. For example, as shown in fig. 3, the processing flow table 2 corresponding to the cloud host 2 has a mapping relationship with the hardware processing module 1 in the hardware resource pool of the processing chip 3112. Taking a hardware processing module that implements a speed limit function as an example, the speed limit function is implemented by a speed limit table, which is the hardware processing module 1. The speed limit table stores speed limit entries; assume it contains two entries, denoted meter 1 and meter 2, where meter 1 is preset to "limit to 1M bandwidth" and meter 2 is preset to "limit to 10M bandwidth". Assuming that a message to be processed from the cloud host 1 needs to be limited to a bandwidth of 1M, the corresponding flow table entry in the processing flow table 1 records a mapping relationship with the speed limit entry meter 1. According to that flow table entry, the processing chip calls the corresponding hardware processing module, namely the speed limit module, from the resource pool, performs speed limit processing on the message, and forwards the processed message. In this specification, the processing chip may implement operations such as NAT (Network Address Translation) and encapsulation/decapsulation of message tunnels (e.g., VXLAN) in a similar manner.
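The following sketch illustrates, under the same assumptions, how a flow table entry's mapping to a hardware processing module (here a speed-limit meter) might be resolved in S203; the action encoding and module names are hypothetical:

# Illustrative hardware resource pool: each module implements a preset function.
hardware_resource_pool = {
    "meter_1": {"rate_limit_mbps": 1},    # "limit to 1M bandwidth"
    "meter_2": {"rate_limit_mbps": 10},   # "limit to 10M bandwidth"
}

def process_packet(entry, packet):
    """Apply the hardware modules mapped by a flow entry, then return the packet for forwarding."""
    for action in entry.actions:
        if action.startswith("meter:"):                        # mapping to a speed-limit entry
            meter = hardware_resource_pool[action.split(":", 1)[1]]
            packet["rate_limit_mbps"] = meter["rate_limit_mbps"]
        elif action.startswith("forward:"):                    # egress toward the target object
            packet["egress"] = action.split(":", 1)[1]
    return packet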
Hardware resources in the hardware resource pool that are not being called can be used for message processing of other cloud hosts, or for data-layer tasks of the CPU on the physical machine, thereby improving the computing efficiency of the physical machine. For example, the processing chip may further include a global processing flow table controlled by a vSwitch (virtual switch) installed in the control chip. The vSwitch is managed by an administrator of the physical machine, and the control chip issues global flow table entries to the global processing flow table in the processing chip through the vSwitch. The configuration of the global processing flow table is issued as follows: the first packet of a flow is first uploaded to the vSwitch, the vSwitch generates a global processing entry according to preset configuration information such as routes, an Access Control List (ACL), and Quality of Service (QoS) speed limits, and then issues the entry to the global processing flow table of the processing chip. When a message for the physical machine hits a global flow table entry in the global processing flow table, the message is processed and forwarded directly according to that entry, achieving hardware acceleration. The global processing flow table can also implement related processing such as encapsulation/decapsulation and traffic mirroring of messages.
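A sketch of the global slow path described above, again under assumed structures; the VSwitch class and its build_entry helper are placeholders rather than the actual vSwitch interface:

class VSwitch:
    """Stand-in for the administrator-managed vSwitch in the control chip."""
    def __init__(self, preset_config):
        self.preset_config = preset_config     # e.g. routes, ACL, QoS speed limits

    def build_entry(self, packet):
        # Generate a global processing entry for the first packet of a flow.
        return FlowEntry(match={"dst_ip": packet["dst_ip"]},
                         actions=self.preset_config.get(packet["dst_ip"], ["forward:uplink"]))

def handle_global_packet(global_table, vswitch, packet):
    for entry in global_table.entries:
        if all(packet.get(k) == v for k, v in entry.match.items()):
            return process_packet(entry, packet)      # hit: processed directly in hardware
    new_entry = vswitch.build_entry(packet)           # first packet: uploaded to the vSwitch
    global_table.entries.append(new_entry)            # entry issued to the global flow table
    return process_packet(new_entry, packet)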
In an exemplary embodiment of this specification, when no flow table entry in the processing flow table matches the message to be processed, that is, the message misses every entry in the processing flow table, the processing chip may upload the message to the control end and hand it over to the control end for processing. The control end can automatically generate a corresponding new flow table entry for the message according to its predefined configuration information and add the new entry to the processing flow table, so that subsequent messages of the same kind hit the new entry and are processed by the processing chip.
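A sketch of this miss path, continuing the earlier assumptions; the predefined_config argument stands in for the configuration information already held by the control end:

def handle_miss(control_end, packet, predefined_config):
    """Per-cloud-host miss path: the control end generates a new entry from predefined configuration."""
    new_entry = FlowEntry(match={"dst_ip": packet["dst_ip"]},
                          actions=predefined_config.get(packet["dst_ip"], ["forward:cloud_host"]))
    control_end.flow_table.entries.append(new_entry)   # later packets of this flow hit in hardware
    return process_packet(new_entry, packet)           # this first packet is handled once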
In another exemplary embodiment of this specification, a message uploaded to the control end may also be handled as follows: the control end may process the message directly using the software resources of the control chip, without relying on the hardware resources of the processing chip; or the control end may simply discard the message. The control end may also send a prompt to its corresponding cloud host, prompting the management user of the cloud host to modify or add configuration information, so that the control end issues a corresponding new flow table entry to the processing flow table of that cloud host; subsequent messages of the same kind then hit the new entry and are processed by the processing chip.
Of course, in another exemplary embodiment of the present specification, the to-be-processed message may also be directly discarded by the processing chip.
In this specification, the control end and processing flow table corresponding to a cloud host occupy part of the software and hardware resources of the DPU, forming a vDPU that is isolated from the other resources in the DPU for the time being. The vDPU provides the corresponding cloud host with a hardware acceleration capability like that which the DPU provides to the physical machine, which is equivalent to equipping the cloud host with its own intelligent network card. The vDPU thus provides a dedicated acceleration function for the cloud host and improves its computing efficiency; when the cloud host no longer occupies hardware resources in the DPU for message acceleration, the occupied resources are released back into the hardware resource pool, where they can be occupied by other cloud hosts or used to implement the acceleration function of the physical machine itself.
To improve the security of configuration information issued by the management user of a cloud host and to avoid abnormal behavior of the hardware processing resources in the processing chip caused by abnormal configuration commands, an exemplary embodiment of this specification provides a way to issue configuration information securely. Abnormal configuration commands may result from malicious or erroneous configuration information issued by the management user, or from other similar causes. As shown in fig. 3, a configuration channel is established between the control end 1 corresponding to the cloud host 1 and the management process 1 running in the control chip. The configuration channel is used to redirect the configuration information sent by the cloud host 1 through the control end 1 to the management process 1; the management process performs a security check on the configuration information and filters out configuration information that does not meet the security conditions. If the configuration information passes the security check, the management process notifies the control end, and the control end issues flow table entries to the processing flow table 1 corresponding to the cloud host 1 according to that configuration information. Configuration information that fails the security check cannot be used as a basis for generating flow table entries. Checking configuration information in this way prevents a management user of the cloud host from issuing illegal commands to the processing flow table through the control end and thereby causing the hardware processing resources in the processing chip to behave abnormally.
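A minimal sketch of the secured configuration path, with an assumed predicate-based check; the forbidden-action list and function names are illustrative only:

FORBIDDEN_ACTIONS = {"flush_all_tables", "write_global_table"}   # illustrative rules only

def security_check(config) -> bool:
    """Management process: reject configuration that violates the safety conditions."""
    return not any(a in FORBIDDEN_ACTIONS for a in config.get("actions", []))

def submit_config(control_end, check, config):
    """Redirect configuration over the configuration channel; only checked config yields flow entries."""
    if check(config):
        control_end.apply_config(config)   # control end issues the flow table entry
        return True
    return False                           # rejected: not used as a basis for flow entries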
In an exemplary embodiment of the present specification, a message processing method is provided, applied to a client corresponding to a management user of a target cloud host, where the physical machine on which the target cloud host runs is equipped with a DPU, the DPU includes a control chip and a processing chip, and virtual DPUs corresponding one to one to the cloud hosts in the physical machine are deployed on the DPU. Each virtual DPU includes: control ends deployed in the control chip and corresponding one to one to the cloud hosts, and processing flow tables established in the processing chip and respectively corresponding to the control ends. The specific steps of the method are shown in fig. 4.
Assuming that the target cloud host in the above method is the cloud host 2 shown in fig. 3, the corresponding management user is the user 2 shown in fig. 3. Wherein, the cloud host 2 is deployed on the physical machine 31, the DPU 311 is deployed on the physical machine 31, and the DPU 311 includes a control chip 3111 and a processing chip 3112. And a virtual DPU 1 corresponding to the cloud host 1 and a virtual DPU 2 corresponding to the cloud host 2 in the physical machine 31 are deployed on the DPU 311. The virtual DPU 1 includes a control end 1 and a processing flow table 1, and the virtual DPU 2 includes a control end 2 and a processing flow table 2.
S401, responding to a configuration instruction sent by the management user, and generating configuration information corresponding to the configuration instruction.
The management user of the cloud host 2, namely the user 2, sends a configuration instruction; in response, the client of the user 2 generates configuration information corresponding to the instruction and sends it to the control end 2 corresponding to the cloud host 2.
S402, the configuration information is issued to a control end corresponding to the target cloud host, so that the control end issues flow table entries to a processing flow table corresponding to the target cloud host according to the configuration information, and the processing chip processes the message to be processed for the target cloud host according to the flow table entries.
After the client issues the configuration information to the control end 2 corresponding to the cloud host 2, the control end 2 issues a flow table entry to the processing flow table 2 according to the configuration information. When the processing chip 3112 later handles a message to be processed, if the message matches that flow table entry, it is processed according to the entry.
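A client-side sketch of S401 and S402 under the same assumptions; the instruction and configuration formats are invented for illustration:

def on_config_instruction(instruction, control_end):
    # S401: generate configuration information corresponding to the instruction.
    config = {
        "match": {"dst_ip": instruction["dst_ip"]},
        "actions": ["meter:" + instruction["meter"], "forward:" + instruction["port"]],
    }
    # S402: issue it to the control end of the target cloud host, which turns it into
    # a flow table entry for the target processing flow table (after the security check).
    submit_config(control_end, security_check, config)
    return config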
To facilitate understanding, this specification provides an exemplary embodiment in the specific context described below, in which the technical solution is applied in an NFV (Network Functions Virtualization) scenario. As shown in fig. 5, assume that an NFV 1 is originally set up in the physical machine 51, and that the DPU 511 deployed in the physical machine 51 includes a vDPU 1 corresponding to the NFV 1, where the vDPU 1 includes: the control end 1 running in the control chip 5111, and the processing flow table 1 in the processing chip 5112. When a new NFV 2 is added to the physical machine 51, a control end 2 corresponding to the NFV 2 is run in the control chip 5111, and the control end 2 establishes a processing flow table 2 corresponding to the NFV 2 in the processing chip 5112. The control end 2 is configured to receive configuration information issued by the management user 2 of the NFV 2: the management user 2 logs in to the corresponding control end 2 to configure the NAT, QoS speed limits, message encapsulation/decapsulation, and other operations required by the NFV 2, generating the corresponding configuration information. The configuration information is redirected to the management process through the configuration management channel 2; the management process is implemented by the vSwitch in the control chip. The management process performs a security check on the configuration information from the control end 2, and if it passes, notifies the control end 2, which issues flow table entries to the processing flow table 2 corresponding to the NFV 2 according to the configuration information. Messages to be processed for the NFV 2 can then be processed and forwarded according to the processing rules determined by those flow table entries.
Similarly, the management user 1 of the NFV 1 may generate corresponding configuration information by logging in to the corresponding control end 1 to configure the NAT, QoS speed limits, message encapsulation/decapsulation, and other operations required by the NFV 1. This configuration information is likewise security-checked by the management process on the vSwitch in the control chip 5111, following the configuration process of the NFV 2.
During message processing, assume the message 1 to be processed for the NFV 1 received by the processing chip 5112 comes from the NFV 1 itself. The processing chip 5112 searches the processing flow table 1 for a flow table entry matching the message; if one exists, then according to the processing rule determined by that entry it calls the hardware processing resource 1 in the hardware resource pool of the processing chip 5112 to perform NAT, QoS speed limiting, and similar processing on the message 1, and hands the message over to the global processing flow table. At this point the vDPU 1 corresponding to the NFV 1 has finished processing the message 1, and the message enters the global processing stage. The processing chip 5112 searches the global processing flow table for a global flow table entry matching the message 1 and processes it according to that entry; for example, the global flow table entry may call the hardware processing resource 2 in the hardware resource pool of the processing chip 5112 to process the message. The processed message 1 is then forwarded out of the DPU to the target object it indicates, completing its processing.
Now assume the message 2 to be processed for the NFV 2 received by the processing chip 5112 comes from another object different from the NFV 2. The message 2 enters the DPU and first undergoes global processing: for example, according to the global flow table entry matching the message 2 in the global processing flow table, the hardware processing resource 3 in the processing chip 5112 is used to process it. The global processing flow table then consults the port table to determine the port number of the NFV 2 to which the message is to be sent, determines the processing flow table 2 corresponding to the NFV 2 through that port number, and hands the message over to the processing flow table 2 for further processing. According to the flow table entry in the processing flow table 2 that matches the message 2, the message is processed and forwarded to the network card corresponding to the NFV 2, completing its processing.
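The inbound path for the NFV 2 described above can be stitched together from the earlier sketches; all names remain illustrative assumptions:

def handle_inbound_for_nfv(global_table, vswitch, port_table, flow_tables, packet):
    packet = handle_global_packet(global_table, vswitch, packet)   # global processing first
    port_no = port_table[packet["target_cloud_host"]]              # port number of NFV 2
    nfv_table = flow_tables[port_no]                               # processing flow table 2
    for entry in nfv_table.entries:
        if all(packet.get(k) == v for k, v in entry.match.items()):
            return process_packet(entry, packet)                   # forwarded to NFV 2's network card
    return None   # miss: the packet would instead be handed to NFV 2's control end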
Fig. 6 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present specification. Referring to fig. 6, at the hardware level the device includes a processor 602, an internal bus 604, a network interface 606, a memory 608, and a non-volatile memory 610; hardware required for other functions may of course also be included. The processor 602 reads the corresponding computer program from the non-volatile memory 610 into the memory 608 and runs it, forming a message processing apparatus at the logical level. Besides a software implementation, one or more embodiments of this specification do not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units and may also be hardware or logic devices.
Corresponding to the embodiment of the method, the present specification further provides a message processing apparatus.
Referring to fig. 7, a message processing apparatus is applied to a DPU assembled on a physical machine, where the DPU includes a control chip and a processing chip, and virtual DPUs corresponding one to one to the cloud hosts in the physical machine are deployed on the DPU. Each virtual DPU includes: control ends deployed in the control chip and corresponding one to one to the cloud hosts, and processing flow tables established in the processing chip and respectively corresponding to the control ends; the control end is configured to receive configuration information issued by a management user of the corresponding cloud host, and to issue flow table entries to the processing flow table of that cloud host according to the configuration information. The apparatus may include:
a receiving unit 710, configured to receive, by the processing chip, a message to be processed for a target cloud host;
a searching unit 720, configured to search, by the processing chip, whether a flow table entry matching the message to be processed exists in a target processing flow table corresponding to the target cloud host;
the processing unit 730, configured to process the packet to be processed by the processing chip according to the found flow table entry.
Optionally, the receiving unit 710 may specifically be configured to:
the processing chip receives a message to be processed sent by the target cloud host, or the processing chip receives a message to be processed sent by other objects different from the target cloud host;
accordingly, the processing unit 730 may be specifically configured to:
if the message to be processed comes from the target cloud host, the processing chip processes the message to be processed according to the found flow table entry and forwards it to the target object indicated by the message to be processed;
and if the message to be processed comes from other objects, the processing chip processes the message to be processed according to the searched flow table entry and forwards the message to be processed to a corresponding target cloud host.
Optionally, the searching unit 720 may be specifically configured to:
the processing chip searches the port number of the target cloud host in a port table, and the port table is used for recording the port numbers of the cloud hosts corresponding to all processing flow tables maintained by the processing chip;
the processing chip determines a target processing flow table corresponding to the port number of the target cloud host according to a mapping relation between a predefined port number and the processing flow table under the condition that the port number of the target cloud host is determined to be contained in the port table;
and the processing chip searches whether a flow table entry matched with the message to be processed exists in the target processing flow table or not.
Optionally, the hardware processing resources of the processing chip are maintained in a hardware resource pool as hardware processing modules for implementing preset processing functions, and a flow table entry in the processing flow table records a mapping relationship with at least one hardware processing module; the processing unit 730 may be specifically configured to:
the processing chip calls a corresponding hardware processing module from the resource pool to process the message to be processed according to the mapping relation recorded in the searched flow table entry;
and the processing chip forwards the processed message to be processed.
Optionally, the message processing apparatus may further include:
a first uploading unit 740, configured to, if no flow table entry matching the message to be processed exists in the processing flow table, upload the message to be processed to the control end by the processing chip;
an adding unit 750, configured to generate, by the control end according to the configuration information, a corresponding new flow table entry for the message to be processed, and add the new flow table entry to the processing flow table.
Optionally, the message processing apparatus may further include:
a second uploading unit 760, configured to upload the message to be processed to the control end by the processing chip if the flow table entry does not exist in the processing flow table;
a control end processing unit 770, configured to process the to-be-processed packet by the control end.
Optionally, a management process runs in the control chip, and a configuration channel is established between the management process and the control end; the message processing apparatus may further include:
a redirection unit 780, configured to, when the control end receives the configuration information issued by a corresponding management user, redirect the configuration information to the management process through the configuration channel, so that the management process performs security check on the configuration information;
and a configuration issuing unit 790, configured to, if the configuration information passes the security check, issue, by the control end, the flow table entry to the processing flow table corresponding to the corresponding cloud host according to the configuration information.
Optionally, the control end and the cloud host corresponding to the control end are in the same network segment.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
In one or more embodiments of the present specification, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of the present description to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments herein. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.
The above description is only for the purpose of illustrating the preferred embodiments of the one or more embodiments of the present disclosure, and is not intended to limit the scope of the one or more embodiments of the present disclosure, and any modifications, equivalent substitutions, improvements, etc. made within the spirit and principle of the one or more embodiments of the present disclosure should be included in the scope of the one or more embodiments of the present disclosure.

Claims (11)

1. A message processing method, applied to a DPU assembled on a physical machine, wherein the DPU comprises a control chip and a processing chip, virtual DPUs corresponding one to one to the cloud hosts in the physical machine are deployed on the DPU, and each virtual DPU comprises: control ends which are deployed in the control chip and correspond one to one to the cloud hosts, and processing flow tables which are established in the processing chip and respectively correspond to the control ends; the control end is used for receiving configuration information issued by a management user of the corresponding cloud host, and issuing flow table entries to the processing flow table corresponding to that cloud host according to the configuration information; the method comprises the following steps:
the processing chip receives a message to be processed aiming at a target cloud host;
the processing chip searches whether a flow table entry matched with the message to be processed exists in a target processing flow table corresponding to the target cloud host or not;
and the processing chip processes the message to be processed according to the searched flow table entry.
2. The method of claim 1, wherein the processing chip receives a message to be processed for a target cloud host, comprising:
the processing chip receives a message to be processed sent by the target cloud host, or the processing chip receives a message to be processed sent by other objects different from the target cloud host;
the processing chip processes the message to be processed according to the found flow table entry, and the processing chip comprises the following steps:
if the message to be processed comes from the target cloud host, the processing chip processes the message to be processed according to the found flow table entry and forwards the message to be processed to a target object indicated by the message to be processed;
and if the message to be processed comes from other objects, the processing chip processes the message to be processed according to the searched flow table entry and forwards the message to be processed to a corresponding target cloud host.
3. The method of claim 1, wherein the step of searching, by the processing chip, whether a flow table entry matching the message to be processed exists in a target processing flow table corresponding to the target cloud host comprises:
the processing chip searches the port number of the target cloud host in a port table, and the port table is used for recording the port numbers of the cloud hosts corresponding to all processing flow tables maintained by the processing chip;
the processing chip determines a target processing flow table corresponding to the port number of the target cloud host according to a mapping relation between a predefined port number and the processing flow table under the condition that the port number of the target cloud host is determined to be contained in the port table;
and the processing chip searches whether a flow table entry matched with the message to be processed exists in the target processing flow table or not.
4. The method of claim 1, wherein hardware processing resources of the processing chip are maintained in a hardware resource pool as hardware processing modules for implementing preset processing functions, and a flow table entry in the processing flow table records a mapping relationship with at least one hardware processing module; and the processing chip processing the message to be processed according to the found flow table entry comprises:
the processing chip calls the corresponding hardware processing module from the resource pool to process the message to be processed according to the mapping relationship recorded in the found flow table entry;
and the processing chip forwards the processed message.
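The resource-pool dispatch of claim 4 can be sketched as a flow table entry that names the hardware processing modules to invoke in order. HardwareModule, encapModule, and the module names are illustrative assumptions; on the actual processing chip these would be fixed-function hardware blocks, not Go values.

```go
// Sketch of claim 4: a flow table entry maps to hardware processing modules
// that the processing chip pulls from a resource pool.
package main

// HardwareModule stands in for a fixed-function block on the processing chip.
type HardwareModule interface {
	Process(msg []byte) []byte
}

type encapModule struct{}

func (encapModule) Process(msg []byte) []byte { return append([]byte("hdr:"), msg...) }

// FlowEntry records the mapping relationship: the names of the modules it needs.
type FlowEntry struct {
	Modules []string
}

// processWithPool walks the entry's module list, pulls each module from the
// resource pool, applies it, and returns the processed message for forwarding.
func processWithPool(pool map[string]HardwareModule, e FlowEntry, msg []byte) []byte {
	for _, name := range e.Modules {
		if m, ok := pool[name]; ok {
			msg = m.Process(msg)
		}
	}
	return msg
}

func main() {
	pool := map[string]HardwareModule{"encap": encapModule{}}
	_ = processWithPool(pool, FlowEntry{Modules: []string{"encap"}}, []byte("payload"))
}
```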
5. The method of claim 1, further comprising:
if no matching flow table entry exists in the target processing flow table, the processing chip sends the message to be processed to the control end;
and the control end generates a corresponding newly added flow table entry for the message to be processed according to the configuration information and adds the newly added flow table entry to the processing flow table.
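The flow-table-miss path of claim 5 can be sketched as below: the control end derives a new entry from the cloud host's configuration information and installs it so that later messages of the same flow hit in the processing chip. Config, ControlEnd, and onMiss are illustrative assumptions.

```go
// Sketch of the miss path in claim 5: control end generates and installs a
// newly added flow table entry from the configuration information.
package main

type Config struct {
	DefaultAction string // stands in for the management user's configuration information
}

type ControlEnd struct {
	cfg   Config
	table map[string]string // the processing flow table it populates (flow key -> action)
}

// onMiss is invoked when no matching flow table entry exists: it generates a
// new entry for the message's flow key from the configuration and adds it to
// the processing flow table.
func (c *ControlEnd) onMiss(flowKey string, msg []byte) {
	c.table[flowKey] = c.cfg.DefaultAction // newly added flow table entry
}

func main() {
	ce := &ControlEnd{cfg: Config{DefaultAction: "forward"}, table: map[string]string{}}
	ce.onMiss("10.0.0.1->10.0.0.2:tcp/80", []byte("first packet of the flow"))
}
```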
6. The method of claim 1, further comprising:
if no matching flow table entry exists in the target processing flow table, the processing chip sends the message to be processed to the control end;
and the control end processes the message to be processed.
7. The method of claim 1, wherein a management process runs in the control chip, and a configuration channel is established between the management process and the control end; the method further comprising:
upon receiving configuration information issued by the corresponding management user, the control end redirects the configuration information to the management process through the configuration channel, so that the management process performs a security check on the configuration information;
and if the configuration information passes the security check, the control end issues the flow table entries to the processing flow table corresponding to the cloud host according to the configuration information.
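Claim 7's check-before-install behavior can be sketched as follows. ConfigInfo, ManagementProcess, Verify, and OnConfig are illustrative assumptions; the configuration channel is modeled as a direct method call rather than an inter-process channel.

```go
// Sketch of claim 7: configuration is redirected to a management process for a
// security check before any flow table entry is issued.
package main

import "errors"

type ConfigInfo struct {
	HostID string
	Rule   string
}

// ManagementProcess performs the security check inside the control chip.
type ManagementProcess struct{}

func (ManagementProcess) Verify(c ConfigInfo) error {
	if c.Rule == "" {
		return errors.New("empty rule rejected")
	}
	return nil // passes the security check
}

type ControlEnd struct {
	channel ManagementProcess // stands in for the configuration channel
	table   map[string]string // processing flow table for the corresponding cloud host
}

// OnConfig redirects the configuration to the management process; only if the
// security check passes does it issue a flow table entry.
func (ce *ControlEnd) OnConfig(c ConfigInfo) error {
	if err := ce.channel.Verify(c); err != nil {
		return err
	}
	ce.table[c.HostID] = c.Rule
	return nil
}

func main() {
	ce := &ControlEnd{table: map[string]string{}}
	_ = ce.OnConfig(ConfigInfo{HostID: "host-1", Rule: "allow tcp/443"})
}
```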
8. The method of claim 1, wherein the control end and its corresponding cloud host are in the same network segment.
9. A message processing method, applied to a client corresponding to a management user of a target cloud host, wherein the physical machine on which the target cloud host is located is equipped with a DPU, the DPU comprises a control chip and a processing chip, virtual DPUs in one-to-one correspondence with the cloud hosts in the physical machine are deployed on the DPU, and the virtual DPUs comprise: control ends deployed in the control chip in one-to-one correspondence with the cloud hosts, and processing flow tables established in the processing chip and respectively corresponding to the control ends; the method comprising:
in response to a configuration instruction sent by the management user, generating configuration information corresponding to the configuration instruction;
and issuing the configuration information to the control end corresponding to the target cloud host, so that the control end issues flow table entries to the processing flow table corresponding to the target cloud host according to the configuration information, and the processing chip processes a message to be processed for the target cloud host according to the flow table entries.
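The client-side flow of claim 9 amounts to generating configuration information from the user's instruction and sending it to the control end of the target cloud host. The sketch below assumes a hypothetical HTTP endpoint on the control end and an assumed JSON shape for the configuration information; the patent does not specify the transport or format.

```go
// Sketch of the client flow in claim 9: instruction -> configuration
// information -> control end of the target cloud host.
package main

import (
	"bytes"
	"encoding/json"
	"net/http"
)

type ConfigInstruction struct {
	Action string // e.g. "allow", "deny"
	Match  string // e.g. "tcp/443"
}

type ConfigInfo struct {
	TargetHost string `json:"target_host"`
	Rule       string `json:"rule"`
}

// pushConfig generates configuration information from the instruction and
// issues it to the control end corresponding to the target cloud host.
func pushConfig(controlEndURL, hostID string, in ConfigInstruction) error {
	info := ConfigInfo{TargetHost: hostID, Rule: in.Action + " " + in.Match}
	body, err := json.Marshal(info)
	if err != nil {
		return err
	}
	_, err = http.Post(controlEndURL, "application/json", bytes.NewReader(body))
	return err
}

func main() {
	// Hypothetical control end URL; the call simply fails if nothing is listening.
	_ = pushConfig("http://control-end.local/config", "host-1",
		ConfigInstruction{Action: "allow", Match: "tcp/443"})
}
```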
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 9.
11. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the method according to any one of claims 1 to 9.
CN202210312192.4A 2022-03-28 2022-03-28 Message processing method and device Active CN114422367B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210312192.4A CN114422367B (en) 2022-03-28 2022-03-28 Message processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210312192.4A CN114422367B (en) 2022-03-28 2022-03-28 Message processing method and device

Publications (2)

Publication Number Publication Date
CN114422367A CN114422367A (en) 2022-04-29
CN114422367B true CN114422367B (en) 2022-09-06

Family

ID=81264454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210312192.4A Active CN114422367B (en) 2022-03-28 2022-03-28 Message processing method and device

Country Status (1)

Country Link
CN (1) CN114422367B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115460145A (en) * 2022-08-15 2022-12-09 阿里云计算有限公司 Forwarding rule issuing method, intelligent network card and storage medium
CN115222538B (en) * 2022-08-15 2022-12-13 深圳星云智联科技有限公司 Market situation snapshot data calculation method and device, electronic equipment and storage medium
CN115102863B (en) * 2022-08-26 2022-11-11 珠海星云智联科技有限公司 Method and device for dynamically configuring DPU (distributed processing Unit) hardware resource pool
CN115114222B (en) * 2022-08-30 2022-12-27 珠海星云智联科技有限公司 Market information snapshot distribution method and related device
CN115168280B (en) * 2022-08-30 2022-12-02 珠海星云智联科技有限公司 Market quotation snapshot processing method and related device
CN115150203B (en) * 2022-09-02 2022-11-15 珠海星云智联科技有限公司 Data processing method and device, computer equipment and storage medium
CN116301663A (en) * 2023-05-12 2023-06-23 新华三技术有限公司 Data storage method, device and host

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113037647A (en) * 2021-03-17 2021-06-25 杭州迪普科技股份有限公司 Message processing method, device, equipment and computer readable storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105306241B (en) * 2014-07-11 2018-11-06 华为技术有限公司 A kind of service deployment method and network function accelerate platform
CN108347341A (en) * 2017-01-24 2018-07-31 华为技术有限公司 A kind of acceleration capacity method of adjustment and device for adjusting virtual machine acceleration capacity
EP4220396A1 (en) * 2017-11-15 2023-08-02 Huawei Technologies Co., Ltd. Acceleration resource scheduling method and acceleration system
EP3732837A4 (en) * 2017-12-29 2021-11-03 Nokia Technologies Oy Virtualized network functions
CN111193969B (en) * 2018-11-14 2023-04-28 中兴通讯股份有限公司 Data communication and communication management method based on DPU and DPU
CN112764872B (en) * 2021-04-06 2021-07-02 阿里云计算有限公司 Computer device, virtualization acceleration device, remote control method, and storage medium
CN113703919A (en) * 2021-08-26 2021-11-26 深圳云豹智能有限公司 Simulation system based on single-machine virtual DPU

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113037647A (en) * 2021-03-17 2021-06-25 杭州迪普科技股份有限公司 Message processing method, device, equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN114422367A (en) 2022-04-29

Similar Documents

Publication Publication Date Title
CN114422367B (en) Message processing method and device
US11706158B2 (en) Technologies for accelerating edge device workloads
US9374316B2 (en) Interoperability for distributed overlay virtual environment
CN114025021B (en) Communication method, system, medium and electronic equipment crossing Kubernetes cluster
US10904728B2 (en) Mobile application accelerator
JP6616957B2 (en) Communication system and communication method
CN113467970B (en) Cross-security-area resource access method in cloud computing system and electronic equipment
CN111694519B (en) Method, system and server for mounting cloud hard disk on bare metal server
US20240179115A1 (en) Virtual network routing gateway that supports address translation for dataplans as well as dynamic routing protocols (control plane)
KR101936942B1 (en) Distributed computing acceleration platform and distributed computing acceleration platform control method
KR101973946B1 (en) Distributed computing acceleration platform
US20220263713A1 (en) Invalidating cached flow information in a cloud infrastructure
CN110300068B (en) ARP resource management method and device and electronic equipment
KR101493933B1 (en) Method, appratus, system and computer-readable recording medium for assisting communication of virtual machine using hardware switch and software switch
CN116319354B (en) Network topology updating method based on cloud instance migration
US11637812B2 (en) Dynamic forward proxy chaining
CN116668372B (en) Flow control method and related device
US20230246956A1 (en) Invalidating cached flow information in a cloud infrastructure
US20230385139A1 (en) Network api credentials within a translation session
US20220417138A1 (en) Routing policies for graphical processing units
CN114500058A (en) Network access control method, system, device and medium
EP4360280A1 (en) Routing policies for graphical processing units
WO2022173554A1 (en) Packet flow in a cloud infrastructure based on cached and non-cached configuration information
CN117579547A (en) Message routing acceleration method, device, main equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant