WO2022199559A1 - Packet processing method and network device - Google Patents

Packet processing method and network device

Info

Publication number
WO2022199559A1
WO2022199559A1 (PCT/CN2022/082138)
Authority
WO
WIPO (PCT)
Prior art keywords
entry
index value
network device
packet
action information
Prior art date
Application number
PCT/CN2022/082138
Other languages
English (en)
French (fr)
Inventor
肖诗汉
吴波
王海博
徐小飞
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP22774216.0A, published as EP4311187A1
Publication of WO2022199559A1
Priority to US18/471,725, published as US20240022512A1

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
            • H04L41/08 Configuration management of networks or network elements
              • H04L41/0894 Policy-based network configuration management
            • H04L41/16 Arrangements using machine learning or artificial intelligence
          • H04L45/00 Routing or path finding of packets in data switching networks
            • H04L45/02 Topology update or discovery
              • H04L45/08 Learning-based routing, e.g. using neural networks or artificial intelligence
            • H04L45/28 Routing using route fault recovery
            • H04L45/48 Routing tree calculation
            • H04L45/74 Address processing for routing
              • H04L45/745 Address table lookup; Address filtering
          • H04L47/00 Traffic control in data switching networks
            • H04L47/10 Flow control; Congestion control
              • H04L47/32 Flow control or congestion control by discarding or delaying data units, e.g. packets or frames
          • H04L49/00 Packet switching elements
            • H04L49/55 Prevention, detection or correction of errors
    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00 Computing arrangements based on biological models
            • G06N3/02 Neural networks
              • G06N3/04 Architecture, e.g. interconnection topology
              • G06N3/08 Learning methods

Definitions

  • The present application relates to network technologies, and in particular, to a packet processing method and a network device.
  • Table lookup is one of the core functions of a network; efficient table lookup effectively improves the packet processing efficiency of the network.
  • the FIB table includes FIB entries.
  • FIB entries are typically searched using a prefix tree (trie) search algorithm.
  • A search tree structure can be constructed from the prefixes of the entries in the FIB table.
  • a search tree structure is stored in the network device, and a table lookup function is implemented through the search tree structure.
  • The storage required for the search tree structure in the network device is determined mainly by the size of the structure.
  • The speed at which the network device performs a table lookup through the search tree structure is determined mainly by the height of the structure.
  • As FIB tables grow, the search tree structure used to look up FIB entries occupies more and more storage in the network device, resulting in high storage overhead.
  • Embodiments of the present application provide a packet processing method and a network device, which reduce storage overhead and speed up determining the action information corresponding to a first packet, thereby reducing packet processing delay.
  • A first aspect of the embodiments of the present application provides a packet processing method, which includes:
  • The network device obtains a first packet; the network device determines a first index value according to address information of the first packet and a neural network model; the network device determines the action information corresponding to the first packet from an action information table according to the first index value, where the action information table includes at least one entry and each entry corresponds to an index value and a piece of action information; the network device processes the first packet according to the action information corresponding to the first packet.
  • In this solution, the network device determines the first index value according to the address information of the first packet and the neural network model, then determines the action information corresponding to the first packet from the action information table according to the first index value, and processes the first packet accordingly.
  • the network device does not need to store a large-scale search tree structure, thereby avoiding the storage overhead caused by the search tree structure.
  • Compared with determining the action information by searching a tree structure, the method in which the network device uses the neural network model and the action information table to determine the action information corresponding to the first packet is faster, and the lookup delay is smaller.
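The basic flow above can be sketched as follows. `predict_index` is a hypothetical stand-in for the trained neural network model, and the table contents are illustrative; only the shape of the flow (model inference followed by an indexed table lookup, with no tree to traverse or store) reflects the text.

```python
def predict_index(dst_addr: int) -> int:
    """Stand-in for the neural network model: a trivial mapping
    used only to make the sketch runnable."""
    return dst_addr % 4

# Action information table: one entry per index value (illustrative).
ACTION_TABLE = {
    0: {"action": "forward", "port": 1},
    1: {"action": "forward", "port": 2},
    2: {"action": "drop"},
    3: {"action": "forward", "port": 3},
}

def process_packet(dst_addr: int) -> dict:
    idx = predict_index(dst_addr)   # model inference replaces tree traversal
    return ACTION_TABLE[idx]        # O(1) indexed lookup

print(process_packet(6))  # -> {'action': 'drop'}
```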
  • In a possible implementation, the method further includes: the network device determines a second index value according to the address information of the first packet and a search tree structure, where the search tree structure corresponds to the entries in the message rule information table that the neural network model cannot fit.
  • Usually, about 90% of the entries in the message rule information table can be fitted by the neural network model, and only a few entries cannot; therefore, the network device only needs to store the search tree structure corresponding to the entries that the neural network model cannot fit.
  • The technical solution of this implementation compresses the existing large-scale search tree structure, reduces the storage overhead of the search tree structure in the network device, and effectively increases forwarding capacity.
  • The method in which the network device determines the first index value according to the address information of the first packet and the neural network model is faster, and the lookup delay is smaller.
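The fallback path can be sketched with a small binary trie standing in for the search tree structure that holds the entries the model cannot fit; lookup returns the index value of the longest matching prefix. All names, prefixes, and index values here are illustrative.

```python
class TrieNode:
    __slots__ = ("children", "index")
    def __init__(self):
        self.children = [None, None]
        self.index = None  # index value if a prefix ends at this node

def insert(root: TrieNode, prefix_bits: str, index: int) -> None:
    node = root
    for b in prefix_bits:
        i = int(b)
        if node.children[i] is None:
            node.children[i] = TrieNode()
        node = node.children[i]
    node.index = index

def lookup(root: TrieNode, addr_bits: str):
    """Longest-prefix match: walk the trie, remember the last index seen."""
    node, best = root, None
    for b in addr_bits:
        node = node.children[int(b)]
        if node is None:
            break
        if node.index is not None:
            best = node.index
    return best

root = TrieNode()
insert(root, "10", 7)     # prefix 10/2   -> index value 7
insert(root, "1011", 9)   # prefix 1011/4 -> index value 9
print(lookup(root, "10110000"))  # -> 9 (longest match is 1011/4)
```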
  • the neural network model is obtained by performing model training according to the message rule information table.
  • the neural network model is obtained by model training according to the message rule information table, so that the network device can determine the index value through the neural network model, and the network device does not need to store the search tree structure, thereby reducing storage overhead.
  • The message rule information table includes at least one entry, each entry corresponds to an index value and a piece of action information, and the action information table is used to indicate the action information corresponding to each entry in the message rule information table.
  • the first entry in the message rule information table corresponds to the second entry in the action information table, and the first entry and the second entry respectively include one or more entries;
  • the index values corresponding to the first entry are respectively the same as the index values corresponding to the second entry, and the index values corresponding to the second entry include the first index value.
  • the first entry in the message rule information table corresponds to the second entry in the action information table, respectively, and the first entry and the second entry respectively include one or more entries;
  • the index values corresponding to the first entry are respectively the same as the index values corresponding to the second entry.
  • In a possible implementation, the network device determining the first index value according to the address information of the first packet and the neural network model includes: the network device determines a third index value according to the address information of the first packet and the neural network model, where the third index value corresponds to a third entry in the message rule information table; the network device then determines, from a mapping table according to the third index value, the first index value corresponding to a fourth entry in the action information table, where the fourth entry is the entry in the action information table corresponding to the third entry, and the mapping table records, for each entry in the message rule information table, the index value of the corresponding entry in the action information table.
  • the network device determines the third index value according to the address information of the first packet and the neural network model.
  • the network device determines the first index value corresponding to the fourth entry of the action information table from the mapping table according to the third index value, thereby determining the action information corresponding to the first packet.
  • In a possible implementation, the network device determining the first target index value from the first index value and the second index value includes: the network device determines the mask length corresponding to the first index value and the mask length corresponding to the second index value; if the mask length corresponding to the first index value is greater than the mask length corresponding to the second index value, the network device selects the first index value as the first target index value; if the mask length corresponding to the first index value is less than the mask length corresponding to the second index value, the network device selects the second index value as the first target index value.
  • The first target index value is selected according to the mask lengths corresponding to the index values, that is, according to the longest-prefix-match principle, so that the action information for the first packet is determined more accurately.
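The mask-length selection rule can be sketched as below. The text only specifies the strictly-greater and strictly-less cases, so the tie-breaking behavior (preferring the model's result on equal mask lengths) is an assumption.

```python
def select_target_index(idx_nn: int, mask_nn: int,
                        idx_tree: int, mask_tree: int) -> int:
    """Longest-prefix-match selection between the index from the neural
    network path and the index from the search tree path. Tie-breaking
    with >= is an assumption, not specified in the text."""
    return idx_nn if mask_nn >= mask_tree else idx_tree

# A /24 entry beats a /16 entry regardless of which path produced it.
print(select_target_index(5, 24, 9, 16))  # -> 5
print(select_target_index(5, 8, 9, 16))   # -> 9
```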
  • In a possible implementation, the network device determining the first target index value from the first index value and the second index value includes: the network device determines that the first index value corresponds to a fifth entry of an error correction table and the second index value corresponds to a sixth entry of the error correction table; the error correction table includes at least one entry, each entry corresponds to an index value and a priority, the entries in the error correction table correspond one-to-one, in order of index value, to the entries in the message rule information table, each entry in the message rule information table corresponds to a priority, and each entry in the error correction table has the same priority as its corresponding entry in the message rule information table; the network device determines the priority corresponding to the fifth entry and the priority corresponding to the sixth entry according to the error correction table; if the priority corresponding to the fifth entry is higher than that corresponding to the sixth entry, the network device selects the first index value as the first target index value; if it is lower, the network device selects the second index value as the first target index value.
  • the network device determines the first target index value.
  • The first target index value is determined according to the priorities of the entries corresponding to the index values, so that the action information for the first packet can be determined accurately.
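The priority-based selection can be sketched as below. The index-to-priority mapping is illustrative, and preferring the first index on equal priorities is an assumption (the text only covers the strictly-higher and strictly-lower cases).

```python
# Index value -> priority, mirroring the error correction table entries
# whose priorities are inherited from the message rule information table
# (values illustrative).
PRIORITY = {5: 10, 9: 30}

def select_by_priority(idx_a: int, idx_b: int) -> int:
    """Pick the index whose error-correction-table entry has the higher
    priority; >= on ties is an assumption."""
    return idx_a if PRIORITY[idx_a] >= PRIORITY[idx_b] else idx_b

print(select_by_priority(5, 9))  # -> 9 (priority 30 beats priority 10)
```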
  • In a possible implementation, the method further includes: the network device determines the prefixes and masks corresponding to a seventh entry from the error correction table; the error correction table includes at least one entry, each entry corresponds to an index value and has corresponding address information, and the address information corresponding to each entry includes a prefix and a mask; the seventh entry includes the entry corresponding to the first index value in the error correction table and the entries corresponding to index values within a preset threshold range of the first index value.
  • The network device determines, from the seventh entry, an eighth entry whose prefix matches the destination address of the first packet and whose mask has the largest mask length among the masks corresponding to the seventh entry; the network device determines a fourth index value corresponding to the eighth entry; and the network device determining the action information corresponding to the first packet from the action information table according to the first index value includes: the network device determines the action information corresponding to the first packet from the action information table according to the fourth index value.
  • In this implementation, the network device determines the first index value according to the address information of the first packet and the neural network model, performs error correction on the first index value according to the error correction table to obtain the fourth index value, determines the action information corresponding to the first packet from the action information table according to the fourth index value, and processes the first packet accordingly. The network device therefore does not need to store a large-scale search tree structure, avoiding the associated storage overhead.
  • Compared with determining the action information corresponding to the first packet by searching a tree structure, the method in which the network device uses the neural network model, the error correction table, and the action information table is faster, and the lookup delay is smaller. Further, because the network device corrects the first index value according to the error correction table to obtain the fourth index value before determining the action information, the action information for the first packet is determined more accurately.
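The error-correction step can be sketched as below: the model's predicted index may be slightly off, so entries within a small window around it are checked and the one whose prefix matches the destination with the longest mask wins. The table layout and window size are illustrative assumptions.

```python
# Error correction table, ordered by index value: (prefix bits, mask length).
ECT = [
    ("1000", 4),   # entry at index 0
    ("10",   2),   # entry at index 1
    ("1011", 4),   # entry at index 2
    ("11",   2),   # entry at index 3
]

def correct_index(predicted: int, dst_bits: str, window: int = 1) -> int:
    """Check entries within +/- window of the model's prediction and
    return the index of the longest-prefix match; window=1 is an
    illustrative choice for the preset threshold range."""
    best_idx, best_len = predicted, -1
    lo = max(0, predicted - window)
    hi = min(len(ECT) - 1, predicted + window)
    for i in range(lo, hi + 1):
        prefix, mlen = ECT[i]
        if dst_bits.startswith(prefix) and mlen > best_len:
            best_idx, best_len = i, mlen
    return best_idx

print(correct_index(1, "10110000"))  # -> 2 (entry "1011"/4 matches longest)
```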
  • In a possible implementation, the method further includes: the network device determines the prefixes and masks corresponding to a ninth entry from the error correction table; the error correction table includes at least one entry, each entry corresponds to an index value and has corresponding address information, and the address information corresponding to each entry includes a prefix and a mask; the ninth entry includes the entry corresponding to the first index value in the error correction table and the entries corresponding to index values within a preset threshold range of the first index value.
  • The network device determines, from the ninth entry, a tenth entry whose prefix matches the destination address of the first packet and whose mask has the largest mask length among the masks corresponding to the ninth entry; the network device determines a fifth index value corresponding to the tenth entry; the network device determines a sixth index value according to the address information of the first packet and the search tree structure, where the search tree structure corresponds to the entries in the message rule information table that the neural network model cannot fit; the network device determines a second target index value from the fifth index value and the sixth index value; and the network device determining the action information corresponding to the first packet from the action information table according to the first index value includes: the network device determines the action information corresponding to the first packet from the action information table according to the second target index value.
  • The technical solution of this implementation compresses the existing large-scale search tree structure, reduces the storage overhead of the search tree structure in the network device, and effectively increases forwarding capacity.
  • the method in which the network device uses the neural network model, the error correction table and the action information table to determine the fourth index value is faster and has a smaller search delay.
  • The network device may further perform error correction on the first index value according to the error correction table to obtain the fifth index value, and then determine the action information corresponding to the first packet by combining the fifth index value and the sixth index value. In this way, the action information for the first packet is determined more accurately.
  • In a possible implementation, the action information corresponding to the first packet includes port information, and the network device processing the first packet according to the action information includes: the network device determines the next-hop routing node of the first packet according to the port information, and forwards the first packet to the next-hop routing node.
  • This implementation provides a specific process of applying the present application to a forwarding scenario.
  • the network device determines the port information, and then forwards the first packet according to the port information.
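The forwarding step can be sketched as below; the port-to-next-hop mapping and addresses are illustrative.

```python
# Port -> next-hop routing node (illustrative addresses).
NEXT_HOP = {1: "10.0.0.1", 2: "10.0.0.2"}

def forward(action: dict) -> str:
    """Resolve the next-hop routing node from the port information
    carried in the packet's action information."""
    return NEXT_HOP[action["port"]]

print(forward({"action": "forward", "port": 2}))  # -> 10.0.0.2
```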
  • In a possible implementation, before the network device determines the first index value according to the address information of the first packet and the neural network model, the method further includes: the network device determines a neural network structure, and trains the neural network structure according to the message rule information table to obtain the neural network model.
  • The message rule information table includes at least one entry, each entry corresponds to an index value and a piece of action information, and the action information table is used to indicate the action information corresponding to each entry in the message rule information table.
  • a process of training a neural network model by a network device is provided, which provides a basis for the implementation of the solution.
  • the network device conducts model training through the message rule information table and the neural network structure.
  • Subsequently, the network device can determine the index value according to the address information of a packet and the neural network model, without storing the message rule information table or the corresponding search tree structure, thereby avoiding the associated storage overhead.
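The training step can be sketched as below. A one-neuron linear model fitted by stochastic gradient descent stands in for the patent's neural network; the (address, index) pairs play the role of the message rule information table and are illustrative.

```python
# Illustrative (address, index) pairs standing in for the message rule
# information table; addresses are normalized to [0, 1).
pairs = [(0.0, 0), (0.25, 1), (0.5, 2), (0.75, 3)]

# One-neuron linear model trained by SGD on the squared error; this is
# a stand-in for the neural network structure described above.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    for x, y in pairs:
        err = w * x + b - y
        w -= lr * err * x   # gradient step on 0.5 * err^2
        b -= lr * err

def model_index(addr: float) -> int:
    """Round the model output to the nearest entry index value."""
    return round(w * addr + b)

print([model_index(x) for x, _ in pairs])  # -> [0, 1, 2, 3]
```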
  • In a possible implementation, the method further includes: the control plane of the network device sends a first message to the data plane of the network device, where the first message is used to deliver or update the neural network model to the data plane of the network device.
  • the neural network model can be delivered to the data plane of the network device.
  • the corresponding index value can be determined by using the neural network model and the address information of the packet.
  • In a possible implementation, the first message includes a second packet; the header of the second packet includes a neural network model enable bit (set to one), the height of the neural network model, the width of the neural network model, and the identifiers of the micromodels included in the neural network model; and the payload of the second packet includes the parameters of the neural network model.
  • The control plane of the network device may send the relevant parameters of the neural network model to the data plane of the network device in the form of packets.
  • This implementation provides a specific format in which a packet carries the relevant parameters of the neural network model.
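An illustrative encoding of the second packet described above is sketched below; the header fields follow the names in the text, but the field widths and parameter encoding are assumptions, not taken from the patent.

```python
import struct

def build_model_packet(height: int, width: int,
                       micromodel_id: int, params) -> bytes:
    """Header: enable bit (set to one), model height, model width,
    micromodel identifier; payload: model parameters as float32.
    All field widths are illustrative assumptions."""
    enable = 1
    header = struct.pack("!BBBB", enable, height, width, micromodel_id)
    payload = struct.pack("!%df" % len(params), *params)
    return header + payload

pkt = build_model_packet(3, 8, 0, [0.5, -1.25])
print(len(pkt))  # -> 12 (4-byte header + 2 x 4-byte float32 parameters)
```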
  • In a possible implementation, the method further includes: the network device determines the entries in the message rule information table that cannot be fitted by the neural network model, where the message rule information table includes at least one entry, each entry corresponds to an index value and a piece of action information, and the action information table is used to indicate the action information corresponding to each entry in the message rule information table; the network device represents the entries that the neural network model cannot fit according to a search tree algorithm, obtaining the search tree structure.
  • the network device can use the search tree algorithm to represent the table items that cannot be fitted by the neural network model to obtain a search tree structure.
  • Usually, about 90% of the entries in the message rule information table can be fitted by the neural network model, and only a few entries cannot, so the network device only needs to store the search tree structure corresponding to the entries that the neural network model cannot fit. The technical solution of this implementation therefore compresses the existing large-scale search tree structure, reduces the storage overhead of the search tree structure in the network device, and effectively increases forwarding capacity.
  • The network device determines the final target index value by combining the neural network model with the search tree lookup, so that the action information corresponding to a packet can be determined more accurately.
  • In a possible implementation, the method further includes: the control plane of the network device sends a second message to the data plane of the network device, where the second message is used to deliver or update the search tree structure to the data plane of the network device.
  • the search tree structure can be delivered to the data plane of the network device.
  • the corresponding index value can be determined according to the search tree structure and the address information of the packet.
  • In a possible implementation, the second message includes a third packet; the header of the third packet includes a search tree enable bit (set to one), the type of the search tree structure, the identifier of the start node to be updated in the search tree structure, and the identifier of the termination node to be updated in the search tree structure; and the payload of the third packet includes the search tree structure.
  • The control plane of the network device may deliver the search tree structure to the data plane of the network device in the form of a message.
  • This implementation provides a specific format in which a packet carries the search tree structure.
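An illustrative encoding of the third packet described above is sketched below; the header fields follow the names in the text, but the field widths and the tree serialization are assumptions.

```python
import struct

def build_tree_packet(tree_type: int, start_node: int,
                      end_node: int, tree_blob: bytes) -> bytes:
    """Header: search tree enable bit (set to one), type of the search
    tree structure, identifiers of the start and termination nodes to
    update; payload: the serialized search tree structure. Field widths
    are illustrative assumptions."""
    enable = 1
    header = struct.pack("!BBHH", enable, tree_type, start_node, end_node)
    return header + tree_blob

pkt = build_tree_packet(0, 17, 42, b"\x01\x02\x03")
print(len(pkt))  # -> 9 (6-byte header + 3-byte payload)
```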
  • In a possible implementation, the method further includes: the control plane of the network device sends a third message to the data plane of the network device, where the third message is used to deliver or update the error correction table to the data plane of the network device.
  • The control plane of the network device delivers an error correction table to the data plane of the network device.
  • the data plane of the network device can perform error correction on the index value obtained through the neural network model to obtain the final index value. Therefore, the corresponding action information can be more accurately determined for the message.
  • In a possible implementation, the third message includes a fourth packet; the header of the fourth packet includes an error correction table enable bit (set to one) and the start and end positions of the entries to be updated in the error correction table; and the payload of the fourth packet includes the prefixes and masks corresponding to the entries to be updated.
  • The control plane of the network device may send the error correction table to the data plane of the network device in the form of a message.
  • This implementation provides a specific format in which a packet carries the error correction table.
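An illustrative encoding of the fourth packet described above is sketched below; the header fields follow the names in the text, but the field widths and the (prefix, mask) entry encoding are assumptions.

```python
import struct

def build_ect_packet(start: int, end: int, entries) -> bytes:
    """Header: error correction table enable bit (set to one), start
    and end positions of the entries to update; payload: one
    (prefix, mask length) pair per entry. Field widths are
    illustrative assumptions."""
    enable = 1
    header = struct.pack("!BHH", enable, start, end)
    payload = b"".join(struct.pack("!IB", prefix, mask)
                       for prefix, mask in entries)
    return header + payload

pkt = build_ect_packet(4, 5, [(0x0A000000, 8), (0x0A010000, 16)])
print(len(pkt))  # -> 15 (5-byte header + 2 x 5-byte entries)
```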
  • A second aspect of the embodiments of the present application provides a network device, where the network device includes:
  • a transceiver module, configured to obtain a first packet;
  • a processing module, configured to: determine a first index value according to address information of the first packet and a neural network model; determine the action information corresponding to the first packet from an action information table according to the first index value, where the action information table includes at least one entry and each entry corresponds to an index value and a piece of action information; and process the first packet according to the action information corresponding to the first packet.
  • processing module is also used to:
  • the second index value is determined according to the address information of the first message and the search tree structure, and the search tree structure is the search tree structure corresponding to the entries in the message rule information table that cannot be fitted by the neural network model;
  • the processing module is specifically used for:
  • the action information corresponding to the first packet is determined from the action information table according to the first target index value.
  • the neural network model is obtained by performing model training according to the message rule information table.
  • the message rule information table includes at least one entry, each entry corresponds to an index value and an action information, and the action information table is used to indicate each entry in the message rule information table corresponding action information.
  • the first entry in the message rule information table corresponds to the second entry in the action information table, and the first entry and the second entry respectively include one or more entries;
  • the index values corresponding to the first entry are respectively the same as the index values corresponding to the second entry, and the index values corresponding to the second entry include the first index value.
  • processing module is specifically used for:
  • a third index value is determined according to the address information of the first message and the neural network model, and the third index value is an index value corresponding to the third entry in the message rule information table;
  • the first index value corresponding to the fourth entry in the action information table is determined from the mapping table according to the third index value, where the fourth entry is the entry corresponding to the third entry in the action information table, and the mapping table includes message rule information The index value of the entry of the action information table corresponding to each entry in the table.
  • processing module is specifically used for:
  • the first index value is selected as the first target index value
  • the second index value is selected as the first target index value.
  • processing module is specifically used for:
  • the error correction table includes at least one entry, and each entry corresponds to an index value and a priority; the entries in the error correction table correspond one-to-one, in order of index value, to the entries in the message rule information table; each entry in the message rule information table corresponds to a priority, and each entry in the error correction table has the same priority as its corresponding entry in the message rule information table;
  • the first index value is selected as the first target index value
  • the second index value is selected as the first target index value.
  • processing module is also used to:
  • the error correction table includes at least one entry, each entry corresponds to an index value, each entry has corresponding address information, and each entry corresponds to The address information includes a prefix and a mask;
  • the seventh entry includes an entry corresponding to the first index value in the error correction table and an entry corresponding to an index value within the preset threshold range of the first index value;
  • the mask corresponding to the eighth entry is the mask with the largest mask length among the masks corresponding to the seventh entry;
  • the processing module is specifically used for:
  • the action information corresponding to the first packet is determined from the action information table according to the fourth index value.
  • processing module is also used to:
  • the error correction table includes at least one entry, each entry corresponds to an index value, each entry has corresponding address information, and each entry corresponds to The address information includes a prefix and a mask;
  • the ninth table entry includes the table entry corresponding to the first index value in the error correction table and the table entry corresponding to the index value within the preset threshold range of the first index value;
  • the sixth index value is determined according to the address information of the first message and the search tree structure;
  • the search tree structure is the search tree structure corresponding to the entries in the message rule information table that cannot be fitted by the neural network model;
  • the processing module is specifically used for:
  • the action information corresponding to the first packet is determined from the action information table according to the second target index value.
  • the action information corresponding to the first packet includes port information; the processing module is specifically configured to:
  • the processing module is also used to:
  • the message rule information table includes at least one entry, each entry corresponds to an index value and an action information, and the action information table is used to indicate the message Action information corresponding to each entry in the rule information table.
  • the control plane of the network device sends a first message to the data plane of the network device, and the first message is used to deliver or update the neural network model to the data plane of the network device.
  • the first message includes the second packet;
  • the packet header of the second packet includes the neural network model enable bit, the height of the neural network model, the width of the neural network model, and the identifier of the micromodel included in the neural network model, where the neural network model enable bit takes the value of one;
  • the payload of the second packet includes the parameters of the neural network model.
  • the processing module is also used to:
  • the message rule information table includes at least one table entry, each table entry corresponds to an index value and an action information, and the action information table is used to indicate the message Action information corresponding to each entry in the rule information table;
  • the entries that cannot be fitted by the neural network model are represented through the search tree algorithm, and the search tree structure is obtained.
  • the control plane of the network device sends a second message to the data plane of the network device, and the second message is used to deliver or update the search tree structure to the data plane of the network device.
  • the second message includes a third packet;
  • the packet header of the third packet includes the search tree enable bit, the type of the search tree structure, the identifier of the start node to be updated in the search tree structure, and the identifier of the termination node to be updated in the search tree structure, where the value of the search tree enable bit is one;
  • the payload of the third packet includes the search tree structure.
  • the control plane of the network device sends a third message to the data plane of the network device, and the third message is used to deliver or update the error correction table to the data plane of the network device.
  • the third message includes a fourth packet;
  • the packet header of the fourth packet includes the error correction table enable bit, the start position and the end position of the entry to be updated in the error correction table , the value of the error correction table enable bit is one;
  • the payload of the fourth packet includes the prefix and mask corresponding to the table entry to be updated.
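As a rough illustration of the update packets described above, the fourth packet (the error correction table update) could be serialized as a fixed header followed by (prefix, mask-length) pairs. All field widths below are assumptions chosen for illustration only; the application does not specify exact sizes.

```python
import struct

# Hypothetical layout: 1-byte enable bit, 2-byte start position, 2-byte end
# position of the entries to be updated, followed by (prefix, mask-length)
# pairs as the payload. Field widths are illustrative assumptions.

HDR = struct.Struct("!BHH")    # enable bit, start, end (network byte order)
ENTRY = struct.Struct("!IB")   # 32-bit prefix, 8-bit mask length

def build_update_packet(start, end, entries):
    """entries: list of (prefix, mask_len) for the positions start..end."""
    payload = b"".join(ENTRY.pack(p, m) for p, m in entries)
    return HDR.pack(1, start, end) + payload   # enable bit takes the value one

def parse_update_packet(data):
    enable, start, end = HDR.unpack_from(data, 0)
    count = (len(data) - HDR.size) // ENTRY.size
    entries = [ENTRY.unpack_from(data, HDR.size + i * ENTRY.size)
               for i in range(count)]
    return enable, start, end, entries
```

The first and second messages (neural network model and search tree updates) would follow the same pattern with their own header fields.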
  • a third aspect of the embodiments of the present application provides a network device, where the network device includes a processor, configured to implement the method described in any possible implementation manner of the foregoing first aspect.
  • the network device may further include a memory for storing instructions, and when the processor executes the instructions stored in the memory, the method described in any possible implementation manner of the first aspect may be implemented.
  • the network device further includes a communication interface, where the communication interface is used for the network device to communicate with other devices.
  • the communication interface may be a transceiver, circuit, bus, module, pin, or other type of communication interface.
  • a fourth aspect of the embodiments of the present application provides a computer-readable storage medium, where computer instructions are stored in the computer-readable storage medium, and when the computer instructions are executed on a computer, the computer is caused to execute the method in any possible implementation manner of the first aspect above.
  • a fifth aspect of the embodiments of the present application provides a computer program product, where the computer program product includes computer program code, and when the computer program code is run on a computer, the computer is caused to execute the method in any possible implementation manner of the first aspect above.
  • a seventh aspect of an embodiment of the present application provides a chip, including a processor.
  • the processor is configured to execute the method in any possible implementation manner of the above first aspect.
  • the chip further includes a memory coupled to the processor.
  • the chip further includes a communication interface.
  • the embodiments of the present application have the following advantages:
  • the network device obtains the first packet; then, the network device determines the first index value according to the address information of the first packet and the neural network model.
  • the network device determines the action information corresponding to the first packet from the action information table according to the first index value, where the action information table includes at least one entry and each entry corresponds to an index value and an action information; the network device then processes the first packet according to the action information corresponding to the first packet. Therefore, the present application determines the first index value using the neural network model and the address information of the first packet, and determines the action information corresponding to the first packet from the action information table according to the first index value, thereby realizing the processing of the first packet.
  • the network device does not need to store a large-scale search tree structure, avoiding the storage overhead caused by the search tree structure.
  • FIG. 1 is a schematic structural diagram of a network device according to an embodiment of the present application.
  • FIG. 2A is a schematic diagram of an embodiment of a packet processing method according to an embodiment of the present application.
  • 2B is a schematic diagram of a scenario of a packet processing method according to an embodiment of the present application.
  • 2C is a schematic diagram of another scenario of the packet processing method according to the embodiment of the present application.
  • 2D is a schematic storage diagram of a neural network model, a search tree structure and an action information table in a network device according to an embodiment of the present application;
  • 3A is a schematic diagram of another embodiment of a packet processing method according to an embodiment of the present application.
  • 3B is a schematic diagram of a search tree structure according to an embodiment of the present application.
  • 3C is a schematic diagram of another scenario of the packet processing method according to the embodiment of the present application.
  • FIG. 4A is a schematic diagram of another embodiment of the packet processing method according to the embodiment of the present application.
  • 4B is a schematic diagram of another scenario of a packet processing method according to an embodiment of the present application.
  • 5A is a schematic diagram of another embodiment of a packet processing method according to an embodiment of the present application.
  • 5B is a schematic diagram of another scenario of the packet processing method according to the embodiment of the present application.
  • 6A is a schematic diagram of another embodiment of a packet processing method according to an embodiment of the present application.
  • 6B is a schematic structural diagram of a neural network model according to an embodiment of the present application.
  • 6C is a schematic diagram of a format of a second message according to an embodiment of the present application.
  • FIG. 7A is a schematic diagram of another embodiment of a packet processing method according to an embodiment of the present application.
  • 7B is a schematic diagram of a format of a third message according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a format of a fourth message according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a network device according to an embodiment of the present application.
  • FIG. 10 is another schematic structural diagram of a network device according to an embodiment of the present application.
  • Embodiments of the present application provide a packet processing method and network device, which are used to reduce storage overhead and improve the speed of determining action information corresponding to a first packet, thereby reducing packet processing delay.
  • the technical solutions of the present application are applicable to various types of data communication network scenarios. For example, data center network, wide area network, local area network, metropolitan area network, mobile communication network and other application scenarios.
  • the data communication network system to which this application applies includes at least one network device.
  • the network device may be a router, a switch, or the like.
  • the network device may use the technical solutions provided in this application to process the packets received by the network device.
  • Action information: includes a processing operation instruction for the packet and/or related processing information required to perform processing operations on the packet, for example, discard operation information or port information. The action information is used to process the packet.
  • Message rule information table: includes at least one entry, where each entry corresponds to an index value, a piece of message rule information, and a piece of action information.
  • the packet rule information may be address information, port information, protocol number information, etc. of the packet.
  • the message rule information table is used to match the message information of the message with the message rule information in the message rule information table to determine the action information corresponding to the message or the index value of the action information corresponding to the message.
  • Action information table: includes at least one entry, where each entry corresponds to an index value and a piece of action information.
  • the action information table is used to determine the action information corresponding to the message through the index value.
  • Search tree structure: consists of nodes and edges, where each node stores a corresponding value.
  • the value can represent an index value or action information.
  • Neural network model: a network system formed by interconnected neurons.
  • model training is performed according to the entries in the message rule information table to obtain the neural network model.
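As a toy illustration of fitting the entries of a message rule information table with a model, the sketch below replaces the neural network with a simple linear least-squares fit over hypothetical (address, index value) pairs; real tables and models would be far larger.

```python
# A linear model stands in for the neural network: it is trained on
# (address -> index value) pairs taken from the message rule information
# table, and inference rounds the prediction to the nearest index value.
# The training data here is invented for illustration.

def fit_linear(points):
    """Closed-form least-squares fit of y = slope * x + intercept."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    var = sum((x - mx) ** 2 for x, _ in points)
    cov = sum((x - mx) * (y - my) for x, y in points)
    slope = cov / var
    return slope, my - slope * mx

def predict_index(model, addr):
    slope, intercept = model
    return round(slope * addr + intercept)
```

Because entries sorted by address carry monotonically increasing index values, even this crude fit can recover the index exactly for many entries; the ones it mispredicts are exactly the "entries that cannot be fitted" handled later by the search tree structure and error correction table.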
  • A schematic structural diagram of a network device provided by an embodiment of the present application is described below with reference to FIG. 1.
  • FIG. 1 is a schematic structural diagram of a network device according to an embodiment of the present application.
  • the network device includes a model training and verification module 101 , a table entry search module 102 , a result selection module 103 and a message processing module 104 .
  • the model training and verification module 101 is configured to perform model training based on the message rule information table to obtain a neural network model; and send the neural network model to the table entry search module 102 .
  • the message rule information table includes at least one entry, and each entry corresponds to an index value and an action information. Each entry has corresponding address information.
  • the packet rule information table may be a forwarding information base (FIB) table, or an access control list (ACL), or a firewall policy table, or a flow table, or Media access control (media access control, MAC) address table, etc., which are not specifically limited in this application.
  • the model training and verification module 101 is further configured to determine the entries that cannot be fitted by the neural network model, express those entries as a search tree structure through a search tree algorithm, and send the search tree structure to the table entry search module 102.
  • the table entry lookup module 102 is configured to receive the neural network model sent by the model training and verification module 101; and calculate and obtain the first index value according to the address information of the first packet and the neural network model.
  • the result selection module 103 is configured to determine the action information corresponding to the first packet from the action information table according to the first index value.
  • the action information table includes at least one entry, and each entry corresponds to an index value and an action information.
  • the action information table is used to indicate the action information corresponding to each entry in the message rule information table.
  • the message processing module 104 is configured to process the message with the action information corresponding to the first message.
  • the table entry search module 102 is further configured to calculate and obtain the second index value according to the address information of the first packet and the search tree structure.
  • the search tree structure is a search tree structure corresponding to entries in the message rule information table that cannot be fitted by the neural network model.
  • the result selection module 103 is further configured to determine the first target index value from the first index value and the second index value, and determine the action information corresponding to the first message from the action information table according to the first target index value.
  • the message processing module 104 is configured to process the first message with the action information corresponding to the first message.
  • the table entry search module 102 is further configured to perform error correction on the first index value according to the error correction table to obtain a seventh index value.
  • the error correction table includes at least one entry, each entry corresponds to an index value, and each entry has corresponding address information.
  • the address information of each entry in the error correction table is obtained from the message rule information table.
  • the result selection module 103 is configured to select the third target index value from the seventh index value and the second index value, and determine the action corresponding to the first message from the action information table according to the third target index value information.
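When several error-correction entries match a packet, the description above selects the entry whose mask length is the largest (the most specific match). This selection can be sketched as follows; the 8-bit addresses and entries are invented for illustration.

```python
# Pick, among candidate error-correction entries whose (prefix, mask)
# matches the packet's address, the one with the largest mask length.
# Addresses are illustrative 8-bit values.

def matches(addr, prefix, mask_len, width=8):
    shift = width - mask_len
    return (addr >> shift) == (prefix >> shift)

def select_by_longest_mask(addr, candidates):
    """candidates: list of (index_value, prefix, mask_len)."""
    best = None
    for idx, prefix, mask_len in candidates:
        if matches(addr, prefix, mask_len):
            if best is None or mask_len > best[1]:
                best = (idx, mask_len)
    return None if best is None else best[0]
```

If no candidate matches, the index value produced by the neural network model stands uncorrected.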
  • FIG. 2A is a schematic diagram of an embodiment of a packet processing method according to an embodiment of the present application.
  • the message processing method includes:
  • the network device obtains a first packet.
  • the data plane of the network device can extract the address information of the first packet. For example, the destination address and source address of the first packet.
  • the network device determines the first index value according to the address information of the first packet and the neural network model.
  • the address information of the first packet includes at least one of the following: a destination address of the first packet, and a source address of the first packet.
  • the neural network model is a neural network model obtained by model training according to the message rule information table.
  • the message rule information table includes at least one entry, each entry corresponds to an index value and an action information, and each entry has corresponding address information.
  • the address information corresponding to each entry includes a prefix and a mask.
  • An action information corresponding to each entry includes port information and the like.
  • one piece of action information corresponding to each entry may include action information for different network layers.
  • an action information corresponding to each entry includes a port number, an index value of a destination MAC address, and the like.
  • the network device forwards the first packet according to the port number.
  • the network device determines the destination MAC address according to the index value of the destination MAC address, and modifies the MAC address of the first packet to the destination MAC address.
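A minimal sketch of applying such multi-layer action information follows; the destination-MAC table and the action entry contents are hypothetical.

```python
# An action entry carries a port number (L3 forwarding decision) plus an
# index into a destination-MAC table (L2 header rewrite). Table contents
# are invented for illustration.

dmac_table = ["aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02"]

def apply_action(packet, action):
    packet = dict(packet)                                 # leave input untouched
    packet["dst_mac"] = dmac_table[action["dmac_index"]]  # rewrite MAC address
    packet["egress_port"] = action["port"]                # forward via this port
    return packet
```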
  • the message rule information table includes any one of the following: a FIB table, an ACL, a MAC address table, a firewall policy table, and a flow table.
  • the message rule information table is a FIB table.
  • the FIB table includes 6 rows, and each row can be understood as a table item. Therefore, six entries are included in Table 1. The first entry corresponds to index value 0, the second entry corresponds to index value 1, and so on, and the sixth entry corresponds to index value 5.
  • each FIB entry in the FIB table has corresponding prefix, mask and port information. That is, the address information corresponding to each entry includes a prefix and a mask, and one action information corresponding to each entry includes port information.
  • the port information is used to instruct the network device to determine the outgoing port of the packet, and forward the packet from the outgoing port to the next-hop routing node. For example, in Table 1 above, the prefix corresponding to the second entry is 0, the mask is /1, and the outgoing port is 3.
  • the packet rule information table is ACL.
  • the ACL includes 6 lines, and each line can be understood as an entry. Therefore, Table 2 includes six entries. It can be seen from Table 2 that each ACL entry in the ACL has corresponding quintuple information, where the quintuple information includes source address information, destination address information, source port information, destination port information, and protocol number information. In addition, each ACL entry has a corresponding priority.
  • the address information of the first packet includes the destination address of the first packet.
  • the network device inputs the destination address of the first packet as an input parameter into the neural network model, and obtains the first index value output by the neural network model.
  • the neural network model in the network device may be sent to the network device by other devices, or preconfigured in the network device, or obtained by the network device through self-training, which is not specifically limited in this application.
  • the neural network model obtained by the self-training of the network device please refer to the related introduction of the embodiment shown in FIG. 6 later, which will not be repeated here.
  • the network device may use a default action to process the first packet. For example, the network device discards the first packet; or, the network device forwards the first packet from the default port to the next-hop routing node.
  • the network device determines the action information corresponding to the first packet from the action information table according to the first index value.
  • the action information table includes at least one entry, and each entry corresponds to an index value and an action information.
  • the action information table is used to indicate the action information corresponding to each entry in the message rule information table.
  • the first entry in the message rule information table is in one-to-one correspondence with the second entry in the action information table, and the index values corresponding to the first entry are respectively the same as the index values corresponding to the second entry.
  • the first entry and the second entry respectively include one or more entries.
  • the index value corresponding to the second entry includes the first index value.
  • the message rule information table is the FIB table shown in Table 1 above, and the action information table can be represented as Table 3 below. As can be seen from Table 1 and Table 3, the first entry in the message rule information table corresponds to the first entry in the action information table, the second entry in the message rule information table corresponds to the second entry in the action information table, and so on. It can be seen from Table 1 that the message rule information table includes 6 entries: the first entry corresponds to index value 0, the second entry corresponds to index value 1, and so on, up to the sixth entry, which corresponds to index value 5.
  • the action information table includes 6 entries, the first entry corresponds to index value 0, the second entry corresponds to index value 1, and so on, the sixth entry corresponds to index value 5. Therefore, the first entry includes the first to sixth entries of the message rule information table.
  • the second entry includes the first to sixth entries of the action information table.
  • the respective index values corresponding to the first entry are the same as the respective index values corresponding to the second entry.
  • the first index value output by the network device through the neural network model in the above step 202 can be understood as the index value of the corresponding entry in the action information table. This eliminates the need for conversion of index values.
  • the network device determines the entry corresponding to the first index value from the action information table, and uses the action information corresponding to the entry as the action information corresponding to the first packet.
  • the number of entries included in the message rule information table is the same as the number of entries in the action information table, and the entries in the message rule information table are in one-to-one correspondence with the entries in the action information table according to the size order of the index values. For example, from Table 1 and Table 3 above, it can be known that the number of entries included in Table 3 is the same as the number of entries included in Table 1.
  • the entries in Table 3 may correspond to the entries in Table 1 one-to-one.
  • if the first index value determined in the foregoing step 202 is index value 0, it can be known that index value 0 corresponds to the first entry in the foregoing Table 1.
  • the first entry in Table 1 above corresponds to the first entry in Table 3.
  • the first index value can be understood as the index value of the first entry in Table 3. Then the network device can perform the above step 203 .
  • the port information corresponding to the entry in the action information table is the port information of the entry corresponding to the entry in the action information table in Table 1.
  • the first index value is index value 1. Because the entries in the message rule information table correspond one-to-one with the entries in the action information table according to the order of the size of the index values. That is, the index value corresponding to the entry in the message rule information table is the same as the index value corresponding to the corresponding entry in the action information table. Therefore, according to Table 3, the network device can determine that the index value 1 corresponds to the second entry, and the port information in the second entry includes port 2. Therefore, the network device can determine to forward the first packet from port 2.
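The direct-lookup path of this first implementation can be sketched as follows; the neural network is replaced by a stand-in stub, and the table contents are illustrative rather than the actual Table 3.

```python
# First implementation: the index produced by the neural network model is
# used directly as the index into the action information table, with no
# index conversion. `fake_model` stands in for the trained model; the
# table contents below are invented.

action_table = [
    {"port": 2},  # index value 0
    {"port": 2},  # index value 1
    {"port": 3},  # index value 2
]

def fake_model(dst_addr):
    # Stand-in for the neural network inference: a toy mapping.
    return dst_addr % len(action_table)

def lookup_action(dst_addr):
    first_index = fake_model(dst_addr)
    return action_table[first_index]
```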
  • The second possible implementation manner is described below in conjunction with step 203a and step 203b.
  • Step 203a: The network device determines a third index value according to the address information of the first packet and the neural network model.
  • the third index value is an index value corresponding to the third entry in the message rule information table.
  • the message rule information table is the above Table 1, and the third index value is the index value 2 corresponding to the third entry in the above Table 1.
  • Step 203b: The network device determines the first index value corresponding to the fourth entry of the action information table from the mapping table according to the third index value.
  • the fourth entry is an entry corresponding to the third entry in the action information table.
  • the mapping table includes an index value of an entry of the action information table corresponding to each entry in the message rule information table.
  • the number of entries included in the action information table is different from the number of entries included in the message rule information table.
  • the message rule information table is Table 1 above
  • the action information table is Table 4 below.
  • the action information table has only three entries, and each entry corresponds to an index value. It can be understood from Table 4 that each outgoing port corresponds to an index value: port 2 corresponds to index value 0, port 3 corresponds to index value 1, and port 1 corresponds to index value 2. Compared with Table 3 above, entries with the same outgoing port are merged, which can reduce the number of entries in the action information table and reduce the storage overhead.
  • the mapping table may include an index value of an entry of the action information table corresponding to each entry in the message rule information table.
  • Table 5 is a schematic diagram of the mapping table.
  • the outgoing port in the first entry shown in Table 1 is 2, and the outgoing port in the first entry shown in Table 4 is 2. Therefore, the first entry shown in Table 1 corresponds to the first entry shown in Table 4.
  • the index value corresponding to the first entry in Table 4 is 0, so the index value of the first row in the mapping table of Table 5 below is 0.
  • the egress port in the second entry shown in Table 1 is 3, and the egress port in the second entry shown in Table 4 is 3. Therefore, the second entry shown in Table 1 corresponds to the second entry shown in Table 4.
  • the index value corresponding to the second entry in Table 4 is 1, so the index value of the second row in the mapping table of Table 5 below is 1.
  • the outgoing port in the third entry shown in Table 1 is 3, and the outgoing port in the second entry shown in Table 4 is 3. Therefore, the third entry shown in Table 1 corresponds to the second entry shown in Table 4.
  • the index value corresponding to the second entry in Table 4 is 1, so the index value of the third row in the mapping table of Table 5 below is 1.
  • the sixth entry shown in Table 1 corresponds to the third entry shown in Table 4.
  • the index value corresponding to the third entry in Table 4 is 2, so the index value of the sixth row in the mapping table of Table 5 below is 2.
  • the index value in each entry in Table 5 above is the index value of the entry in Table 4 corresponding to the entry in Table 1.
  • the network device determines, according to Table 5, that the third entry in Table 1 corresponds to the second entry in Table 4, whose index value is 1; therefore, the first index value is index value 1.
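The indirection through the mapping table (steps 203a and 203b) can be sketched with the data of Tables 1, 4 and 5. The outgoing ports of the fourth and fifth entries of Table 1 are not given in the text, so those two mapping rows are assumed here for illustration.

```python
# Second implementation: the model yields a third index value into the
# message rule information table (Table 1); the mapping table (Table 5)
# converts it to the first index value into the deduplicated action
# information table (Table 4).

action_table = [{"port": 2}, {"port": 3}, {"port": 1}]   # Table 4
mapping_table = [0, 1, 1, 2, 0, 2]   # Table 5; rows 4 and 5 are assumed

def lookup_via_mapping(third_index):
    first_index = mapping_table[third_index]   # step 203b
    return action_table[first_index]
```

For example, the third entry of Table 1 (third index value 2) maps to index value 1, i.e. port 3, matching the correspondence described above.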
  • the network device processes the first packet according to the action information corresponding to the first packet.
  • the action information includes port information.
  • the above step 204 specifically includes step 204a and step 204b.
  • the network device determines the next-hop routing node of the first packet according to the port information.
  • the port information is port 2.
  • the routing node connected to port 2 of the network device is the next-hop routing node of the first packet.
  • the network device forwards the first packet to the next-hop routing node.
  • the port information is port 2.
  • the network device forwards the first packet to the next-hop routing node through port 2 of the network device.
  • the action information includes a discard operation. In this case, the above step 204 is specifically that the network device discards the first packet.
  • the model training and verification module in the network device performs model training based on the message rule information table to obtain a neural network model.
  • the table entry search module in the network device receives the first message, and then inputs the address information of the first message into the neural network model to obtain the first index value.
  • the result selection module of the network device determines the action information corresponding to the first packet from the action information table according to the first index value.
  • the packet processing module of the network device processes the first packet according to the action information corresponding to the first packet.
  • the process performed by the network device is similar to the related introduction in FIG. 2B, with the difference that: after the entry lookup module in the network device obtains the third index value, it determines, from the mapping table according to the third index value, the first index value corresponding to the entry in the action information table. The result selection module of the network device then determines the action information corresponding to the first packet from the action information table according to the first index value, and the packet processing module of the network device processes the first packet according to the action information corresponding to the first packet.
  • the network device can determine the first index value by using the neural network model and the address information of the first packet.
  • the network device determines action information corresponding to the first packet from the action information table according to the first index value, and processes the first packet according to the action information corresponding to the first packet.
  • the network device does not need to store a large-scale search tree structure, thus avoiding the storage overhead caused by the search tree structure.
  • the method in which the network device uses the neural network model and the action information table to determine the action information corresponding to the first packet is faster, and the delay is smaller.
  • the neural network model can be stored on-chip, and the action information table can be stored off-chip.
  • the network device only needs to perform on-chip memory access when processing the first packet, and does not need to perform off-chip memory access, thereby saving off-chip memory access bandwidth and effectively improving forwarding capacity.
  • when the neural network model is used to fit the entries included in the message rule information table, there may be entries that cannot be fitted.
  • the present application provides corresponding technical solutions for the case of entries in the message rule information table that cannot be fitted by the neural network model. The following description will be made with reference to the embodiment shown in FIG. 3A .
  • FIG. 3A is a schematic diagram of another embodiment of a packet processing method according to an embodiment of the present application.
  • the message processing method includes:
  • the network device obtains a first packet.
  • the network device determines the first index value according to the address information of the first packet and the neural network model.
  • Steps 301 to 302 are similar to steps 201 to 202 in the embodiment shown in FIG. 2A . For details, please refer to the related introductions of steps 201 to 202 in the embodiment shown in FIG. 2A , which will not be repeated here.
  • the network device determines the second index value according to the address information of the first packet and the search tree structure.
  • the search tree structure is the search tree structure corresponding to the entries in the message rule information table that cannot be fitted by the neural network model.
  • the message rule information table is the FIB table shown in Table 6 below.
  • where x is destination address information with a length of 3 bits, and y is an index value.
  • Table entries that the neural network model cannot fit are prefixed with binary 1.
  • For example, the index value of the entry whose prefix plus mask is 1/1 is 1, while the index value predicted by the above neural network model is 4. Then the prefix plus mask 1/1 and its corresponding index value 1 are added to the residual parameter table, and are represented as a search tree structure by the search tree algorithm.
  • For details, please refer to the search tree structure shown in FIG. 3B .
  • the search is performed from the right branch of the search tree structure shown in FIG. 3B , and the number 1 in the matched node represents the index value. That is, the second index value is index value 1.
  • the search tree structure in the above step 303 may be sent by another device to the network device, or the network device may collect the entries that cannot be fitted by the neural network model into a residual parameter table, and then use the search tree algorithm to represent the entries in the residual parameter table as the search tree structure.
  • For the search tree structure, please refer to the related introduction later, which will not be repeated here.
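The residual-entry search described above can be sketched as a small binary trie that holds only the entries the model cannot fit and is searched by longest-prefix match. This is a minimal illustration; the class names and the 3-bit example values are assumptions, not the implementation of this application.

```python
class TrieNode:
    def __init__(self):
        self.children = {}  # bit ('0' or '1') -> child TrieNode
        self.index = None   # index value if a residual prefix ends here

class ResidualTrie:
    """Small search tree holding only the entries the model cannot fit."""
    def __init__(self):
        self.root = TrieNode()

    def insert(self, prefix_bits, index):
        node = self.root
        for b in prefix_bits:
            node = node.children.setdefault(b, TrieNode())
        node.index = index

    def lookup(self, addr_bits):
        """Longest-prefix match; returns (index, matched_length) or (None, 0)."""
        node, best, best_len = self.root, None, 0
        for depth, b in enumerate(addr_bits, start=1):
            node = node.children.get(b)
            if node is None:
                break
            if node.index is not None:
                best, best_len = node.index, depth
        return best, best_len

trie = ResidualTrie()
trie.insert("1", 1)             # residual entry of Table 6: prefix/mask 1/1 -> index 1
idx, mlen = trie.lookup("101")  # destination bits of the first packet
```

An address whose high-order bit is 0 matches no residual prefix and returns no result, which corresponds to the search tree returning nothing for that packet.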
  • the network device determines the first target index value from the first index value and the second index value.
  • the network device may determine the first target index value in multiple manners, and two possible implementation manners are shown below.
  • the network device determines the mask length corresponding to the first index value and the mask length corresponding to the second index value.
  • the network device may determine the mask length corresponding to the first index value and the mask length corresponding to the second index value through the following two possible implementation manners.
  • Implementation mode 1: the network device determines the mask length corresponding to the first index value and the mask length corresponding to the second index value from the error correction table.
  • the error correction table includes at least one entry, each entry corresponds to an index value, and each entry has corresponding address information.
  • the address information corresponding to each entry includes a prefix and a mask.
  • the number of entries included in the error correction table is the same as the number of entries included in the message rule information table, that is, the entries in the error correction table correspond one-to-one with the entries in the message rule information table according to the order of index values. That is, the index value corresponding to an entry in the error correction table is the same as the index value of the corresponding entry in the message rule information table.
  • the address information corresponding to the entry in the error correction table is the address information of the corresponding entry in the message rule information table.
  • the error correction table and the action information table may be logical tables; in actual storage, the content of the error correction table and the content of the action information table may be stored in one table, which is not specifically limited in this application.
  • Suppose the message rule information table is Table 6 above. Because the entries in the error correction table correspond one-to-one with the entries in the message rule information table in the order of index values, the address information corresponding to an entry in the error correction table is the address information of the corresponding entry in the message rule information table. Therefore, the error correction table can be expressed as the following Table 7.
  • Index value 3 corresponds to the fourth entry of Table 7, and index value 1 corresponds to the second entry of Table 7. It can be seen from Table 7 that the mask corresponding to the second entry is /1, that is, the mask length is 1, and the mask corresponding to the fourth entry is /2, that is, the mask length is 2.
  • Implementation mode 2: the network device obtains the mask length corresponding to the first index value output by the neural network model in the above step 302, and obtains the mask length corresponding to the second index value from the search tree structure in the above step 303.
  • If the mask length corresponding to the first index value is greater than the mask length corresponding to the second index value, the network device selects the first index value as the first target index value.
  • Otherwise, the network device selects the second index value as the first target index value.
  • For example, the mask corresponding to the second entry in Table 7 (corresponding to the first index value) is /1, that is, the mask length is 1, and the mask corresponding to the fourth entry in Table 7 (corresponding to the second index value) is /2, that is, the mask length is 2. Then the network device selects the second index value as the first target index value.
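Implementation mode 1's selection rule can be sketched as follows. This is a minimal illustration under the longest-prefix-match convention; on a tie the search-tree result (the second index value) is selected, consistent with taking index_lookup when the mask lengths are equal.

```python
def select_by_mask_length(first_index, first_mask_len, second_index, second_mask_len):
    # Longest-prefix-match semantics: the index whose entry has the longer
    # mask wins; a tie goes to the search-tree result (second_index).
    if first_mask_len > second_mask_len:
        return first_index
    return second_index

# Example from Table 7: the first index value 1 has mask /1,
# the second index value 3 has mask /2, so the second is selected.
target = select_by_mask_length(1, 1, 3, 2)
```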
  • the message rule information table is the FIB table.
  • the network device determines that the first index value corresponds to the fifth entry of the error correction table and the second index value corresponds to the sixth entry of the error correction table.
  • each entry in the error correction table also corresponds to a priority.
  • Each entry in the message rule information table corresponds to a priority.
  • a priority corresponding to each entry in the error correction table is the same as the priority of the corresponding entry in the message rule information table.
  • the network device determines the priority corresponding to the fifth entry and the priority corresponding to the sixth entry.
  • Suppose the packet rule information table is an ACL, and each entry in the packet rule information table has a corresponding priority. Because the entries in the error correction table correspond one-to-one with the entries in the message rule information table in the order of index values, each entry in the error correction table has a corresponding priority, and the priority corresponding to each entry in the error correction table is the same as the priority of the corresponding entry in the message rule information table. Therefore, the network device can determine the priority corresponding to the fifth entry and the priority corresponding to the sixth entry according to the error correction table.
  • the priority corresponding to each entry in the error correction table may be pre-configured by the user or determined by the network device, which is not specifically limited in this application.
  • If the priority corresponding to the fifth entry is higher than the priority corresponding to the sixth entry, the network device selects the first index value as the first target index value.
  • Otherwise, the network device selects the second index value as the first target index value.
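The priority-based selection of the ACL scenario can be sketched as below. The mapping from index value to priority is hypothetical; the application does not fix whether priorities are user-configured or determined by the network device.

```python
def select_by_priority(first_index, second_index, priorities):
    """Select the target index value by entry priority.
    `priorities` is a hypothetical mapping from index value to priority,
    where a larger number means a higher priority."""
    if priorities[first_index] > priorities[second_index]:
        return first_index
    return second_index

# Hypothetical priorities: the entry behind the second index value wins here.
priorities = {3: 10, 1: 20}
target = select_by_priority(3, 1, priorities)
```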
  • The execution order of step 302 and step 303 in the embodiment shown in FIG. 3A is not limited: step 302 may be executed first and then step 303; step 303 may be executed first and then step 302; or steps 302 and 303 may be executed in parallel. This is not specifically limited in this application.
  • the implementation manners in the above steps 304d to 304g are usually implemented in an access control scenario, that is, the packet rule information table is an ACL.
  • the model training and verification module in the network device performs model training based on the message rule information table to obtain a neural network model.
  • the model training and verification module represents the entries that cannot be fitted by the neural network model as the corresponding search tree structure according to the search tree algorithm.
  • the table entry search module in the network device receives the first message, and then inputs the address information of the first message into the neural network model and the search tree structure in parallel to obtain the first index value and the second index value respectively.
  • the result selection module of the network device selects the first target index value from the first index value and the second index value, and determines the action information corresponding to the first packet from the action information table according to the first target index value.
  • the packet processing module of the network device processes the first packet according to the action information corresponding to the first packet. It can be known from the above example shown in FIG. 3C that the above steps 302 and 303 can be executed in parallel.
  • the search tree structure in step 304 may be stored on-chip. Since the search tree structure corresponds only to the entries that cannot be fitted by the neural network model, the technical solution of the present application can compress the existing large-scale search tree structure, reduce the storage overhead of the search tree structure in the network device, and effectively improve the forwarding capacity.
  • Because the existing large-scale search tree structures cannot be fully stored on-chip, some search tree structures are stored off-chip. As a result, off-chip memory access is required when processing packets, resulting in a large number of off-chip memory accesses and consuming a certain amount of off-chip memory access bandwidth.
  • the neural network model and the search tree structure can be stored in the chip. In this way, when determining the index value, only on-chip memory access is required, and off-chip memory access is not required, thereby saving off-chip memory access bandwidth and effectively improving forwarding capacity.
  • the network device determines the action information corresponding to the first packet from the action information table according to the first target index value.
  • the network device processes the first packet according to the action information corresponding to the first packet.
  • the foregoing steps 305 to 306 are similar to the foregoing steps 203 to 204 in the embodiment shown in FIG. 2A .
  • FIG. 3A shows a solution in which the network device inputs the address information of the first packet into the neural network model and the search tree structure respectively to obtain the corresponding index values. It should be noted that if the network device inputs the address information of the first packet into the neural network model and the search tree structure respectively and neither returns a result, it means that the address information of the first packet does not match any prefix in the packet rule information table. If the network device inputs the address information of the first packet into the neural network model to obtain the first index value, and inputs the address information of the first packet into the search tree structure and no result is returned, then the network device uses the first index value as the finally obtained index value.
  • Similarly, if the neural network model returns no result and the search tree structure returns the second index value, the network device uses the second index value as the finally obtained index value.
  • The above logic can be expressed as Table 8 below.
  • <0,0> indicates that neither the neural network model nor the search tree structure returns a result, meaning that the address information of the first packet does not match any prefix in the packet rule information table.
  • <1,0> means that the neural network model outputs index_NN but the search tree structure does not return a result; then the network device takes index_NN as the finally obtained index value.
  • <0,1> means that the neural network model does not return a result and the search tree structure returns index_lookup; then the network device takes index_lookup as the finally obtained index value.
  • <1,1> indicates that the neural network model outputs index_NN and the search tree structure returns index_lookup. Then, when the mask length mask_NN of the entry corresponding to index_NN is greater than the mask length mask_lookup of the entry corresponding to index_lookup, the network device takes index_NN as the finally obtained index value. When mask_NN is less than or equal to mask_lookup, the network device takes index_lookup as the finally obtained index value.
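The four cases of Table 8 can be sketched as a single selection function; this is an illustrative rendering, with validity flags standing in for "returned a result".

```python
def final_index(nn_valid, index_nn, mask_nn, lookup_valid, index_lookup, mask_lookup):
    """Result selection following the four <NN, lookup> cases of Table 8.
    Returns None when neither path matched any prefix."""
    if not nn_valid and not lookup_valid:
        return None                      # <0,0>: no prefix matches
    if nn_valid and not lookup_valid:
        return index_nn                  # <1,0>: only the model answered
    if lookup_valid and not nn_valid:
        return index_lookup              # <0,1>: only the search tree answered
    # <1,1>: the longer mask wins; on a tie the search-tree result is taken
    return index_nn if mask_nn > mask_lookup else index_lookup
```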
  • the network device determines the first index value according to the address information of the first packet and the neural network model.
  • the network device determines the second index value according to the address information of the first packet and the search tree structure.
  • the network device determines the first target index value from the first index value and the second index value, and determines the action information corresponding to the first packet from the action information table according to the first target index value.
  • the network device processes the first packet according to the action information corresponding to the first packet.
  • Usually about 90% of the entries in the message rule information table can be fitted by the neural network model, and only a few entries cannot be fitted. Therefore, the network device only needs to store the search tree structure corresponding to the entries that cannot be fitted.
  • the technical solution of the present application can realize the compression of the existing large-scale search tree structure, reduce the storage overhead of the search tree structure in the network device, and effectively improve the forwarding capacity.
  • Compared with the method in which the network device determines the index value through a large-scale search tree structure, the method in which the network device determines the first index value according to the address information of the first packet and the neural network model is faster, and the search delay is smaller.
  • the network device may perform error correction on the first index value, so as to subsequently determine the action information corresponding to the first packet more accurately. The process is described below in conjunction with the embodiment shown in FIG. 4A .
  • FIG. 4A is a schematic diagram of another embodiment of a packet processing method according to an embodiment of the present application.
  • the message processing method includes:
  • the network device obtains a first packet.
  • the network device determines the first index value according to the address information of the first packet and the neural network model.
  • Steps 401 to 402 are similar to steps 201 to 202 in the embodiment shown in FIG. 2A . For details, please refer to the related introductions of steps 201 to 202, which will not be repeated here.
  • the network device determines the prefix and mask corresponding to the seventh entry from the error correction table.
  • the seventh entry includes: an entry corresponding to the first index value in the error correction table and an entry corresponding to an index value within a preset threshold range of the first index value.
  • the error correction table includes at least one entry, each entry corresponds to an index value, and each entry has corresponding address information.
  • the address information corresponding to each entry includes a prefix and a mask.
  • the number of entries included in the error correction table is the same as the number of entries included in the message rule information table, that is, the entries in the error correction table correspond one-to-one with the entries in the message rule information table according to the order of index values. That is, the index value corresponding to an entry in the error correction table is the same as the index value of the corresponding entry in the message rule information table.
  • the address information corresponding to the entry in the error correction table is the address information of the corresponding entry in the message rule information table.
  • the error correction table and the action information table may be logical tables; in actual storage, the content of the error correction table and the content of the action information table may be stored in one table, which is not specifically limited in this application.
  • Suppose the message rule information table is Table 1 above. Because the entries in the error correction table correspond one-to-one with the entries in the message rule information table in the order of index values, the address information corresponding to an entry in the error correction table is the address information of the corresponding entry in the message rule information table. Therefore, the error correction table can be expressed as the following Table 9.
  • the first index value is index value 3, which corresponds to the fourth entry in Table 9, and the preset threshold error bound is 2.
  • the seventh entry includes the second entry, the third entry, the fourth entry, the fifth entry and the sixth entry in Table 9 above.
  • the network device determines from Table 9 the prefixes and masks corresponding to the second entry, the third entry, the fourth entry, the fifth entry and the sixth entry respectively.
  • the preset threshold error bound may be configured by the user according to the configuration manual, or determined by the network device, which is not specifically limited in this application.
  • the size design of the preset threshold may consider at least one of the following factors: the number of entries in the message rule information table, the search delay requirement, and the model precision requirement.
  • the network device determines that the prefix corresponding to the eighth entry matches the destination address of the first packet, and the mask corresponding to the eighth entry is the mask with the largest length among the masks corresponding to the seventh entry.
  • the seventh entry includes the second entry, the third entry, the fourth entry, the fifth entry and the sixth entry in Table 9 above.
  • the first three high-order bits in the destination address of the first packet are 001, then from Table 9 above, it can be known that the prefix of the fourth entry is 001. Therefore, the network device determines that the prefix of the fourth entry matches the destination address of the first packet.
  • If only one entry in the seventh entry has a prefix matching the destination address of the first packet, the network device can directly select that entry as the eighth entry without comparing the mask lengths corresponding to the entries. If the prefixes of multiple entries in the seventh entry match the destination address of the first packet, the network device needs to further determine the mask lengths corresponding to the multiple entries, and determine the entry with the largest mask length as the eighth entry.
  • the network device determines a fourth index value corresponding to the eighth table entry.
  • the eighth entry is the fourth entry in the above Table 9, so the fourth index value is index value 3.
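Steps 403 to 405 can be sketched as below: only the entries within the preset threshold (error bound) of the predicted index are scanned, and the matching entry with the longest mask is returned. The table layout and contents are hypothetical stand-ins for Table 9, with row position equal to index value.

```python
def correct_index(predicted, error_bound, ec_table, dst_bits):
    """Error-correct a predicted index value.
    `ec_table` is a hypothetical error correction table laid out as a list of
    (prefix_bits, index_value) rows ordered by index value. Only the rows
    within +/- error_bound of the prediction are scanned; among those whose
    prefix matches the destination address, the longest mask wins."""
    lo = max(0, predicted - error_bound)
    hi = min(len(ec_table) - 1, predicted + error_bound)
    best_index, best_len = None, -1
    for prefix, index in ec_table[lo:hi + 1]:
        if dst_bits.startswith(prefix) and len(prefix) > best_len:
            best_index, best_len = index, len(prefix)
    return best_index

# Hypothetical Table 9-like contents; prefix "001" sits at index value 3.
ec_table = [("", 0), ("0", 1), ("00", 2), ("001", 3), ("01", 4), ("1", 5)]
fourth_index = correct_index(3, 2, ec_table, "00110")
```

With predicted index 3 and error bound 2, the scan covers the second through sixth entries, matching the example above.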
  • the model training and verification module in the network device is used for model training based on the message rule information table to obtain a neural network model.
  • the entry lookup module in the network device receives the first message, and inputs the address information of the first message into the neural network model to obtain the first index value. Then, the network device performs error correction on the first index value according to the error correction table to obtain a fourth index value.
  • the result selection module of the network device determines the action information corresponding to the first packet from the action information table according to the fourth index value. Then, the packet processing module of the network device processes the first packet according to the action information corresponding to the first packet.
  • the network device determines the action information corresponding to the first packet from the action information table according to the fourth index value.
  • The number of entries included in the message rule information table is the same as the number of entries included in the action information table, and the entries in the message rule information table correspond one-to-one with the entries in the action information table according to the order of index values. That is, the index value corresponding to an entry in the message rule information table is the same as the index value corresponding to the corresponding entry in the action information table.
  • the number of entries included in the error correction table is the same as the number of entries included in the message rule information table, and the entries included in the error correction table correspond one-to-one with the entries included in the message rule information table. That is, the index value corresponding to an entry in the error correction table is the same as the index value of the corresponding entry in the message rule information table. Therefore, the fourth index value can be understood as the index value corresponding to the corresponding entry in the action information table.
  • the network device processes the first packet according to the action information corresponding to the first packet.
  • Steps 406 to 407 are similar to steps 203 to 204 in the embodiment shown in FIG. 2A . For details, please refer to the related introductions of steps 203 to 204, which will not be repeated here.
  • the network device determines the first index value according to the address information of the first packet and the neural network model. Then, the network device performs error correction on the first index value according to the error correction table to obtain a fourth index value. The network device determines the action information corresponding to the first packet from the action information table according to the fourth index value, and processes the first packet according to the action information corresponding to the first packet. It can be seen from this that the network device does not need to store a large-scale search tree structure, thereby avoiding the storage overhead caused by the search tree structure.
  • Compared with the method in which the network device determines the action information corresponding to the first packet through a search tree structure, the method in which the network device determines the action information corresponding to the first packet by using the neural network model, the error correction table and the action information table is faster, and the search delay is small. Further, the network device performs error correction on the first index value according to the error correction table to obtain the fourth index value, and then determines the action information corresponding to the first packet. In this way, the action information corresponding to the first packet can be determined more accurately.
  • the neural network model in the network device may be stored on-chip, and the action information table may be stored off-chip.
  • the network device only needs to perform on-chip memory access when processing the first packet, and does not need to perform off-chip memory access, thereby saving off-chip memory access bandwidth and effectively improving forwarding capacity.
  • the technical scheme is as follows: the network device obtains the fifth index value according to the address information of the first message, the neural network model and the error correction table. Then, the network device determines the sixth index value according to the address information of the first packet and the search tree structure. The network device finally determines the action information corresponding to the first packet in combination with the fifth index value and the sixth index value.
  • FIG. 5A is a schematic diagram of another embodiment of a packet processing method according to an embodiment of the present application.
  • the message processing method includes:
  • the network device obtains a first packet.
  • the network device determines the first index value according to the address information of the first packet and the neural network model.
  • Steps 501 to 502 are similar to steps 201 to 202 in the embodiment shown in FIG. 2A . For details, please refer to the related introductions of steps 201 to 202, which will not be repeated here.
  • the network device determines the prefix and mask corresponding to the ninth entry from the error correction table.
  • the ninth entry includes an entry corresponding to the first index value in the error correction table and an entry corresponding to an index value within a preset threshold range of the first index value.
  • the network device determines that the prefix corresponding to the tenth entry matches the destination address of the first packet, and the mask corresponding to the tenth entry is the mask with the largest length among the masks corresponding to the ninth entry.
  • the network device determines the fifth index value corresponding to the tenth entry.
  • Steps 503 to 505 are similar to steps 403 to 405 in the embodiment shown in FIG. 4A . For details, please refer to the related introductions of steps 403 to 405, which will not be repeated here.
  • the network device determines a sixth index value according to the address information of the first packet and the search tree structure.
  • the network device determines a second target index value from the fifth index value and the sixth index value.
  • the network device determines the action information corresponding to the first packet from the action information table according to the second target index value.
  • the network device processes the first packet according to the action information corresponding to the first packet.
  • Steps 506 to 509 are similar to steps 303 to 306 in the embodiment shown in FIG. 3A . For details, please refer to the related introductions of steps 303 to 306 in the embodiment shown in FIG. 3A , which will not be repeated here.
  • the model training and verification module of the network device is used for model training based on the message rule information table to obtain a neural network model; according to the search tree algorithm, the table entries that cannot be fitted by the neural network model are represented as a corresponding search tree structure.
  • the entry lookup module receives the first message.
  • the table entry search module inputs the address information of the first message into the neural network model and the search tree structure in parallel to obtain the first index value and the sixth index value.
  • the table entry search module performs error correction on the first index value according to the error correction table to obtain a fifth index value.
  • the result selection module of the network device selects the second target index value from the fifth index value and the sixth index value, and determines the action information corresponding to the first packet from the action information table according to the second target index value.
  • the packet processing module of the network device processes the first packet according to the action information corresponding to the first packet.
  • the network device determines the first index value according to the address information of the first packet and the neural network model; the network device performs error correction on the first index value according to the error correction table to obtain the fifth index value.
  • the network device determines the sixth index value according to the address information of the first packet and the search tree structure. Then, the network device selects the second target index value from the fifth index value and the sixth index value, and determines the action information corresponding to the first packet from the action information table according to the second target index value. Then, the packet processing module in the network device processes the first packet according to the action information corresponding to the first packet.
  • the technical solution of the present application can realize the compression of the existing large-scale search tree structure, reduce the storage overhead of the search tree structure in the network device, and effectively improve the forwarding capacity.
  • Compared with the method of determining the index value through a large-scale search tree structure, the method in which the network device uses the neural network model, the error correction table and the action information table to determine the fifth index value is faster and has a smaller search delay.
  • the network device may further perform error correction on the first index value according to the error correction table to obtain the fifth index value. Then, the network device determines the action information corresponding to the first packet in combination with the fifth index value and the sixth index value. In this way, the action information determined for the first packet is better and more accurate, and the first packet is processed more accurately.
  • the network device may perform model training according to the message rule information table to obtain a neural network model.
  • the training process is described below with reference to FIG. 6A .
  • FIG. 6A is a schematic diagram of another embodiment of a packet processing method according to an embodiment of the present application.
  • the message processing method includes:
  • the network device determines a neural network structure.
  • the neural network structure may be a hierarchical structure or a non-hierarchical structure, which is not specifically limited in this application.
  • the following takes the neural network structure as the hierarchical structure as an example for description.
  • the neural network structure is a hierarchical structure.
  • the network device determines the height H, width W of the neural network structure, and the number of micromodels included in the neural network structure.
  • Each micromodel can be represented by a RELU activation function, so that the neural network model can be represented by a piecewise linear function.
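A micromodel represented by a RELU activation function is, as stated above, a piecewise linear function of its input. The following one-unit sketch is illustrative; the coefficient names w1, b1, w2, b2 are placeholders for the trained parameters, not notation from this application.

```python
def relu(z):
    # Rectified linear unit: the activation that makes each micromodel
    # a piecewise linear function of its input.
    return z if z > 0 else 0.0

def micromodel(x, w1, b1, w2, b2):
    """One-unit RELU micromodel mapping an address x to an index value.
    The coefficients w1, b1, w2, b2 are illustrative trained parameters."""
    return w2 * relu(w1 * x + b1) + b2
```

Training such a micromodel amounts to determining the values of these coefficients, which are the parameters of the micromodel.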
  • the width, height and number of micro-models of the neural network structure may be configured by the user according to the configuration manual, or may be configured by the network device, which will not be repeated here.
  • the height of the neural network structure and the number of micro-models can be set according to the scene requirements.
  • the width of the neural network structure can be smaller and the number of micromodels larger.
  • In this case, the storage overhead of the neural network model stored by the network device is relatively large, but the delay for the network device to perform retrieval through the neural network model is relatively small.
  • For example, data center networking.
  • the width of the neural network structure can be smaller and the number of micromodels is smaller.
  • In this case, the storage overhead of the neural network model stored by the network device is small, but the delay for the network device to perform retrieval through the neural network model is relatively large. For example, a wide area network scenario.
  • the width, height and number of micro-models of the neural network structure can be set according to actual needs.
  • the network device performs model training according to the message rule information table and the neural network structure to obtain a neural network model.
  • the network device may sample IP addresses in the entire internet protocol (IP) address space to obtain the first sampled IP address.
  • the network device determines the index value corresponding to the first sampled IP address according to the message rule information table shown in Table 1 above.
  • the network device trains the micromodel of the first stage (ie stage0) in the neural network structure according to the first sampled IP address and the index value corresponding to the first sampled IP address.
  • the first-level micromodel is Submodel_{0,0}(x).
  • the micromodel can be represented by a rectified linear unit (RELU) activation function, so the training process mainly obtains the specific values of the coefficients in the RELU activation function; these coefficient values can be called the parameters of the micromodel.
  • After the micromodels of the first stage converge, the network device obtains, according to the output of the first-stage micromodels, the range of IP addresses that each micromodel of the second stage (ie, stage1) is responsible for.
  • the network device samples IP addresses from the range of IP addresses that each micromodel of the second level is responsible for, and obtains the second sampled IP address.
  • the network device determines an index value corresponding to the second sampled IP address according to the message rule information table. Then, the network device trains each micromodel of the second level in the neural network structure according to the second sampled IP address and the index value corresponding to the second sampled IP address.
  • the training process for other levels of micromodels in the neural network structure is similar, and will not be described here.
  • the network device obtains the parameters of each micro-model through the above-mentioned training of the micro-models of all levels of the neural network model. Then, the network device determines the neural network model according to the parameters of each micromodel and the neural network structure.
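The staged training described above can be sketched as a small learned-index hierarchy. The sketch below is a minimal illustration, not the patent's actual training procedure: it assumes each micromodel is a simple least-squares linear fit (standing in for a trained RELU micromodel), a single stage-0 root, and one stage-1 level whose micromodels are each trained only on the IP range that the root routes to them:

```python
def fit_linear(xs, ys):
    # least-squares fit y ~ k*x + m; stands in for training one micromodel
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    k = cov / var if var else 0.0
    return k, my - k * mx

def route(root, n_idx, fanout, ip):
    # the stage-0 output decides which stage-1 micromodel handles the address
    pred = root[0] * ip + root[1]
    return min(fanout - 1, max(0, int(pred * fanout / n_idx)))

def train_hierarchy(samples, fanout):
    # samples: (ip_as_int, index_value) pairs sampled from the rule table
    root = fit_linear([s[0] for s in samples], [s[1] for s in samples])
    n_idx = max(s[1] for s in samples) + 1
    buckets = [[] for _ in range(fanout)]
    for ip, idx in samples:  # each stage-1 micromodel sees only its own range
        buckets[route(root, n_idx, fanout, ip)].append((ip, idx))
    leaves = [fit_linear([s[0] for s in b], [s[1] for s in b]) if b else (0.0, 0.0)
              for b in buckets]
    return {"root": root, "leaves": leaves, "n_idx": n_idx}

def predict(model, ip):
    leaf = route(model["root"], model["n_idx"], len(model["leaves"]), ip)
    k, m = model["leaves"][leaf]
    return int(round(k * ip + m))

# toy rule table: index value i covers addresses [10*i, 10*i + 9]
model = train_hierarchy([(ip, ip // 10) for ip in range(100)], fanout=4)
print(predict(model, 55))  # -> 5
```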
  • the above steps 601 to 602 may be executed by the control plane of the network device.
  • For example, they may be executed by the CPU of the network device, or by an AI chip integrated in the network device; this is not specifically limited in this application.
  • the network device may perform the above steps 601 to 602 to obtain the neural network model; or, when the entries in the message rule information table of the network device are updated on a large scale, the network device may perform the above process from step 601 to step 602 to obtain the latest neural network model.
  • As described in step 304a of the embodiment shown in FIG. 3A, if the neural network model also outputs the mask length corresponding to the index value, then when training the neural network model the network device should also use the mask length corresponding to the entry matching the sampled IP address as training data.
  • That is, the network device performs training with the sampled IP address, the index value corresponding to the sampled IP address, and the mask length corresponding to the sampled IP address, to obtain the neural network model.
  • the control plane of the network device sends the first message to the data plane of the network device.
  • the first message is used to deliver or update the neural network model to the data plane of the network device.
  • the first message includes a second packet.
  • the packet header of the second packet includes the neural network model enable bit, the height of the neural network model, the width of the neural network model, and the identifier of the micromodel in the neural network model.
  • the payload of the second message includes parameters of the neural network model.
  • the neural network model enable bit has a value of one.
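A hypothetical on-the-wire layout for this "second packet" can be sketched as follows. The field widths (one byte each for the enable bit, height, and width; two bytes for the micromodel identifier; 4-byte floats for parameters) are assumptions for illustration; the patent names the fields but not their sizes:

```python
import struct

HEADER_FMT = "!BBBH"  # NN_enable, H_NN, W_NN, Submodel_ID (assumed widths)

def build_second_packet(h_nn, w_nn, submodel_id, params):
    # enable bit takes the value one, as stated in the description
    header = struct.pack(HEADER_FMT, 1, h_nn, w_nn, submodel_id)
    payload = struct.pack("!%df" % len(params), *params)
    return header + payload

def parse_second_packet(pkt):
    hdr_len = struct.calcsize(HEADER_FMT)
    nn_enable, h_nn, w_nn, submodel_id = struct.unpack(HEADER_FMT, pkt[:hdr_len])
    n = (len(pkt) - hdr_len) // 4
    params = struct.unpack("!%df" % n, pkt[hdr_len:])
    return nn_enable, h_nn, w_nn, submodel_id, list(params)

# micromodel 0 with parameters standing in for a1, b1, c1, d1
pkt = build_second_packet(2, 8, 0, [0.5, -1.0, 2.0, 3.0])
print(parse_second_packet(pkt))
```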
  • the network device performs model training on the control plane to obtain a neural network model.
  • the control plane of the network device sends a first message to the data plane of the network device, where the first message includes the second packet.
  • FIG. 6C shows the format of the second packet.
  • the packet header of the second packet includes the neural network model enable bit NN_enable, the height H_NN of the neural network model, the width W_NN, and the identifier Submodel_ID of the micromodel in the neural network model.
  • the payload of the first message includes parameters of each micromodel.
  • Submodel_ID includes the Submodel IDs corresponding to Submodel_{0,0}(x), Submodel_{1,0}(x), Submodel_{1,1}(x), Submodel_{1,2}(x), Submodel_{1,3}(x), Submodel_{1,4}(x), Submodel_{1,5}(x), Submodel_{1,6}(x), and Submodel_{1,7}(x), respectively.
  • w1 is an integer greater than or equal to 7.
  • a1, b1, c1, and d1 may specifically be the parameters of the micromodel Submodel_{0,0}(x) in FIG. 6B.
  • a2, b2, c2, and d2 may specifically be the parameters of the micromodel Submodel_{1,0}(x) in FIG. 6B, and so on.
  • the first message may include multiple second packets, and the format of the packet header of each second packet is similar to the packet header shown in FIG. 6C; the difference is that the micromodel identifier included in the packet header of a second packet is the identifier corresponding to the parameters of the micromodel carried by the payload of that second packet.
  • the micromodel identifiers included in the packet header of the second packet shown in FIG. 6C are the micromodel identifiers corresponding to Submodel_{0,0}(x), Submodel_{1,0}(x), Submodel_{1,1}(x), Submodel_{1,2}(x), Submodel_{1,3}(x), Submodel_{1,4}(x), Submodel_{1,5}(x), Submodel_{1,6}(x), and Submodel_{1,7}(x), respectively.
  • w1 is an integer greater than or equal to 7.
  • the payload of the second packet includes the parameters of these micromodels.
  • the network device may determine the search tree structure. The process is described below in conjunction with the embodiment shown in FIG. 7A .
  • FIG. 7A is a schematic diagram of another embodiment of a packet processing method according to an embodiment of the present application.
  • the message processing method includes:
  • the network device determines entries in the message rule information table that cannot be fitted by the neural network model.
  • the network device represents, according to the search tree algorithm, entries that cannot be fitted by the neural network model, and obtains a search tree structure.
  • For example, the network device determines the entries that cannot be fitted by the neural network model, and forms them into a residual table.
  • the network device expresses the entries included in the residual table as a search tree structure according to the search tree algorithm.
  • For steps 701 to 702, reference may be made to the relevant introduction of step 303 in the aforementioned FIG. 3A; details are not repeated here.
  • the above steps 701 to 702 may be executed by the control plane of the network device.
  • For example, they may be executed by a central processing unit (CPU) of the network device, or by an artificial intelligence (AI) chip integrated in the network device; this is not specifically limited in this application.
  • the network device may perform the processes of the above steps 701 to 702 to obtain the search tree structure.
  • When the entries in the message rule information table are updated on a large scale, the network device can perform the above steps 701 to 702 to obtain the latest search tree structure.
  • For the relevant introduction of the large-scale update of the entries in the packet rule information table, please refer to the related introduction above.
  • the control plane of the network device sends the second message to the data plane of the network device.
  • the second message is used to deliver or update the search tree structure to the data plane of the network device.
  • the second message includes a third packet.
  • the message header of the third message includes the search tree structure enable bit, the type of the search tree structure, the identification of the start node to be updated in the search tree structure, and the identification of the end node to be updated in the search tree structure.
  • the payload of the third message includes the search tree structure.
  • the value of the search tree enable bit is one.
  • the network device determines the table items that cannot be fitted by the neural network model on the control plane, and uses the search tree algorithm to represent the table items that cannot be fitted by the neural network model to obtain a search tree structure.
  • the control plane of the network device sends a second message to the data plane of the network device, where the second message includes the third packet.
  • the packet header of the third packet includes the search tree enable bit Trie_enable.
  • Trie_enable takes the value 1, which is used to indicate that the third packet is used to deliver the search tree structure to the data plane of the network device.
  • the packet header of the third packet further includes the search tree type Trie_type, the identifier Node_s of the starting node to be updated, and the identifier Node_e of the ending node to be updated in the search tree structure.
  • the payload of the third packet includes the lookup tree structure Lookup Trie.
  • For example, the payload of the third packet includes two nodes and one edge; the edge is a right edge, and the index value on the node is 1.
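The structure carried in the third packet can be illustrated with a minimal binary trie. The sketch below is an assumption-laden illustration (left edge = bit 0, right edge = bit 1, index values stored on nodes) of how a data plane could perform a longest-prefix lookup over such a structure, including the two-node, one-right-edge example with index value 1:

```python
class TrieNode:
    def __init__(self, index=None):
        self.index = index          # index value stored on the node, if any
        self.child = [None, None]   # left edge = bit 0, right edge = bit 1

def insert(root, prefix_bits, index):
    node = root
    for b in prefix_bits:
        if node.child[b] is None:
            node.child[b] = TrieNode()
        node = node.child[b]
    node.index = index

def lookup(root, addr_bits):
    # longest-prefix match: remember the last index seen along the path
    node, best = root, root.index
    for b in addr_bits:
        node = node.child[b]
        if node is None:
            break
        if node.index is not None:
            best = node.index
    return best

# the example above: two nodes, one right edge, index value 1 on the node
root = TrieNode()
insert(root, [1], 1)
print(lookup(root, [1, 0, 1]))  # -> 1
print(lookup(root, [0, 0, 0]))  # -> None (no entry on the left)
```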
  • the second message may include multiple third packets, and the packet header of each third packet is The format is similar to the packet header shown in FIG. 7B , and the relevant information of the tree structure is searched through the payload of multiple third packets.
  • the control plane of the network device may determine the error correction table. Then, the control plane of the network device sends a third message to the data plane of the network device. The third message is used to deliver or update the error correction table to the data plane of the network device.
  • the third message includes a fourth packet.
  • the packet header of the fourth packet includes an error correction table enable bit, and the start position and end position of the entries to be updated in the error correction table; the error correction table enable bit takes the value of one; the payload of the fourth packet includes the prefixes and masks corresponding to the entries to be updated.
  • For the specific format of the fourth packet, refer to FIG. 8.
  • For example, the packet header of the fourth packet includes the start position of the entries to be updated, which is the first entry, and the end position of the entries to be updated, which is the third entry. That is to say, the fourth packet is used to deliver or update the prefixes and masks included in the first three entries in Table 7. The payload of the fourth packet then includes the prefix and mask of the first entry in Table 7, the prefix and mask of the second entry in Table 7, and the prefix and mask of the third entry in Table 7.
  • the third message may include multiple fourth packets, and the prefixes and masks of all entries in the error correction table are carried by the payloads of the multiple fourth packets.
  • the following describes the network device provided by the embodiment of the present application with reference to FIG. 9 and FIG. 10 .
  • FIG. 9 is a schematic structural diagram of a network device according to an embodiment of the present application.
  • the network device may be the network device described in the foregoing method embodiments, or may be a chip or component of the network device in the foregoing method embodiments.
  • the network device may be configured to perform some or all of the steps performed by the network device in the foregoing method embodiments.
  • the network device includes a transceiver module 901 and a processing module 902 .
  • the processing module 902 is configured to: determine the first index value according to the address information of the first message and the neural network model; determine the action information corresponding to the first message from the action information table according to the first index value, where the action information table includes at least one entry, and each entry corresponds to an index value and an action information; and process the first packet according to the action information corresponding to the first packet.
  • processing module 902 is further configured to:
  • the second index value is determined according to the address information of the first message and the search tree structure, and the search tree structure is the search tree structure corresponding to the entries in the message rule information table that cannot be fitted by the neural network model;
  • the processing module 902 is specifically used for:
  • the action information corresponding to the first packet is determined from the action information table according to the first target index value.
  • the neural network model is obtained by performing model training according to the message rule information table.
  • the message rule information table includes at least one entry, each entry corresponds to an index value and an action information, and the action information table is used to indicate each entry in the message rule information table corresponding action information.
  • the first entry in the message rule information table corresponds to the second entry in the action information table, and the first entry and the second entry respectively include one or more entries;
  • the index values corresponding to the first entry are respectively the same as the index values corresponding to the second entry, and the index values corresponding to the second entry include the first index value.
  • processing module 902 is specifically configured to:
  • a third index value is determined according to the address information of the first message and the neural network model, and the third index value is an index value corresponding to the third entry in the message rule information table;
  • the first index value corresponding to the fourth entry in the action information table is determined from the mapping table according to the third index value, where the fourth entry is the entry corresponding to the third entry in the action information table, and the mapping table includes message rule information The index value of the entry of the action information table corresponding to each entry in the table.
  • processing module 902 is specifically configured to:
  • if the mask length corresponding to the first index value is greater than the mask length corresponding to the second index value, the first index value is selected as the first target index value;
  • if the mask length corresponding to the first index value is smaller than the mask length corresponding to the second index value, the second index value is selected as the first target index value.
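The selection between the index value produced by the neural network model and the one produced by the search tree, using mask lengths under the longest-prefix-match rule described above, can be sketched as:

```python
def select_target_index(nn_result, trie_result):
    # nn_result / trie_result: (index_value, mask_length) pairs produced by
    # the neural network model and the residual search tree, respectively.
    # Longest-prefix-match rule: the candidate with the longer mask wins.
    idx1, mask1 = nn_result
    idx2, mask2 = trie_result
    return idx1 if mask1 > mask2 else idx2

print(select_target_index((7, 24), (9, 16)))  # a /24 beats a /16 -> 7
print(select_target_index((7, 8), (9, 16)))   # the /16 wins here -> 9
```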
  • processing module 902 is specifically configured to:
  • determine the fifth entry of the error correction table corresponding to the first index value and the sixth entry of the error correction table corresponding to the second index value; the error correction table includes at least one entry, and each entry corresponds to an index value and a priority; the entries in the error correction table correspond one-to-one, in order of index value, to the entries in the message rule information table; each entry in the message rule information table corresponds to a priority, and the priority corresponding to each entry in the error correction table is the same as the priority of the corresponding entry in the message rule information table;
  • if the priority corresponding to the fifth entry is higher than the priority corresponding to the sixth entry, the first index value is selected as the first target index value;
  • if the priority corresponding to the fifth entry is lower than the priority corresponding to the sixth entry, the second index value is selected as the first target index value.
  • processing module 902 is further configured to:
  • determine, from the error correction table, the prefix and mask corresponding to the seventh entry; the error correction table includes at least one entry, each entry corresponds to an index value, each entry has corresponding address information, and the address information corresponding to each entry includes a prefix and a mask;
  • the seventh entry includes an entry corresponding to the first index value in the error correction table and an entry corresponding to an index value within the preset threshold range of the first index value;
  • determine, from the seventh entry, an eighth entry whose prefix matches the destination address of the first packet, where the mask corresponding to the eighth entry is the mask with the largest mask length among the masks corresponding to the seventh entry, and determine the fourth index value corresponding to the eighth entry;
  • the processing module 902 is specifically used for:
  • the action information corresponding to the first packet is determined from the action information table according to the fourth index value.
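The error-correction step above can be sketched as follows. This is a hypothetical illustration: it assumes the error correction table is a list indexed by index value, each entry holding a (prefix, mask length) pair, and that the preset threshold is ±2 around the first index value:

```python
def matches(dst, prefix, mask_len, width=32):
    # an address matches an entry when their top mask_len bits agree
    if mask_len == 0:
        return True
    shift = width - mask_len
    return (dst >> shift) == (prefix >> shift)

def correct_index(ect, first_index, dst, threshold=2):
    # ect: error correction table, one (prefix, mask_length) pair per index
    lo = max(0, first_index - threshold)
    hi = min(len(ect) - 1, first_index + threshold)
    best = None
    for idx in range(lo, hi + 1):
        prefix, mlen = ect[idx]
        if matches(dst, prefix, mlen) and (best is None or mlen > ect[best][1]):
            best = idx  # keep the matching entry with the largest mask length
    return best

# 10.0.0.0/8 at index 3 and 10.1.0.0/16 at index 4, neighbours of a predicted 3
ect = [(0, 0), (0, 0), (0, 0), (0x0A000000, 8), (0x0A010000, 16)]
dst = 0x0A010203  # 10.1.2.3
print(correct_index(ect, 3, dst))  # both match, the /16 is longer -> 4
```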
  • processing module 902 is further configured to:
  • determine, from the error correction table, the prefix and mask corresponding to the ninth entry; the error correction table includes at least one entry, each entry corresponds to an index value, each entry has corresponding address information, and the address information corresponding to each entry includes a prefix and a mask;
  • the ninth entry includes the entry corresponding to the first index value in the error correction table and the entries corresponding to index values within the preset threshold range of the first index value; determine, from the ninth entry, a tenth entry whose prefix matches the destination address of the first packet, where the mask corresponding to the tenth entry is the mask with the largest mask length among the masks corresponding to the ninth entry, and determine the fifth index value corresponding to the tenth entry;
  • the sixth index value is determined according to the address information of the first message and the search tree structure;
  • the search tree structure is the search tree structure corresponding to the entries in the message rule information table that cannot be fitted by the neural network model; determine the second target index value from the fifth index value and the sixth index value;
  • the processing module is specifically used for:
  • the action information corresponding to the first packet is determined from the action information table according to the second target index value.
  • the action information corresponding to the first packet includes port information; the processing module 902 is specifically configured to: determine the next-hop routing node of the first packet according to the port information, and forward the first packet to the next-hop routing node.
  • processing module 902 is further configured to:
  • the message rule information table includes at least one entry, each entry corresponds to an index value and an action information, and the action information table is used to indicate the message Action information corresponding to each entry in the rule information table.
  • the control plane of the network device sends a first message to the data plane of the network device, and the first message is used to deliver or update the neural network model to the data plane of the network device.
  • the first message includes the second packet;
  • the packet header of the second packet includes the neural network model enable bit, the height of the neural network model, the width of the neural network model, and the identifiers of the micromodels included in the neural network model; the neural network model enable bit takes the value of one;
  • the payload of the second packet includes the parameters of the neural network model.
  • processing module 902 is further configured to:
  • the message rule information table includes at least one table entry, each table entry corresponds to an index value and an action information, and the action information table is used to indicate the message Action information corresponding to each entry in the rule information table;
  • according to the search tree algorithm, the entries that cannot be fitted by the neural network model are represented, and the search tree structure is obtained.
  • the control plane of the network device sends a second message to the data plane of the network device, and the second message is used to deliver or update the search tree structure to the data plane of the network device.
  • the second message includes a third packet;
  • the packet header of the third packet includes the search tree enable bit, the type of the search tree structure, the identifier of the start node to be updated in the search tree structure, and the identifier of the end node to be updated in the search tree structure; the value of the search tree enable bit is one;
  • the payload of the third packet includes the search tree structure.
  • the control plane of the network device sends a third message to the data plane of the network device, and the third message is used to deliver or update the error correction table to the data plane of the network device.
  • the third message includes a fourth packet
  • the packet header of the fourth packet includes the error correction table enable bit, and the start position and end position of the entries to be updated in the error correction table; the value of the error correction table enable bit is one;
  • the payload of the fourth packet includes the prefix and mask corresponding to the table entry to be updated.
  • the transceiver module 901 is used to obtain the first message; the processing module 902 is used to determine the first index value according to the address information of the first message and the neural network model;
  • determine the action information corresponding to the first message from the action information table according to the first index value, where the action information table includes at least one entry, and each entry corresponds to an index value and an action information; and process the first message according to the action information corresponding to the first message. Therefore, the network device does not need to store a large-scale search tree structure, thereby avoiding the storage overhead caused by the search tree structure.
  • Moreover, compared with determining the action information through a search tree structure, the method in which the network device uses the neural network model and the action information table to determine the action information corresponding to the first packet is faster, and the lookup delay is smaller.
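Putting the pieces together, the data-plane processing path described above (model lookup, action-information-table lookup, then handling the packet by its port information) can be sketched as follows; the toy "model" and two-entry table are stand-ins for illustration only:

```python
def process_packet(dst_ip, nn_model, action_table):
    # 1) the neural network model maps address information to an index value
    first_index = nn_model(dst_ip)
    # 2) the index value selects an entry of the action information table
    action = action_table[first_index]
    # 3) the packet is handled according to the action information (here the
    #    outgoing port that identifies the next-hop routing node)
    return action["port"]

# toy stand-ins: a "model" that buckets addresses, and a two-entry table
nn_model = lambda ip: 0 if ip < 0x80000000 else 1
action_table = [{"port": "eth0"}, {"port": "eth1"}]
print(process_packet(0x0A000001, nn_model, action_table))  # -> eth0
print(process_packet(0xC0A80001, nn_model, action_table))  # -> eth1
```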
  • the present application also provides a network device.
  • FIG. 10 is another schematic structural diagram of the network device according to an embodiment of the present application.
  • the network device may be used to perform the steps performed by the network device in the embodiments shown in FIG. 2A, FIG. 3A, FIG. 4A, FIG. 5A, FIG. 6A, and FIG. 7A.
  • the network device includes a processor 1001 and a memory 1002 .
  • the network device further includes a transceiver 1003 .
  • the processor 1001, the memory 1002, and the transceiver 1003 are respectively connected through a bus, and the memory stores computer instructions.
  • the transceiver module 901 in the foregoing embodiment may specifically be the transceiver 1003 in this embodiment, and thus the specific implementation of the transceiver 1003 will not be described again.
  • the processing module 902 in the foregoing embodiment may specifically be the processor 1001 in this embodiment, so the specific implementation of the processor 1001 will not be described again.
  • Embodiments of the present application also provide a computer program product including instructions which, when run on a computer, enable the computer to execute the message processing methods of the embodiments shown in FIG. 2A, FIG. 3A, FIG. 4A, FIG. 5A, FIG. 6A, and FIG. 7A.
  • Embodiments of the present application further provide a computer-readable storage medium including computer instructions. When the computer instructions are run on a computer, the computer can execute the message processing methods of the embodiments shown in FIG. 2A, FIG. 3A, FIG. 4A, FIG. 5A, FIG. 6A, and FIG. 7A.
  • An embodiment of the present application further provides a chip device, including a processor, which is connected to a memory and calls a program stored in the memory, so that the processor executes the packet processing methods of the embodiments shown in FIG. 2A, FIG. 3A, FIG. 4A, FIG. 5A, FIG. 6A, and FIG. 7A.
  • the processor mentioned in any of the above may be a general-purpose central processing unit, a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the program execution of the message processing methods of the embodiments shown in FIG. 2A, FIG. 3A, FIG. 4A, FIG. 5A, FIG. 6A, and FIG. 7A.
  • the memory mentioned in any one of the above can be read-only memory (ROM) or other types of static storage devices that can store static information and instructions, random access memory (random access memory, RAM), and the like.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation, there may be other division methods.
  • For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • In essence, the technical solutions of the present application, or the parts that contribute to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.


Abstract

Embodiments of the present application disclose a packet processing method and a network device, which are used to reduce storage overhead and increase the speed of determining the action information corresponding to a first packet. The method of the embodiments of the present application includes: a network device obtains a first packet; the network device determines a first index value according to address information of the first packet and a neural network model; the network device determines, from an action information table according to the first index value, the action information corresponding to the first packet, where the action information table includes at least one entry, and each entry corresponds to an index value and an action information; and the network device processes the first packet according to the action information corresponding to the first packet.

Description

Packet processing method and network device
This application claims priority to Chinese Patent Application No. 202110309197.7, filed with the Chinese Patent Office on March 23, 2021 and entitled "Packet processing method and network device", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to network technologies, and in particular, to a packet processing method and a network device.
Background
Table lookup is one of the most important core functions of a network, and efficient table lookup can effectively improve the packet processing efficiency of the network. Taking a forwarding information base (FIB) table as an example, the FIB table includes FIB entries. At present, FIB entry lookup algorithms mainly use prefix tree lookup algorithms. For a given FIB table, a search tree structure can be built from the prefix included in each entry of the FIB table. The network device stores the search tree structure and implements the table lookup function through it. The amount of storage used in the network device for the search tree structure is mainly determined by the size of the search tree structure, and the lookup speed of the network device when performing lookups through the search tree structure is mainly determined by the height of the search tree structure.
As the number of FIB entries included in the FIB table keeps increasing, the search tree structure used in the network device to look up FIB entries occupies more and more storage in the network device, resulting in high storage overhead.
Summary of the Invention
Embodiments of the present application provide a packet processing method and a network device, which are used to reduce storage overhead and increase the speed of determining the action information corresponding to a first packet, thereby reducing the packet processing delay.
A first aspect of the embodiments of the present application provides a packet processing method, which includes:
A network device obtains a first packet; then the network device determines a first index value according to address information of the first packet and a neural network model; the network device determines, from an action information table according to the first index value, the action information corresponding to the first packet, where the action information table includes at least one entry, and each entry corresponds to an index value and an action information; and the network device processes the first packet according to the action information corresponding to the first packet.
In this embodiment, the network device determines the first index value by means of the address information of the first packet and the neural network model; then the network device determines, from the action information table according to the first index value, the action information corresponding to the first packet, and processes the first packet accordingly. The network device does not need to store a large-scale search tree structure, thereby avoiding the storage overhead caused by the search tree structure. Moreover, compared with determining the action information corresponding to the first packet through a search tree structure, determining it using the neural network model and the action information table is faster, and the lookup delay is smaller.
In a possible implementation, the method further includes: the network device determines a second index value according to the address information of the first packet and a search tree structure, where the search tree structure is the search tree structure corresponding to the entries in the packet rule information table that cannot be fitted by the neural network model; the network device determines a first target index value from the first index value and the second index value; and the determining, by the network device from the action information table according to the first index value, of the action information corresponding to the first packet includes: the network device determines the action information corresponding to the first packet from the action information table according to the first target index value.
In this possible implementation, usually 90% of the entries in the packet rule information table can be fitted by the neural network model, and only a few entries cannot be fitted, so the network device only needs to store the search tree structure corresponding to the entries that cannot be fitted by the neural network model. The technical solution of this implementation can compress the existing large-scale search tree structure, reduce the storage overhead of the search tree structure in the network device, and effectively increase the forwarding capacity. Moreover, compared with determining an index value through a large-scale search tree structure, determining the first index value according to the address information of the first packet and the neural network model is faster, and the lookup delay is smaller.
In another possible implementation, the neural network model is obtained by performing model training according to the packet rule information table.
In this possible implementation, the neural network model is obtained by performing model training according to the packet rule information table, so that the network device can determine an index value through the neural network model without storing the search tree structure, thereby reducing the storage overhead.
In another possible implementation, the packet rule information table includes at least one entry, each entry corresponds to an index value and an action information, and the action information table is used to indicate the action information corresponding to each entry in the packet rule information table.
This possible implementation shows the association between the packet rule information table and the action information table.
In another possible implementation, a first entry in the packet rule information table corresponds to a second entry in the action information table, and the first entry and the second entry each include one or more entries; the index values corresponding to the first entry are respectively the same as the index values corresponding to the second entry, and the index values corresponding to the second entry include the first index value.
In this possible implementation, the first entry in the packet rule information table corresponds to the second entry in the action information table, the first entry and the second entry each include one or more entries, and the index values corresponding to the first entry are respectively the same as those corresponding to the second entry. In this way, no conversion between index values is needed, which avoids the corresponding processing overhead and the overhead of storing the conversion relationship.
In another possible implementation, the determining, by the network device, of the first index value according to the address information of the first packet and the neural network model includes: the network device determines a third index value according to the address information of the first packet and the neural network model, where the third index value is the index value corresponding to a third entry in the packet rule information table; the network device determines, from a mapping table according to the third index value, the first index value corresponding to a fourth entry of the action information table, where the fourth entry is the entry in the action information table corresponding to the third entry, and the mapping table includes the index values of the action information table entries corresponding to each entry in the packet rule information table.
In this possible implementation, the network device determines the third index value according to the address information of the first packet and the neural network model, and then determines, from the mapping table according to the third index value, the first index value corresponding to the fourth entry of the action information table, thereby determining the action information corresponding to the first packet.
In another possible implementation, the determining, by the network device, of the first target index value from the first index value and the second index value includes: the network device determines the mask length corresponding to the first index value and the mask length corresponding to the second index value; if the mask length corresponding to the first index value is greater than the mask length corresponding to the second index value, the network device selects the first index value as the first target index value; if the mask length corresponding to the first index value is smaller than the mask length corresponding to the second index value, the network device selects the second index value as the first target index value.
This possible implementation provides a specific way of selecting the first target index value. The first target index value is selected by the mask length corresponding to each index value, so that it is selected according to the longest prefix match principle, and the action information corresponding to the first packet is determined more accurately.
In another possible implementation, the determining, by the network device, of the first target index value from the first index value and the second index value includes: the network device determines that the first index value corresponds to a fifth entry of an error correction table and the second index value corresponds to a sixth entry of the error correction table; the error correction table includes at least one entry, each entry corresponds to an index value and a priority, the entries in the error correction table correspond one-to-one, in order of index value, to the entries in the packet rule information table, each entry in the packet rule information table corresponds to a priority, and the priority corresponding to each entry in the error correction table is the same as the priority of the corresponding entry in the packet rule information table; the network device determines, according to the error correction table, the priority corresponding to the fifth entry and the priority corresponding to the sixth entry; if the priority corresponding to the fifth entry is higher than the priority corresponding to the sixth entry, the network device selects the first index value as the first target index value; if the priority corresponding to the fifth entry is lower than the priority corresponding to the sixth entry, the network device selects the second index value as the first target index value.
This possible implementation provides another possible way for the network device to determine the first target index value. The first target index value is determined by the priority of the entry corresponding to each index value, so that the action information corresponding to the first packet can be determined accurately.
In another possible implementation, the method further includes: the network device determines, from the error correction table, the prefixes and masks corresponding to a seventh entry; the error correction table includes at least one entry, each entry corresponds to an index value, each entry has corresponding address information, and the address information corresponding to each entry includes a prefix and a mask; the seventh entry includes the entry corresponding to the first index value in the error correction table and the entries corresponding to index values within a preset threshold range of the first index value; the network device determines, from the seventh entry, that the prefix corresponding to an eighth entry matches the destination address of the first packet, where the mask corresponding to the eighth entry is the mask with the largest mask length among the masks corresponding to the seventh entry; the network device determines a fourth index value corresponding to the eighth entry; and the determining, by the network device from the action information table according to the first index value, of the action information corresponding to the first packet includes: the network device determines the action information corresponding to the first packet from the action information table according to the fourth index value.
In this possible implementation, the network device determines the first index value according to the address information of the first packet and the neural network model, then corrects the first index value according to the error correction table to obtain the fourth index value, determines the action information corresponding to the first packet from the action information table according to the fourth index value, and processes the first packet according to that action information. Thus, the network device does not need to store a large-scale search tree structure, avoiding the storage overhead it brings. Compared with determining the action information corresponding to the first packet through a search tree structure, determining it using the neural network model, the error correction table, and the action information table is faster, and the lookup delay is smaller. Further, the network device corrects the first index value according to the error correction table to obtain the fourth index value before determining the action information corresponding to the first packet, so that this action information can be determined more accurately.
另一种可能的实现方式中,该方法还包括:网络设备从纠错表确定第九表项对应的前缀和掩码;纠错表包括至少一个表项,每个表项对应一个索引值,每个表项有对应的地址信息,每个表项对应的地址信息包括前缀和掩码;第九表项包括在纠错表中第一索引值对应的表项和在第一索引值的预设阈值范围内的索引值对应的表项;网络设备从第九表项中确定第十表项对应的前缀与所述第一报文的目的地址匹配,第十表项对应的掩码为第九表项对应的掩码中掩码长度最大的掩码;网络设备确定所述第十表项对应的第五索引值;网络设备根据第一报文的地址信息和查找树结构确定第六索引值;查找树结构为报文规则信息表中通过神经网络模型无法拟合的表项对应的查找树结构;网络设备从第五索引值和第六索引值确定第二目标索引值;网络设备根据第一索引值从动作信息表确定第一报文对应 的动作信息,包括:网络设备根据第二目标索引值从动作信息表确定第一报文对应的动作信息。
在该可能的实现方式中,由于查找树结构是通过神经网络模型无法拟合的表项对应的查找树结构,因此该实现方式的技术方案可以实现已有的大规模查找树结构的压缩,减少网络设备中查找树结构的存储开销,有效提升转发容量。相比于网络设备通过大规模的查找树结构确定索引值的方式,网络设备采用神经网络模型、纠错表和动作信息表确定索引值的方式更为快速,查找时延较小。并且,网络设备可以进一步根据纠错表对第一索引值进行纠错,得到第五索引值。然后,网络设备再结合第五索引值和第六索引值确定第一报文对应的动作信息。这样可以更精准地为第一报文确定对应的动作信息。
另一种可能的实现方式中,第一报文对应的动作信息包括端口信息;网络设备根据第一报文对应的动作信息对第一报文进行处理,包括:网络设备根据端口信息确定第一报文的下一跳路由节点;网络设备将第一报文转发到下一跳路由节点。
该实现方式提供本申请应用于转发场景的具体过程,网络设备通过确定端口信息,再根据端口信息对第一报文进行转发。
另一种可能的实现方式中,网络设备根据第一报文的地址信息和神经网络模型确定第一索引值之前,方法还包括:网络设备确定神经网络结构;网络设备根据报文规则信息表和神经网络结构进行训练,得到神经网络模型,报文规则信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息,动作信息表用于指示报文规则信息表中每个表项对应的动作信息。
在该可能的实现方式中,提供了网络设备训练神经网络模型的过程,为方案的实施提供基础。网络设备通过报文规则信息表和神经网络结构进行模型训练。这样后续网络设备可以通过报文的地址信息和神经网络模型确定索引值,网络设备无需保存该报文规则信息表以及相应的查找树结构,从而避免相应的存储开销。
另一种可能的实现方式中,方法还包括:网络设备的控制面向网络设备的数据面发送第一消息,第一消息用于向网络设备的数据面下发或更新神经网络模型。
在该可能的实现方式中,网络设备的控制面训练得到神经网络模型后,可以向网络设备的数据面下发神经网络模型。这样网络设备的数据面接收到报文之后,可以通过该神经网络模型和报文的地址信息确定对应的索引值。
另一种可能的实现方式中,第一消息包括第二报文;第二报文的报文头部包括神经网络模型使能位、神经网络模型的高度、神经网络模型的宽度、神经网络模型包括的微模型标识,神经网络模型使能位取值为一;第二报文的载荷包括神经网络模型的参数。
在该可能的实现方式中,网络设备的控制面可以通过报文的方式向网络设备的数据面下发神经网络模型的相关参数。该实现方式提供了报文承载神经网络模型的相关参数的具体格式。
另一种可能的实现方式中,方法还包括:网络设备确定报文规则信息表中通过神经网络模型无法拟合的表项,报文规则信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息,动作信息表用于指示报文规则信息表中每个表项对应的动作信息;网络设备根据查找树算法表示通过神经网络模型无法拟合的表项,得到查找树结构。
在该可能的实现方式中,对于通过神经网络模型无法拟合的表项,网络设备可以通过查找树算法表示通过神经网络模型无法拟合的表项,得到查找树结构。通常报文规则信息表中90%的表项可以通过神经网络模型拟合,报文规则信息表中无法拟合的表项较少,因此网络设备只需要存储通过神经网络模型无法拟合的表项对应的查找树结构。因此该实现方式的技术方案实现已有的大规模查找树结构的压缩,减少网络设备中查找树结构的存储开销,有效提升转发容量。并且,网络设备结合神经网络模型的方式和查找树结构的方式共同确定最终的目标索引值。这样可以更精准地确定报文对应的动作信息。
另一种可能的实现方式中,方法还包括:网络设备的控制面向网络设备的数据面发送第二消息,第二消息用于向网络设备的数据面下发或更新查找树结构。
在该可能的实现方式中,网络设备的控制面确定查找树结构后,可以向网络设备的数据面下发查找树结构。这样网络设备的数据面接收到报文之后,可以通过该查找树结构和报文的地址信息确定对应的索引值。
另一种可能的实现方式中,第二消息包括第三报文;第三报文的报文头部包括查找树使能位、查找树结构的类型、查找树结构中的待更新起始节点标识、以及查找树结构中的待更新终止节点标识,查找树使能位的取值为一;第三报文的载荷包括查找树结构。
在该可能的实现方式中,网络设备的控制面可以通过报文的方式向网络设备的数据面下发查找树结构。该实现方式提供了报文承载查找树结构的具体格式。
另一种可能的实现方式中,方法还包括:网络设备的控制面向网络设备的数据面发送第三消息,第三消息用于向网络设备的数据面下发或更新所述纠错表。
在该可能的实现方式中,网络设备的控制面向网络设备的数据面下发纠错表。这样网络设备的数据面可以对通过神经网络模型得到的索引值进行纠错,得到最终的索引值。从而实现更精准地为报文确定对应的动作信息。
另一种可能的实现方式中,第三消息包括第四报文,第四报文的报文头部包括纠错表使能位、纠错表中待更新表项的起始位置和终止位置,纠错表使能位的取值为一;第四报文的载荷包括待更新表项对应的前缀和掩码。
在该可能的实现方式中,网络设备的控制面可以通过报文的方式向网络设备的数据面下发纠错表。该实现方式提供了报文承载纠错表的具体格式。
本申请实施例第二方面提供一种网络设备,该网络设备包括:
收发模块,用于获取第一报文;
处理模块,用于根据第一报文的地址信息和神经网络模型确定第一索引值;根据第一索引值从动作信息表确定第一报文对应的动作信息,动作信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息;根据第一报文对应的动作信息对第一报文进行处理。
一种可能的实现方式中,处理模块还用于:
根据第一报文的地址信息和查找树结构确定第二索引值,查找树结构为报文规则信息表中通过神经网络模型无法拟合的表项对应的查找树结构;
从第一索引值和第二索引值确定第一目标索引值;
处理模块具体用于:
根据第一目标索引值从动作信息表确定第一报文对应的动作信息。
另一种可能的实现方式中,神经网络模型是根据报文规则信息表进行模型训练得到的。
另一种可能的实现方式中,报文规则信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息,动作信息表用于指示报文规则信息表中的每个表项对应的动作信息。
另一种可能的实现方式中,报文规则信息表中的第一表项与动作信息表中的第二表项对应,第一表项和第二表项分别包括一个或多个表项;第一表项对应的索引值分别与第二表项对应的索引值相同,第二表项对应的索引值包括第一索引值。
另一种可能的实现方式中,处理模块具体用于:
根据第一报文的地址信息和神经网络模型确定第三索引值,第三索引值为报文规则信息表中的第三表项对应的索引值;
根据第三索引值从映射表确定动作信息表的第四表项对应的第一索引值,第四表项为动作信息表中与第三表项对应的表项,映射表包括报文规则信息表中每个表项对应的动作信息表的表项的索引值。
另一种可能的实现方式中,处理模块具体用于:
确定第一索引值对应的掩码长度和第二索引值对应的掩码长度;
若第一索引值对应的掩码长度大于第二索引值对应的掩码长度,则选择第一索引值作为第一目标索引值;
若第一索引值对应的掩码长度小于第二索引值对应的掩码长度,则选择第二索引值作为第一目标索引值。
另一种可能的实现方式中,处理模块具体用于:
确定第一索引值对应纠错表的第五表项和第二索引值对应纠错表的第六表项;纠错表包括至少一个表项,每个表项对应一个索引值和一个优先级,纠错表中的表项按照索引值的大小顺序与报文规则信息表中的表项一一对应,报文规则信息表中每个表项对应一个优先级,纠错表中每个表项对应的一个优先级与报文规则信息表中对应的表项的优先级相同;
根据纠错表确定第五表项对应的优先级和第六表项对应的优先级;
若第五表项对应的优先级高于第六表项对应的优先级,则选择第一索引值作为第一目标索引值;
若第五表项对应的优先级低于第六表项对应的优先级,则选择第二索引值作为第一目标索引值。
另一种可能的实现方式中,处理模块还用于:
从纠错表确定第七表项对应的前缀和掩码;纠错表包括至少一个表项,每个表项对应一个索引值,每个表项有对应的地址信息,每个表项对应的地址信息包括前缀和掩码;第七表项包括在纠错表中第一索引值对应的表项和在第一索引值的预设阈值范围内的索引值对应的表项;
从第七表项中确定第八表项对应的前缀与第一报文的目的地址匹配,第八表项对应的掩码为第七表项对应的掩码中掩码长度最大的掩码;
确定第八表项对应的第四索引值;
处理模块具体用于:
根据第四索引值从动作信息表确定第一报文对应的动作信息。
另一种可能的实现方式中,处理模块还用于:
从纠错表确定第九表项对应的前缀和掩码;纠错表包括至少一个表项,每个表项对应一个索引值,每个表项有对应的地址信息,每个表项对应的地址信息包括前缀和掩码;第九表项包括在纠错表中第一索引值对应的表项和在第一索引值的预设阈值范围内的索引值对应的表项;
从第九表项中确定第十表项对应的前缀与所述第一报文的目的地址匹配,第十表项对应的掩码为第九表项对应的掩码中掩码长度最大的掩码;
确定所述第十表项对应的第五索引值;
根据第一报文的地址信息和查找树结构确定第六索引值;查找树结构为报文规则信息表中通过神经网络模型无法拟合的表项对应的查找树结构;
从第五索引值和第六索引值确定第二目标索引值;
处理模块具体用于:
根据第二目标索引值从动作信息表确定第一报文对应的动作信息。
另一种可能的实现方式中,第一报文对应的动作信息包括端口信息;处理模块具体用于:
根据端口信息确定第一报文的下一跳路由节点;
将第一报文转发到下一跳路由节点。
另一种可能的实现方式中,处理模块还用于:
确定神经网络结构;
根据报文规则信息表和神经网络结构进行训练,得到神经网络模型,报文规则信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息,动作信息表用于指示报文规则信息表中每个表项对应的动作信息。
另一种可能的实现方式中,网络设备的控制面向网络设备的数据面发送第一消息,第一消息用于向网络设备的数据面下发或更新神经网络模型。
另一种可能的实现方式中,第一消息包括第二报文;第二报文的报文头部包括神经网络模型使能位、神经网络模型的高度、神经网络模型的宽度、神经网络模型包括的微模型标识,神经网络模型使能位取值为一;第二报文的载荷包括神经网络模型的参数。
另一种可能的实现方式中,处理模块还用于:
确定报文规则信息表中通过神经网络模型无法拟合的表项,报文规则信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息,动作信息表用于指示报文规则信息表中每个表项对应的动作信息;
根据查找树算法表示通过神经网络模型无法拟合的表项,得到查找树结构。
另一种可能的实现方式中,网络设备的控制面向网络设备的数据面发送第二消息,第二消息用于向网络设备的数据面下发或更新查找树结构。
另一种可能的实现方式中,第二消息包括第三报文;第三报文的报文头部包括查找树使能位、查找树结构的类型、查找树结构中的待更新起始节点标识、以及查找树结构中的待更新终止节点标识,查找树使能位的取值为一;第三报文的载荷包括查找树结构。
另一种可能的实现方式中,网络设备的控制面向网络设备的数据面发送第三消息,第三消息用于向网络设备的数据面下发或更新所述纠错表。
另一种可能的实现方式中,第三消息包括第四报文,第四报文的报文头部包括纠错表使能位、纠错表中待更新表项的起始位置和终止位置,纠错表使能位的取值为一;第四报文的载荷包括待更新表项对应的前缀和掩码。
本申请实施例第三方面提供一种网络设备,该网络设备包括处理器,用于实现上述第一方面任意可能的实现方式中描述的方法。
可选的,网络设备还可以包括存储器,所述存储器用于存储指令,所述处理器执行所述存储器中存储的指令时,可以实现上述第一方面任意可能的实现方式中描述的方法。
可选的,网络设备还包括通信接口,所述通信接口用于网络设备与其他设备进行通信。示例性的,通信接口可以是收发器、电路、总线、模块、管脚或其他类型的通信接口。
本申请实施例第四方面提供一种计算机可读存储介质,计算机可读存储介质中存储有计算机指令,当计算机指令在计算机上运行时,使得计算机执行上述第一方面中任意可能的实现方式中的方法。
本申请实施例第五方面提供一种计算机程序产品,所述计算机程序产品包括计算机程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行第一方面中任意可能的实现方式中的方法。
本申请实施例第七方面提供一种芯片,包括处理器。处理器用于执行上述第一方面中任意可能的实现方式中的方法。
可选地,所述芯片还包括存储器,存储器与处理器耦合。
进一步可选地,所述芯片还包括通信接口。
从以上技术方案可以看出,本申请实施例具有以下优点:
经由上述技术方案可知,网络设备获取第一报文;然后,网络设备根据第一报文的地址信息和神经网络模型确定第一索引值。网络设备根据第一索引值从动作信息表确定第一报文对应的动作信息;动作信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息;网络设备根据第一报文对应的动作信息对第一报文进行处理。因此,本申请通过神经网络模型和第一报文的地址信息确定第一索引值,并根据第一索引值从动作信息表中确定第一报文对应的动作信息,从而实现对第一报文进行处理。网络设备无需存储大规模查找树结构,避免查找树结构带来的存储开销。
附图说明
图1为本申请实施例网络设备的一个结构示意图;
图2A为本申请实施例报文处理方法的一个实施例示意图;
图2B为本申请实施例报文处理方法的一个场景示意图;
图2C为本申请实施例报文处理方法的另一个场景示意图;
图2D为本申请实施例网络设备中的神经网络模型、查找树结构和动作信息表的一个存储示意图;
图3A为本申请实施例报文处理方法的另一个实施例示意图;
图3B为本申请实施例查找树结构的一个示意图;
图3C为本申请实施例报文处理方法的另一个场景示意图;
图4A为本申请实施例报文处理方法的另一个实施例示意图;
图4B为本申请实施例报文处理方法的另一个场景示意图;
图5A为本申请实施例报文处理方法的另一个实施例示意图;
图5B为本申请实施例报文处理方法的另一个场景示意图;
图6A为本申请实施例报文处理方法的另一个实施例示意图;
图6B为本申请实施例神经网络模型的一个结构示意图;
图6C为本申请实施例第二报文的一个格式示意图;
图7A为本申请实施例报文处理方法的另一个实施例示意图;
图7B为本申请实施例第三报文的一个格式示意图;
图8为本申请实施例第四报文的一个格式示意图;
图9为本申请实施例网络设备的一个结构示意图;
图10为本申请实施例网络设备的另一个结构示意图。
具体实施方式
本申请实施例提供了一种报文处理方法和网络设备,用于减少存储开销,提高确定第一报文对应的动作信息的速度,从而降低报文处理的时延。
本申请的技术方案适用于各种类型的数据通信网络场景。例如,数据中心网络、广域网、局域网、城域网、移动通信网络等应用场景。本申请适用的数据通信网络系统包括至少一个网络设备。网络设备可以为路由器、交换机等。网络设备可以采用本申请提供的技术方案对网络设备接收到的报文进行处理。
下面对本申请涉及的一些技术术语进行解释。
1、动作信息:包括报文的处理操作指示,和/或,对报文执行处理操作所需的一些相关处理信息。例如,丢弃操作信息、端口信息。动作信息用于对报文进行处理。
2、报文规则信息表:包括至少一个表项,每个表项对应一个索引值、一个报文规则信息和一个动作信息。报文规则信息可以是报文的地址信息、端口信息、协议号信息等。报文规则信息表用于将报文的报文信息与报文规则信息表中的报文规则信息进行匹配以确定报文对应的动作信息或报文对应的动作信息的索引值。
3、动作信息表:包括至少一个表项,每个表项对应一个索引值和一个动作信息。动作信息表用于通过索引值确定报文对应的动作信息。
4、查找树结构:由节点和边构成,每个节点上存储相应的数值。该数值可以代表索引值或动作信息。
5、神经网络模型:由神经元互相连接而形成的网络***。本申请中,根据报文规则信息表中的表项进行模型训练,得到该神经网络模型。
下面结合图1介绍本申请实施例提供的网络设备的一个结构示意图。
请参阅图1,图1为本申请实施例网络设备的一个结构示意图。在图1中,网络设备包括模型训练与验证模块101、表项查找模块102、结果选择模块103和报文处理模块104。
模型训练与验证模块101用于基于报文规则信息表进行模型训练,得到神经网络模型;向表项查找模块102发送该神经网络模型。
其中,报文规则信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息。每个表项有对应的地址信息。
本申请实施例中,报文规则信息表可以为转发信息库(forwarding information base,FIB)表、或者为访问控制列表(access control lists,ACL)、或者为防火墙策略表、或者为流表、或者媒体访问控制(media access control,MAC)地址表等,具体本申请不做限定。关于报文规则信息表的更多介绍请参阅后文图2A所示的实施例的相关介绍。
可选的,模型训练与验证模块101还用于确定神经网络模型无法拟合的表项,并通过查找树算法将通过神经网络模型无法拟合的表项表示为查找树结构;向表项查找模块102发送查找树结构。
表项查找模块102用于接收模型训练与验证模块101发送的神经网络模型;根据第一报文的地址信息和神经网络模型计算得到第一索引值。
结果选择模块103用于根据第一索引值从动作信息表中确定第一报文对应的动作信息。
其中,动作信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息。动作信息表用于指示报文规则信息表中每个表项对应的动作信息。
报文处理模块104用于根据第一报文对应的动作信息对第一报文进行处理。
可选的,表项查找模块102还用于根据第一报文的地址信息和查找树结构计算得到第二索引值。
其中,查找树结构是报文规则信息表中通过神经网络模型无法拟合的表项对应的查找树结构。
在该实现方式下,结果选择模块103还用于从第一索引值和第二索引值中确定第一目标索引值,并根据第一目标索引值从动作信息表确定第一报文对应的动作信息。而报文处理模块104用于根据第一报文对应的动作信息对第一报文进行处理。
可选的,表项查找模块102还用于根据纠错表对第一索引值进行纠错,得到第七索引值。
其中,纠错表包括至少一个表项,每个表项对应一个索引值,每个表项有对应的地址信息。纠错表中每个表项的地址信息是从报文规则信息表中获得的。关于纠错表的更多介绍请参阅后文的相关介绍。
那么基于该实现方式,结果选择模块103用于从第七索引值和第二索引值中选择第三目标索引值,并根据第三目标索引值从动作信息表中确定第一报文对应的动作信息。
下面结合具体实施例介绍本申请实施例的技术方案。
请参阅图2A,图2A为本申请实施例报文处理方法的一个实施例示意图。在图2A中,报文处理方法包括:
201、网络设备获取第一报文。
具体的,第一报文到达网络设备,网络设备的数据面可以提取第一报文的地址信息。例如,第一报文的目的地址、源地址等。
202、网络设备根据第一报文的地址信息和神经网络模型确定第一索引值。
本实施例中,第一报文的地址信息包括以下至少一项:第一报文的目的地址、第一报文的源地址。
神经网络模型是根据报文规则信息表进行模型训练得到的神经网络模型。
其中,报文规则信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息,每个表项有对应的地址信息。
可选的,每个表项对应的地址信息包括前缀和掩码等。每个表项对应的一个动作信息包括端口信息等。
需要说明的是,每个表项对应的一个动作信息可以包括针对不同网络层的动作信息。
例如,每个表项对应的一个动作信息包括端口号、以及目的MAC地址的索引值等。具体的,在IP层上,网络设备根据端口号对第一报文进行转发。在MAC层上,网络设备根据目的MAC地址的索引值确定目的MAC地址,并将第一报文的MAC地址修改为该目的MAC地址。
本实施例中,报文规则信息表包括以下任一种:FIB表、ACL、MAC地址表、防火墙策略表、流表。
例如,如下表1所示,报文规则信息表为FIB表。FIB表包括6行,每一行可以理解为一个表项。因此,表1中包括六个表项。第一个表项对应索引值0,第二个表项对应索引值1,以此类推,第六个表项对应索引值5。
表1
前缀prefix label
-/0 2
0/1 3
00/2 3
001/3 2
01/2 2
011/3 1
由上述表1可知,FIB表中每个FIB表项有对应的前缀、掩码和端口信息。即每个表项对应的地址信息包括前缀和掩码,每个表项对应的一个动作信息包括端口信息。该端口信息用于指示网络设备确定报文的出端口,并将报文从该出端口转发给下一跳路由节点。例如,上述表1中,第二个表项对应的前缀为0,掩码为/1,出端口为3。
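上述按最长前缀匹配确定出端口的过程可以用如下极简示意来说明(假设性示例,数据取自表1,前缀以比特串表示、零长度前缀-/0对应空串,仅示意按掩码长度取最长匹配的原则,并非本申请的实际实现):

```python
# 假设性示例:对表1所示FIB表按最长前缀匹配确定出端口
fib = [("", 0, 2), ("0", 1, 3), ("00", 2, 3),
       ("001", 3, 2), ("01", 2, 2), ("011", 3, 1)]  # (前缀, 掩码长度, 出端口)

def lpm(dst_bits):
    # 在所有命中的前缀中选取掩码长度最大的表项
    best = max((e for e in fib if dst_bits.startswith(e[0])),
               key=lambda e: e[1])
    return best[2]
```

例如目的地址高位为011时命中前缀011/3得到出端口1;高位为111时仅命中-/0,得到默认表项的出端口2。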
例如,如下表2所示,报文规则信息表为ACL。ACL包括6行,每一行可以理解为一个表项。因此,表2中包括六个表项。由表2可知,ACL中每个ACL表项有对应的五元组信息,五元组信息包括源地址信息、目的地址信息、源端口信息、目的端口信息、协议号信息。另外,每个ACL表项都有对应的优先级。
表2
源地址信息 目的地址信息 源端口信息 目的端口信息 协议号信息
156.225.120.30/32 13.56.195.105/32 0:65535 6789:6789 0x06
156.226.207.75/32 13.56.195.105/32 0:65535 1705:1705 0x06
156.228.58.152/32 13.56.195.105/32 0:65535 1525:1525 0x06
156.229.10.28/32 13.56.195.105/32 0:65535 1521:1521 0x06
156.231.43.50/32 13.56.195.105/32 0:65535 1707:1707 0x06
156.233.89.23/32 13.56.195.105/32 0:65535 1521:1521 0x06
具体的,第一报文的地址信息包括第一报文的目的地址。上述步骤202中,网络设备将第一报文的目的地址作为输入参数输入神经网络模型,得到神经网络模型输出的第一索引值。
需要说明的是,网络设备中的神经网络模型可以是其他设备向网络设备发送的,或者是预配置在网络设备中的,或者是网络设备自行训练得到的,具体本申请不做限定。针对网络设备自行训练得到神经网络模型的实现方式请参阅后文图6所示的实施例的相关介绍,这里不再赘述。
本实施例中,需要说明的是,若上述步骤202中神经网络模型没有返回结果,则表明第一报文的目的地址与报文规则信息表中的任何一个前缀不匹配。那么,网络设备可以采用默认(或缺省)的动作处理第一报文。例如,网络设备将该第一报文丢弃;或者,网络设备将第一报文从默认端口转发至下一跳路由节点。
203、网络设备根据第一索引值从动作信息表确定第一报文对应的动作信息。
其中,动作信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息。动作信息表用于指示报文规则信息表中每个表项对应的动作信息。
第一种可能的实现方式中,报文规则信息表中的第一表项与动作信息表中的第二表项一一对应,第一表项对应的索引值分别与第二表项对应的索引值相同。
其中,第一表项和第二表项分别包括一个或多个表项。第二表项对应的索引值包括第一索引值。
例如,报文规则信息表为上述表1所示的FIB表,动作信息表可以表示为下述表3。由表1和表3可知,报文规则信息表中的第一个表项与动作信息表中的第一个表项对应,报文规则信息表中的第二个表项与动作信息表的第二个表项对应,以此类推。由表1可知,报文规则信息表中包括6个表项,第一个表项对应索引值0,第二个表项对应索引值1,以此类推,第六个表项对应索引值5。由上述表3可知,动作信息表包括6个表项,第一个表项对应索引值0,第二个表项对应索引值1,以此类推,第六个表项对应索引值5。因此,第一表项包括报文规则信息表的第一个表项至第六个表项。第二表项包括动作信息表的第一个表项至第六个表项。第一表项分别对应的索引值与第二表项分别对应的索引值相同。
表3
label
2
3
3
2
2
1
由此可知,上述步骤203中网络设备通过神经网络模型输出的第一索引值可以理解为动作信息表中的对应表项的索引值。这样无需进行索引值的转换。上述步骤203中,网络设备从动作信息表确定第一索引值对应的表项,并将该表项对应的动作信息作为第一报文对应的动作信息。
在该实现方式中,可选的,报文规则信息表包括的表项数量与动作信息表中的表项数量相同,报文规则信息表中的表项按照索引值的大小顺序与动作信息表中的表项一一对应。例如,由上述表1和表3可知,表3包括的表项数量与表1包括的表项数量相同。表3中的表项可以与表1中的表项一一对应。例如,上述步骤202中确定得到的第一索引值为索引值0,那么可知索引值0对应上述表1中的第一个表项。而上述表1中的第一个表项与表3中的第一个表项对应。因此,第一索引值可以理解为是表3中的第一个表项的索引值。那么网络设备可以执行上述步骤203。动作信息表中的表项对应的端口信息为表1中与该动作信息表中的表项对应的表项的端口信息。
例如,第一索引值为索引值1。由于报文规则信息表中的表项按照索引值的大小顺序与动作信息表中的表项一一对应,即报文规则信息表中的表项对应的索引值与动作信息表中的对应表项对应的索引值相同。因此,网络设备根据表3可以确定索引值1对应第二个表项,第二个表项中的端口信息包括端口3。因此,网络设备可以确定从端口3转发第一报文。
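报文规则信息表与动作信息表按索引值一一对应时,按索引值直接查动作信息的过程可以示意如下(假设性示例,数据取自表3,仅用于说明原理):

```python
# 假设性示例:索引值一一对应时,直接按索引取动作信息表中的出端口
action_table = [2, 3, 3, 2, 2, 1]   # 对应表3,各表项的出端口

def action_of(index):
    # 神经网络模型输出的索引值即动作信息表中表项的索引,无需转换
    return action_table[index]
```

例如索引值1直接命中第二个表项,得到出端口3。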
下面结合步骤203a和步骤203b介绍第二种可能的实现方式。
203a:网络设备根据第一报文的地址信息和神经网络模型确定第三索引值。
其中,第三索引值为报文规则信息表中的第三表项对应的索引值。
例如,报文规则信息表为上述表1,第三索引值为上述表1中的第三个表项对应的索引值2。
步骤203b:网络设备根据第三索引值从映射表确定动作信息表的第四表项对应的第一索引值。
其中,第四表项为动作信息表中与第三表项对应的表项。映射表包括报文规则信息表中每个表项对应的动作信息表的表项的索引值。
可选的,动作信息表包括的表项数量与报文规则信息表包括的表项数量不相同。例如,动作信息表的表项与报文规则信息表的表项存在多对一的关系。
例如,报文规则信息表为上述表1,动作信息表为下述表4。
表4
label
2
3
1
也就是动作信息表只有三个表项,每个表项对应一个索引值。由表4可知,可以理解的是,每个出端口对应一个索引值。端口2对应索引值0,端口3对应索引值1,端口1对应索引值2。相比上述表1,出端口相同的表项被合并为一个表项,这样可以减少动作信息表的表项数量,减少存储的开销。
对于表1和表4的示例,映射表可以包括报文规则信息表中每个表项对应动作信息表的表项的索引值。具体如表5所示,表5为映射表的一个示意图。
表1所示的第一个表项中出端口为2,表4所示的第一个表项中出端口为2。因此,表1所示的第一个表项对应表4所示的第一个表项。表4中的第一个表项对应的索引值为0,因此下述表5的映射表中第一行的索引值为0。
表1所示的第二个表项中出端口为3,表4所示的第二个表项中出端口为3。因此,表1所示的第二个表项对应表4所示的第二个表项。表4中的第二个表项对应的索引值为1,因此下述表5的映射表中第二行的索引值为1。
表1所示的第三个表项中出端口为3,表4所示的第二个表项中出端口为3。因此,表1所示的第三个表项对应表4所示的第二个表项。表4中的第二个表项对应的索引值为1,因此下述表5的映射表中第三行的索引值为1。以此类推。表1所示的第六个表项对应表4所示的第三个表项。表4中的第三个表项对应的索引值为2,因此下述表5的映射表中的第六行的索引值为2。
表5
index值
0
1
1
0
0
2
上述表5中每个表项中的index值为表1中的表项对应的表4的表项的索引值。
例如,第三索引值为上述表1中的第三个表项对应的索引值2,那么网络设备根据上述表5确定表1中的第三个表项对应表4的第二个表项的索引值1,因此,第一索引值为索引值1。
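上述通过映射表进行索引值转换的过程可以用如下示意说明(假设性示例,mapping对应本文表5,actions对应本文表4,仅用于说明原理):

```python
# 假设性示例:映射表将报文规则信息表的索引值转换为动作信息表的索引值
mapping = [0, 1, 1, 0, 0, 2]   # 对应表5:表1各表项 -> 表4表项的索引值
actions = [2, 3, 1]            # 对应表4:各表项的出端口

def to_action(rule_index):
    # 先查映射表得到动作信息表的索引值,再查动作信息表得到出端口
    return actions[mapping[rule_index]]
```

例如第三索引值为索引值2(表1第三个表项),经映射表得到索引值1,再查表4得到出端口3。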
在后文的实施例中,以报文规则信息表中的表项按照索引值的大小顺序与动作信息表中的表项一一对应为例介绍本申请的技术方案。
204、网络设备根据第一报文对应的动作信息对第一报文进行处理。
一种可能的实现方式中,动作信息包括端口信息。上述步骤204具体包括步骤204a和步骤204b。
204a:网络设备根据端口信息确定第一报文的下一跳路由节点。
例如,如上述表2所示,端口信息为端口2。那么网络设备的端口2连接的路由节点为第一报文的下一跳路由节点。
204b:网络设备将第一报文转发到下一跳路由节点。
例如,端口信息为端口2。网络设备通过网络设备的端口2将第一报文转发给下一跳路由节点。
另一种可能的实现方式中,动作信息包括丢弃处理。那么上述步骤204具体为网络设备将第一报文丢弃。
基于上述步骤203中的第一种实现方式,下面结合图2B介绍本实施例的具体过程。如图2B所示,网络设备中的模型训练与验证模块基于报文规则信息表进行模型训练,得到神经网络模型。网络设备中的表项查找模块接收第一报文,再将第一报文的地址信息输入神经网络模型,得到第一索引值。然后,网络设备的结果选择模块根据第一索引值从动作信息表中确定第一报文对应的动作信息。网络设备的报文处理模块根据第一报文对应的动作信息对第一报文进行处理。
基于上述步骤203中的第二种实现方式,下面结合图2C介绍本实施例的具体过程。如图2C所示,网络设备执行的过程与上述图2B的相关介绍类似,不同的地方在于:网络设备中的表项查找模块得到第三索引值后,表项查找模块根据第三索引值从映射表确定动作信息表对应表项对应的第一索引值。网络设备的结果选择模块根据第一索引值从动作信息表中确定第一报文对应的动作信息。然后,网络设备的报文处理模块根据第一报文对应的动作信息对第一报文进行处理。
本申请实施例中,网络设备采用神经网络模型和第一报文的地址信息可以确定第一索引值。网络设备根据第一索引值从动作信息表确定第一报文对应的动作信息,并根据第一报文对应的动作信息对第一报文进行处理。网络设备无需存储大规模查找树结构,避免查找树结构带来的存储开销。并且,相比于网络设备通过查找树结构确定第一报文对应的动作信息的方式,网络设备采用神经网络模型和动作信息表确定第一报文对应的动作信息的方式更为快速,查找时延较小。进一步的,如图2D所示,在网络设备中,神经网络模型可以存储在片内,动作信息表可以存储在片外。这样网络设备在对第一报文进行处理时只需要进行片内访存,无需进行片外访存,从而节省了片外访存带宽,有效提升转发容量。
本申请实施例中,可选的,通过神经网络模型拟合报文规则信息中包括的表项,会出现无法拟合的表项。针对报文规则信息表中通过神经网络模型无法拟合的表项的情况,本申请提供了相应的技术方案。下面结合图3A所示的实施例进行介绍。
请参阅图3A,图3A为本申请实施例报文处理方法的另一个实施例示意图。在图3A中,报文处理方法包括:
301、网络设备获取第一报文。
302、网络设备根据第一报文的地址信息和神经网络模型确定第一索引值。
步骤301至步骤302与前述图2A所示的实施例中的步骤201至步骤202类似,具体请参阅前述图2A所示的实施例中的步骤201至步骤202的相关介绍,这里不再赘述。
303、网络设备根据第一报文的地址信息和查找树结构确定第二索引值。
其中,查找树结构为报文规则信息表中通过神经网络模型无法拟合的表项对应的查找树结构。
例如,报文规则信息表为下述表6所示的FIB表。
表6
前缀prefix label
-/0 2
1/1 2
0/1 3
00/2 3
001/3 2
01/2 2
011/3 1
上述步骤302中的神经网络模型可以表示为y=x+3。其中,x为长度为3比特的目的地址信息,y为索引值。该神经网络模型无法拟合的表项是以二进制1开头的前缀。例如,前缀加上掩码为1/1对应的表项的索引值为1,而通过上述神经网络模型预测得到的预测索引值为4,那么网络设备将前缀1/1及其对应的索引值1加入残参表中,并通过查找树算法表示为查找树结构。具体请参阅图3B所示的查找树结构。若第一报文的目的地址前三个高位表示为1**,则从图3B所示的查找树结构的右侧分支进行查找,查找到的节点中的数字1代表索引值。即第二索引值为索引值1。
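上述查找树结构的查找过程可以用如下极简示意说明(假设性示例,对应图3B仅含一个无法拟合表项的情形;以字典模拟查找树、按比特逐级匹配的数据结构均为本示例自行假设):

```python
# 假设性示例:用极简结构表示无法拟合的表项(对应图3B,仅含前缀1/1 -> 索引值1)
trie = {"1": {"index": 1}}   # 键为比特串前缀,对应查找树的右侧分支

def trie_lookup(dst_bits):
    # 沿目的地址比特逐级匹配,记录最长命中前缀对应的索引值
    best = None
    node_key = ""
    for bit in dst_bits:
        node_key += bit
        if node_key in trie:
            best = trie[node_key]["index"]
    return best
```

例如目的地址高位为1**时命中右侧分支,得到索引值1;以0开头时查找树没有返回结果。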
本实施例中,上述步骤303中的查找树结构可以是其他设备向网络设备发送的,也可以是网络设备将通过神经网络模型无法拟合的表项构成残参表,然后再通过查找树算法表示残参表中无法拟合的表项,得到查找树结构。关于查找树结构的确定过程请参阅后文相关介绍,这里不再赘述。
304、网络设备从第一索引值和第二索引值中确定第一目标索引值。
上述步骤304中,网络设备确定第一目标索引值有多种方式,下面示出两种可能的实现方式。
下面结合步骤304a至步骤304c介绍第一种可能的实现方式。
304a:网络设备确定第一索引值对应的掩码长度和第二索引值对应的掩码长度。
可选的,步骤304a中,网络设备可以通过以下两种可能的实现方式确定第一索引值对应的掩码长度和第二索引值对应的掩码长度。
实现方式1、网络设备从纠错表确定第一索引值对应的掩码长度和第二索引值对应的掩码长度。
其中,纠错表包括至少一个表项,每个表项对应一个索引值,每个表项有对应的地址信息。每个表项对应的地址信息包括前缀和掩码。
纠错表包括的表项数量与报文规则信息表包括的表项数量相同,即纠错表中的表项按照索引值的大小顺序与报文规则信息表中的表项一一对应。即可以理解的是,纠错表中的表项对应的索引值与报文规则信息表中的对应表项的索引值相同。纠错表中的表项对应的地址信息为报文规则信息表中对应表项的地址信息。
纠错表与动作信息表可以是逻辑上的一个表格,而在实际存储时可以通过一个表格存储纠错表的内容和动作信息表的内容,具体本申请不做限定。
例如,报文规则信息表为上述表6。由于纠错表中的表项按照索引值的大小顺序与报文规则信息表中的表项一一对应。纠错表中的表项对应的地址信息为报文规则信息表中对应表项的地址信息。因此,纠错表可以表示为下述表7。
表7
前缀prefix
-/0
1/1
0/1
00/2
001/3
01/2
011/3
例如,第一索引值为索引值3,第二索引值为索引值1,那么由表7可知,索引值3对应的表7的第四个表项,索引值1对应表7的第二个表项。由表7可知,第二个表项对应的掩码为/1,即掩码长度为1。第四个表项对应的掩码为/2,即掩码长度为2。
实现方式2:网络设备在上述步骤302中得到神经网络模型输出的第一索引值对应的掩码长度,以及在上述步骤303中通过查找树结构还得到第二索引值对应的掩码长度。
304b:若第一索引值对应的掩码长度大于第二索引值对应的掩码长度,则网络设备选择第一索引值作为第一目标索引值。
304c:若第一索引值对应的掩码长度小于第二索引值对应的掩码长度,则网络设备选择第二索引值作为第一目标索引值。
例如,如表7所示,第一索引值对应表7的第二个表项对应的掩码为/1,即掩码长度为1。第二索引值对应表7的第四个表项对应的掩码为/2,即掩码长度为2。那么网络设备选择第二索引值作为第一目标索引值。
上述步骤304a至步骤304c中的实现方式通常是在转发场景下采用的实现方式,也就是报文规则信息表为FIB表。
下面结合步骤304d至步骤304g介绍第二种可能的实现方式。
304d:网络设备确定第一索引值对应纠错表的第五表项和第二索引值对应纠错表的第六表项。
关于纠错表的相关介绍请参阅前述步骤304a中的相关介绍。在步骤304d中,纠错表中每个表项还对应一个优先级。报文规则信息表中每个表项对应一个优先级。纠错表中每个表项对应的一个优先级与报文规则信息表中对应的表项的优先级相同。
304e:网络设备确定第五表项对应的优先级和第六表项对应的优先级。
例如,报文规则信息表为ACL,报文规则信息表中每个表项都有对应的优先级。由于纠错表中的表项按照索引值的大小顺序与报文规则信息表中的表项一一对应。因此,纠错表中每个表项有对应的优先级,并且纠错表中每个表项对应的一个优先级与报文规则信息表中对应的表项的优先级相同。因此,网络设备根据纠错表可以确定第五表项对应的优先级和第六表项对应的优先级。
需要说明的是,纠错表中每个表项对应的优先级可以是用户预先配置的,或者是网络设备自行确定的,具体本申请不做限定。
304f:若第五表项对应的优先级高于第六表项对应的优先级,则网络设备选择第一索引值作为第一目标索引值。
304g:若第五表项对应的优先级低于第六表项对应的优先级,则网络设备选择第二索引值作为第一目标索引值。
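上述按优先级选择目标索引值的过程(步骤304d至步骤304g)可以用如下示意说明(假设性示例,表项编号与优先级数值均为本示例假设,约定数值越大表示优先级越高):

```python
# 假设性示例:按纠错表中表项的优先级从两个索引值中选择目标索引值
priorities = {5: 10, 6: 20}   # 假设的表项编号 -> 优先级(数值越大优先级越高)

def select_by_priority(idx1, entry1, idx2, entry2):
    # 优先级高的表项对应的索引值作为第一目标索引值
    return idx1 if priorities[entry1] > priorities[entry2] else idx2
```

例如第六表项的优先级高于第五表项时,选择第二索引值作为第一目标索引值。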
需要说明的是,图3A所示的实施例中步骤302和步骤303的执行顺序并不限定,可以先执行步骤302,再执行步骤303;或者先执行步骤303,再执行步骤302;或者同时并行执行步骤302和步骤303,具体本申请不做限定。
上述步骤304d至步骤304g中的实现方式通常是在访问控制场景下采用的实现方式,也就是报文规则信息表为ACL。
下面结合图3C介绍本实施例的技术方案。如图3C所示,网络设备中的模型训练与验证模块基于报文规则信息表进行模型训练,得到神经网络模型。模型训练与验证模块根据查找树算法将通过神经网络模型无法拟合的表项表示为对应的查找树结构。
网络设备中的表项查找模块接收第一报文,然后并行将第一报文的地址信息输入神经网络模型和查找树结构,分别得到第一索引值和第二索引值。网络设备的结果选择模块从第一索引值和第二索引值中选择第一目标索引值,并根据第一目标索引值从动作信息表中确定第一报文对应的动作信息。然后,网络设备的报文处理模块根据第一报文对应的动作信息对第一报文进行处理。由上述图3C所示的示例可知,上述步骤302和步骤303可以并行执行。
需要说明的是,如上述图2D所示,步骤303中的查找树结构可以存储在片内。由于查找树结构是通过神经网络模型无法拟合的表项对应的查找树结构,因此本申请的技术方案可以实现已有的大规模查找树结构的压缩,减少网络设备中查找树结构的存储开销,有效提升转发容量。
另外,由于已有的大规模查找树结构在片内无法全部存储,那么有一些查找树结构存储在片外。这样会导致在对报文进行处理时需要执行片外访存,导致片外访存次数较多,从而带来一定的片外访存带宽。而本申请的技术方案中,神经网络模型和查找树结构可以存储在片内。这样在确定索引值时只需要进行片内访存,无需进行片外访存,从而节省了片外访存带宽,有效提升转发容量。
305、网络设备根据第一目标索引值从动作信息表确定第一报文对应的动作信息。
306、网络设备根据第一报文对应的动作信息对第一报文进行处理。
上述步骤305至步骤306与前述图2A所示的实施例中的步骤203至步骤204类似,具体请参阅前述图2A所示的实施例的相关介绍,这里不再赘述。
上述图3A所示的实施例示出了网络设备将第一报文地址信息分别输入神经网络模型和查找树结构得到对应索引值的方案。需要说明的是,如果网络设备将第一报文的地址信息分别输入神经网络模型和查找树结构都没有返回结果,那么表明第一报文的地址信息不与报文规则信息表中的任何一个前缀匹配。如果网络设备将第一报文的地址信息输入神经网络模型得到第一索引值,将第一报文的地址信息输入查找树结构没有返回结果,那么网络设备以第一索引值为最终得到的索引值。如果网络设备将第一报文的地址信息输入神经网络模型没有返回结果,将第一报文的地址信息输入查找树结构得到第二索引值,那么网络设备以第二索引值为最终得到的索引值。具体可以表示为下表8。
表8
<神经网络模型返回结果,查找树结构返回结果> 最终得到的索引值
<0,0> 无匹配,采用默认动作
<1,0> index_NN
<0,1> index_lookup
<1,1> 按掩码长度从index_NN和index_lookup中选择
上述表8中,<0,0>表示神经网络模型和查找树结构均没有返回结果,表明第一报文的地址信息不与报文规则信息表中的任何一个前缀匹配。
<1,0>表示神经网络模型输出index_NN,而查找树结构没有返回结果,那么网络设备 以index_NN为最终得到的索引值。
<0,1>表示神经网络模型没有返回结果,查找树结构返回index_lookup,那么网络设备以index_lookup为最终得到的索引值。
<1,1>表示神经网络模型输出index_NN,查找树结构返回index_lookup。当index_NN对应的表项的掩码长度mask_NN大于index_lookup对应的表项的掩码长度mask_lookup时,网络设备以index_NN为最终得到的索引值;当mask_NN小于或等于mask_lookup时,网络设备以index_lookup为最终得到的索引值。
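上述表8的选择逻辑可以用如下示意概括(假设性示例,用None表示某一路没有返回结果,函数与参数命名均为本示例自行假设):

```python
def select_index(index_nn, mask_nn, index_lookup, mask_lookup):
    """按表8的逻辑从两路结果中选出最终索引值(假设性示意)"""
    if index_nn is None and index_lookup is None:
        return None                      # <0,0>:无任何前缀匹配,走默认动作
    if index_lookup is None:
        return index_nn                  # <1,0>
    if index_nn is None:
        return index_lookup              # <0,1>
    # <1,1>:两路均有结果,按最长前缀匹配原则比较掩码长度
    return index_nn if mask_nn > mask_lookup else index_lookup
```

该函数返回None时,网络设备可以按前文所述采用默认(或缺省)的动作处理第一报文。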
本申请实施例中,网络设备根据第一报文的地址信息和神经网络模型确定第一索引值。网络设备根据第一报文的地址信息和查找树结构确定第二索引值。然后,网络设备再从第一索引值和第二索引值中确定第一目标索引值,并根据第一目标索引值从动作信息表中确定第一报文对应的动作信息。网络设备根据第一报文对应的动作信息对第一报文进行处理。本申请中,通常报文规则信息表中90%的表项可以通过神经网络模型拟合,报文规则信息表中无法拟合的表项较少,因此网络设备只需要存储通过神经网络模型无法拟合的表项对应的查找树结构。因此本申请的技术方案可以实现已有的大规模查找树结构的压缩,减少网络设备中查找树结构的存储开销,有效提升转发容量。并且,相比于网络设备通过大规模的查找树结构确定索引值的方式来说,网络设备根据第一报文的地址信息和神经网络模型确定第一索引值的方式更为快速,查找时延较小。
本申请实施例中,可选的,上述步骤202中网络设备得到第一索引值之后,网络设备可以对第一索引值进行纠错,以便于后续更精准地为第一报文确定对应的动作信息。下面结合图4A所示的实施例介绍该过程。
请参阅图4A,图4A为本申请实施例报文处理方法的另一个实施例示意图。在图4A中,报文处理方法包括:
401、网络设备获取第一报文。
402、网络设备根据第一报文的地址信息和神经网络模型确定第一索引值。
步骤401至步骤402与前述图2A所示的实施例中步骤201至步骤202类似,具体请参阅前述图2A所示的实施例中步骤201至步骤202的相关介绍,这里不再赘述。
403、网络设备从纠错表确定第七表项对应的前缀和掩码。
其中,第七表项包括:纠错表中第一索引值对应的表项和在第一索引值的预设阈值范围内的索引值对应的表项。
纠错表包括至少一个表项,每个表项对应一个索引值,每个表项有对应的地址信息。每个表项对应的地址信息包括前缀和掩码。
纠错表包括的表项数量与报文规则信息表包括的表项数量相同,即纠错表中的表项按照索引值的大小顺序与报文规则信息表中的表项一一对应。即可以理解的是,纠错表中的表项对应的索引值与报文规则信息表中的对应表项的索引值相同。纠错表中的表项对应的地址信息为报文规则信息表中对应表项的地址信息。
纠错表与动作信息表可以是逻辑上的一个表格,而在实际存储时可以通过一个表格存储纠错表的内容和动作信息表的内容,具体本申请不做限定。
例如,报文规则信息表为上述表1。由于纠错表中的表项按照索引值的大小顺序与报文规则信息表中的表项一一对应。纠错表中的表项对应的地址信息为报文规则信息表中对应表项的地址信息。因此,纠错表可以表示为下述表9。
表9
前缀prefix
-/0
0/1
00/2
001/3
01/2
011/3
例如,第一索引值为索引值3,对应表9中的第四个表项,预设阈值error bound为2。那么第七表项包括上述表9中的第二个表项、第三个表项、第四个表项、第五个表项和第六个表项。网络设备从表9中确定第二个表项、第三个表项、第四个表项、第五个表项和第六个表项分别对应的前缀和掩码。
需要说明的是,预设阈值error bound可以是用户根据配置手册配置的,或者是网络设备自行确定的,具体本申请不做限定。
本实施例中,预设阈值的大小设计可以考虑以下至少一个因素:报文规则信息表中的表项数量大小、查找时延要求、模型精度要求。
其中,报文规则信息表中的表项数量越多,预设阈值越大。查找时延要求越高,预设阈值越小。模型精度要求越高,预设阈值越小。
404、网络设备确定第八表项对应的前缀与第一报文的目的地址匹配,且第八表项对应的掩码为第七表项对应的掩码中长度最大的掩码。
例如,第七表项包括上述表9中的第二个表项、第三个表项、第四个表项、第五个表项和第六个表项。第一报文的目的地址中的前三个高位为001,那么由上述表9可知,第四个表项的前缀为001。因此,网络设备确定第四个表项的前缀与第一报文的目的地址匹配。
需要说明的是,上述示例示出的是第七表项中只有第八表项的前缀与第一报文的目的地址匹配,那么网络设备可以直接选择第八表项,无需再比对表项对应的掩码长度。如果第七表项中存在多个表项的前缀与第一报文的目的地址匹配的情况,则网络设备需要进一步确定多个表项对应的掩码长度,确定掩码长度最大的表项作为第八表项。
405、网络设备确定第八表项对应的第四索引值。
例如,第八表项为上述表9中的第四个表项,那么第四索引值为索引值3。
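上述步骤403至步骤405的纠错过程可以用如下示意说明(假设性示例,纠错表数据取自表9,预设阈值error bound取2,前缀以比特串表示、-/0对应空串,函数命名为本示例自行假设):

```python
# 假设性示例:根据纠错表在预设阈值范围内对神经网络输出的索引值进行纠错
error_table = ["-/0", "0/1", "00/2", "001/3", "01/2", "011/3"]  # 对应表9

def parse(entry):
    prefix, mask = entry.split("/")
    return ("" if prefix == "-" else prefix), int(mask)

def correct(index, dst_bits, bound=2):
    # 在[index-bound, index+bound]的表项中,取前缀匹配且掩码长度最大的表项索引
    best_idx, best_mask = None, -1
    lo, hi = max(0, index - bound), min(len(error_table) - 1, index + bound)
    for i in range(lo, hi + 1):
        prefix, mask = parse(error_table[i])
        if dst_bits.startswith(prefix) and mask > best_mask:
            best_idx, best_mask = i, mask
    return best_idx
```

例如第一索引值为索引值3、目的地址高位为001时,命中前缀001/3,纠错后的第四索引值仍为索引值3。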
下面结合图4B介绍本实施例的技术方案。如图4B所示,网络设备中的模型训练与验证模块用于基于报文规则信息表进行模型训练,得到神经网络模型。
网络设备中的表项查找模块接收第一报文,将第一报文的地址信息输入神经网络模型,得到第一索引值。然后,网络设备根据纠错表对第一索引值进行纠错,得到第四索引值。网络设备的结果选择模块根据第四索引值从动作信息表中确定第一报文对应的动作信息。然后,网络设备的报文处理模块根据第一报文对应的动作信息对第一报文进行处理。
406、网络设备根据第四索引值从动作信息表中确定第一报文对应的动作信息。
由上述图2A所示的实施例中步骤203第一种可能的实现方式的相关描述可知,报文规则信息表包括的表项数量与动作信息表包括的表项数量相同,报文规则信息表包括的表项按照索引值的大小顺序与动作信息表包括的表项一一对应。即可以理解的是,报文规则信息表中的表项对应的索引值与动作信息表中的对应表项对应的索引值相同。由上述步骤403的描述可知,纠错表包括的表项数量与报文规则信息表包括的表项数量相同,纠错表包括的表项与报文规则信息表包括的表项一一对应。即可以理解的是,纠错表中的表项对应的索引值与报文规则信息表中的对应表项的索引值相同。因此,第四索引值可以理解为动作信息表中对应表项对应的索引值。
407、网络设备根据第一报文对应的动作信息对第一报文进行处理。
步骤406至步骤407与前述图2A所示的实施例中的步骤203至步骤204类似,具体请参阅前述图2A所示的实施例中的步骤203至步骤204的相关介绍,这里不再赘述。
本申请实施例中,网络设备根据第一报文的地址信息和神经网络模型确定第一索引值。然后,网络设备根据纠错表对第一索引值进行纠错,得到第四索引值。网络设备根据第四索引值从动作信息表中确定第一报文对应的动作信息,并根据第一报文对应的动作信息对第一报文进行处理。由此可知,网络设备无需存储大规模查找树结构,避免查找树结构带来的存储开销。相比于网络设备通过查找树结构确定第一报文对应的动作信息的方式,网络设备采用神经网络模型、纠错表和动作信息表确定第一报文对应的动作信息的方式更为快速,查找时延较小。进一步的,网络设备根据纠错表对第一索引值进行纠错,得到第四索引值,再确定第一报文对应的动作信息。这样可以更精确地为第一报文确定对应的动作信息。
本实施例中,可选的,如图2D所示,网络设备中的神经网络模型可以存储在片内,动作信息表可以存储在片外。这样网络设备在对第一报文进行处理时只需要进行片内访存,无需进行片外访存,从而节省了片外访存带宽,有效提升转发容量。
下面结合图5A所示的实施例介绍以下技术方案的详细流程。该技术方案为:网络设备根据第一报文的地址信息、神经网络模型和纠错表得到第五索引值。然后,网络设备根据第一报文的地址信息和查找树结构确定第六索引值。网络设备结合第五索引值和第六索引值最终确定第一报文对应的动作信息。
请参阅图5A,图5A为本申请实施例报文处理方法的另一个实施例示意图。在图5A中,报文处理方法包括:
501、网络设备获取第一报文。
502、网络设备根据第一报文的地址信息和神经网络模型确定第一索引值。
步骤501至步骤502与前述图2A所示的实施例中的步骤201至步骤202类似,具体请参阅前述图2A所示的实施例中的步骤201至步骤202的相关介绍,这里不再赘述。
503、网络设备从纠错表确定第九表项对应的前缀和掩码。
其中,第九表项包括纠错表中第一索引值对应的表项和在第一索引值的预设阈值范围内的索引值对应的表项。
504、网络设备确定第十表项对应的前缀与第一报文的目的地址匹配,且第十表项对应的掩码为第九表项对应的掩码中长度最大的掩码。
505、网络设备确定第十表项对应的第五索引值。
步骤503至步骤505与前述图4A所示的实施例中的步骤403至步骤405类似,具体请参阅前述图4A所示的实施例中的步骤403至步骤405的相关介绍,这里不再赘述。
506、网络设备根据第一报文的地址信息和查找树结构确定第六索引值。
507、网络设备从第五索引值和第六索引值中确定第二目标索引值。
508、网络设备根据第二目标索引值从动作信息表确定第一报文对应的动作信息。
509、网络设备根据第一报文对应的动作信息对第一报文进行处理。
步骤506至步骤509与前述图3A所示的实施例中的步骤303至步骤306类似,具体请查阅前述图3A所示的实施例中的步骤303至步骤306的相关介绍,这里不再赘述。
下面结合图5B介绍本实施例的具体过程。
网络设备的模型训练与验证模块用于基于报文规则信息表进行模型训练,得到神经网络模型;根据查找树算法将通过神经网络模型无法拟合的表项表示为对应的查找树结构。
表项查找模块接收第一报文。表项查找模块并行将第一报文的地址信息分别输入神经网络模型和查找树结构,得到第一索引值和第六索引值。表项查找模块根据纠错表对第一索引值进行纠错,得到第五索引值。网络设备的结果选择模块从第五索引值和第六索引值中选择第二目标索引值,并根据第二目标索引值从动作信息表中确定第一报文对应的动作信息。网络设备的报文处理模块根据第一报文对应的动作信息对第一报文进行处理。
本申请实施例中,网络设备根据第一报文的地址信息和神经网络模型确定第一索引值;网络设备根据纠错表对第一索引值进行纠错,得到第五索引值。网络设备根据第一报文的地址信息和查找树结构确定第六索引值。然后,网络设备从第五索引值和第六索引值中选择第二目标索引值,并根据第二目标索引值从动作信息表中确定第一报文对应的动作信息。然后,网络设备中的报文处理模块根据第一报文对应的动作信息对第一报文进行处理。由于查找树结构是通过神经网络模型无法拟合的表项对应的查找树结构,因此本申请的技术方案可以实现已有的大规模查找树结构的压缩,减少网络设备中查找树结构的存储开销,有效提升转发容量。相比于网络设备通过大规模的查找树结构确定索引值的方式,网络设备采用神经网络模型、纠错表和动作信息表确定索引值的方式更为快速,查找时延较小。并且,网络设备可以进一步根据纠错表对第一索引值进行纠错,得到第五索引值。然后,网络设备再结合第五索引值和第六索引值确定第一报文对应的动作信息。这样可以更准确地为第一报文确定对应的动作信息,更精准地对第一报文进行处理。
本申请实施例中,可选的,在上述图2A所示的实施例的步骤202或上述图3A所示的实施例的步骤302或上述图4A所示的实施例的步骤402或上述图5A所示的实施例的步骤502之前,网络设备可以根据报文规则信息表进行模型训练,得到神经网络模型。下面结合图6A介绍该训练过程。
请参阅图6A,图6A为本申请实施例报文处理方法的另一个实施例示意图。在图6A中,报文处理方法包括:
601、网络设备确定神经网络结构。
其中,神经网络结构可以为分层结构或者不分层结构,具体本申请不做限定。
下面以神经网络结构为分层结构为例进行说明。
例如,如图6B所示,神经网络结构为分层结构。网络设备确定神经网络结构的高度H、宽度W、以及神经网络结构包括的微模型数量。每个微模型可以采用RELU激活函数表示,这样神经网络模型可以通过分段线性函数表示。
需要说明的是,神经网络结构的宽度、高度和微模型数量可以是用户根据配置手册配置的,也可以是网络设备自行配置的,这里不再赘述。神经网络结构的宽度和微模型数量具体可以结合场景需求设置。
例如,对于时延敏感的场景,神经网络结构的宽度可以较大,微模型数量较多。这样网络设备存储的神经网络模型的存储较大,但是网络设备通过神经网络模型进行检索的时延较小。例如,数据中心网络。
例如,对于存储要求较高的场景,神经网络结构的宽度可以较小,微模型数量较少。这样网络设备存储的神经网络模型的存储较小,但是网络设备通过神经网络模型进行检索的时延较大。例如,广域网场景。
因此,在实际应用中可以结合实际需求设置神经网络结构的宽度、高度和微模型数量。
602、网络设备根据报文规则信息表和神经网络结构进行模型训练,得到神经网络模型。
例如,如图6B所示的神经网络结构,网络设备可以在整个互联网协议(internet protocol,IP)地址空间中采样IP地址,得到第一采样IP地址。网络设备根据上述表1所示的报文规则信息表确定第一采样IP地址对应的索引值。然后,网络设备根据第一采样IP地址、第一采样IP地址对应的索引值训练神经网络结构中的第一级(即stage0)的微模型。例如,如图6B所示的神经网络结构中,第一级的微模型Submodel 0,0(x)。该微模型可以通过线性整流(rectified linear unit,RELU)激活函数表示,那么训练过程主要得到的是该RELU激活函数中的各项系数的具体取值,各项系数的取值可以称为该微模型的参数。
当第一级的微模型收敛后,网络设备根据第一级的微模型的输出得到第二级(即stage1)的各个微模型负责的IP地址范围。网络设备从第二级的各个微模型负责的IP地址范围采样IP地址,得到第二采样IP地址。网络设备根据报文规则信息表确定第二采样IP地址对应的索引值。然后,网络设备根据第二采样IP地址、第二采样IP地址对应的索引值训练神经网络结构中的第二级的各个微模型。
对于神经网络结构中其他级的微模型的训练过程类似,这里不再说明。网络设备通过上述过程实现对神经网络模型所有级的微模型的训练,得到每个微模型的参数。然后,网络设备根据每个微模型的参数和神经网络结构确定神经网络模型。
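上述训练的基本思想可以用如下极简线性拟合示意(假设性示例,仅演示根据采样地址与索引值拟合单个微模型参数的思路,采用最小二乘法,并非图6B所示多级结构的完整训练实现):

```python
# 假设性示例:用最小二乘法拟合单个微模型 y = a*x + b 的参数
def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

xs = [0, 1, 2, 3, 4, 5]        # 采样地址(假设为小整数)
ys = [0, 1, 2, 3, 4, 5]        # 采样地址对应的索引值(假设完全线性)
a, b = fit_linear(xs, ys)
pred = round(a * 3 + b)        # 用训练好的微模型预测地址3对应的索引值
```

对分级结构而言,第一级微模型收敛后,其输出划分出第二级各微模型负责的地址范围,再对各范围内的采样数据重复类似的拟合过程。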
上述步骤601至步骤602可以是网络设备的控制面执行的。例如,可以是网络设备的CPU;或者是网络设备中集成的AI芯片执行的,具体本申请不做限定。
需要说明的是,当网络设备首次启动时,网络设备可以执行上述步骤601至步骤602的过程,以得到神经网络模型;或者,当网络设备的报文规则信息表中的表项发生大规模更新时,网络设备可以执行上述步骤601至步骤602的过程,以得到最新的神经网络模型。
网络设备的报文规则信息表中的表项发生大规模更新的场景有多种。例如,数据通信网络系统中节点发生故障或由于其他原因节点无法进行通信,导致转发路径无法使用,这种场景需要变更转发路径,那么报文规则信息表中的表项对应的动作信息发生改变。因此报文规则信息表中的表项需要更新。
需要说明的是,在上述图3A所示的实施例中的步骤304a中,如果神经网络模型还输出索引值对应的掩码长度,那么网络设备在训练神经网络模型时,还应当将采样IP地址匹配的表项对应的掩码长度作为训练数据。也就是网络设备结合采样IP地址、采样IP地址对应的索引值以及采样IP地址对应的掩码长度进行训练,得到神经网络模型。
可选的,网络设备的控制面向网络设备的数据面发送第一消息。
其中,第一消息用于向网络设备的数据面下发或更新神经网络模型。
可选的,第一消息包括第二报文。第二报文的报文头部包括神经网络模型使能位、神经网络模型的高度、神经网络模型的宽度、神经网络模型中的微模型标识。第二报文的载荷(payload)包括神经网络模型的参数。神经网络模型使能位取值为一。
例如,如图5B所示,网络设备在控制面上进行模型训练,得到神经网络模型。网络设备的控制面向网络设备的数据面发送第一消息,第一消息包括第二报文。第二报文的格式请参阅图6C,在图6C中,第二报文的报文头部(header)包括神经网络模型使能位NN enable、神经网络模型的高度H_NN、宽度W_NN以及神经网络模型中的微模型Submodel_ID的标识。
该神经网络模型使能位NN enable=1,用于指示该第二报文用于向网络设备的数据面指示神经网络模型。例如,如图6B所示,神经网络模型的高度H_NN=3,神经网络模型的宽度W_NN=512。
第二报文的载荷(payload)中包括每个微模型的参数。
例如,Submodel_ID包括Submodel 0,0(x)、Submodel 1,0(x)、Submodel 1,1(x)、Submodel 1,2(x)、Submodel 1,3(x)、Submodel 1,4(x)、Submodel 1,5(x)、Submodel 1,6(x)、Submodel 1,7(x)、……、Submodel 1,w1(x)分别对应的Submodel ID。其中,w1为大于或等于7的整数。例如,a1、b1、c1和d1具体可以是图6B中的微模型Submodel 0,0(x)的参数。a2、b2、c2和d2具体可以是图6B中的微模型Submodel 1,0(x)的参数,以此类推。
需要说明的是,如果一个第二报文的载荷无法承载图6B所示的所有微模型的参数,那么第一消息可以包括多个第二报文,每个第二报文的报文头格式与图6C所示的报文头类似,不同的地方在于,第二报文的报文头中包括的微模型标识是该第二报文的载荷承载的微模型的参数对应的微模型标识。
例如,图6C所示的第二报文的报文头包括的微模型标识分别为Submodel 0,0(x)、Submodel 1,0(x)、Submodel 1,1(x)、Submodel 1,2(x)、Submodel 1,3(x)、Submodel 1,4(x)、Submodel 1,5(x)、Submodel 1,6(x)、Submodel 1,7(x)、……、Submodel 1,w1(x)分别对应的微模型标识。其中,w1为大于或等于7的整数。而第二报文的载荷包括这些微模型的参数。
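第二报文头部与载荷的组装过程可以用如下示意说明(假设性示例,各字段的宽度、字节序与排列均为本示例自行假设,并非本申请规定的实际格式):

```python
import struct

# 假设性示例:组装携带神经网络模型参数的第二报文(字段宽度为假设值)
def build_model_msg(h_nn, w_nn, submodel_ids, params):
    header = struct.pack("!BHH", 1, h_nn, w_nn)          # NN enable=1、高度、宽度
    header += struct.pack("!H", len(submodel_ids))       # 微模型标识个数(假设字段)
    for sid in submodel_ids:
        header += struct.pack("!H", sid)                 # 各微模型标识
    payload = struct.pack("!%df" % len(params), *params) # 载荷:微模型参数
    return header + payload

msg = build_model_msg(3, 512, [0, 1], [0.5, -1.0, 2.0, 0.0])
```

若一个报文的载荷无法承载全部参数,可按同样的头部格式拆分为多个第二报文发送。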
本申请实施例中,可选的,上述图3A所示的实施例的步骤303之前,网络设备可以确定查找树结构。下面结合图7A所示的实施例介绍该过程。
请参阅图7A,图7A为本申请实施例报文处理方法的另一个实施例示意图。在图7A中,报文处理方法包括:
701、网络设备确定报文规则信息表中通过神经网络模型无法拟合的表项。
702、网络设备根据查找树算法表示通过神经网络模型无法拟合的表项,得到查找树结构。
例如,如图5B所示,网络设备确定通过神经网络模型无法拟合的表项,并将通过神经网络模型无法拟合的表项构成残参表(remainder set)。网络设备根据查找树算法将残参表包括的表项表示为查找树结构。关于步骤701至步骤702可以参阅前述图3A中步骤303的相关介绍,这里不再赘述。
上述步骤701至步骤702可以是网络设备的控制面执行的。例如,可以是网络设备的中央处理器(central processing unit,CPU);或者是网络设备中集成的人工智能(artificial intelligence,AI)芯片执行的,具体本申请不做限定。
需要说明的是,当网络设备首次启动时,网络设备可以执行上述步骤701至步骤702的过程,以得到查找树结构。或者,当网络设备的报文规则信息表中的表项发生大规模更新且通过神经网络模型无法拟合的表项发生变化时,网络设备可以执行上述步骤701至步骤702的过程,以得到最新的查找树结构。关于报文规则信息表中的表项发生大规模更新的情况的相关介绍请参阅前文的相关介绍。
可选的,网络设备的控制面向网络设备的数据面发送第二消息。
其中,第二消息用于向网络设备的数据面下发或更新查找树结构。
可选的,第二消息包括第三报文。第三报文的报文头部包括查找树结构使能位、查找树结构的类型、查找树结构中的待更新起始节点标识、查找树结构中的待更新终止节点标识。第三报文的载荷包括查找树结构。
其中,查找树使能位的取值为一。
例如,如图5B所示,网络设备在控制面上确定通过神经网络模型无法拟合的表项,并通过查找树算法表示通过神经网络模型无法拟合的表项得到查找树结构。网络设备的控制面向网络设备的数据面发送第二消息,第二消息包括第三报文。第三报文的格式请参阅图7B,第三报文的报文头包括查找树使能位Trie enable。Trie enable=1,用于指示第三报文用于向网络设备的数据面指示查找树结构。第三报文的报文头还包括查找树类型Trie_type、待更新起始节点标识Node_s、查找树结构中的待更新终止节点标识Node_e。第三报文的载荷包括查找树结构Lookup Trie。
例如,对于如图3B所示的查找树结构,第三报文的载荷包括两个节点、一条边(且该边为右侧分支)以及节点上的索引值1。
需要说明的是,如果一个第三报文的载荷无法承载图3B所示的查找树结构的相关信息,那么第二消息可以包括多个第三报文,每个第三报文的报文头格式与图7B所示的报文头类似,通过多个第三报文的载荷下发查找树结构的相关信息。
本申请实施例中,可选的,在图4B或图5B中,网络设备的控制面可以确定纠错表。然后,网络设备的控制面向网络设备的数据面发送第三消息。第三消息用于向网络设备的数据面下发或更新纠错表。
可选的,第三消息包括第四报文,第四报文的报文头部包括纠错表使能位、纠错表中待更新表项的起始位置和终止位置,纠错表使能位的取值为一;第四报文的载荷包括待更新表项对应的前缀和掩码。具体可以参阅图8所示的第四报文的具体格式。
例如,如表7所示的纠错表,第四报文的报文头部包括待更新表项的起始位置为第一个表项,待更新表项的终止位置为第三个表项。也就是说该第四报文用于下发或更新表7中的前三个表项包括的前缀和掩码。那么该第四报文的载荷包括表7的第一个表项的前缀和掩码,表7的第二个表项的前缀和掩码和表7中的第三个表项的前缀和掩码。
需要说明的是,如果一个第四报文的载荷无法承载纠错表所有表项的前缀和掩码,那么第三消息可以包括多个第四报文,通过多个第四报文的载荷承载纠错表中所有表项的前缀和掩码。
下面结合图9和图10介绍本申请实施例提供的网络设备。
请参阅图9,图9为本申请实施例网络设备的一个结构示意图。网络设备可以是上述方法实施例中描述的网络设备,也可以是上述方法实施例中网络设备的芯片或组件。网络设备可以用于执行上述方法实施例中网络设备执行的部分或全部步骤。
如图9所示,网络设备包括收发模块901和处理模块902。
收发模块901,用于获取第一报文;
处理模块902,用于根据第一报文的地址信息和神经网络模型确定第一索引值;根据第一索引值从动作信息表确定第一报文对应的动作信息,动作信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息;根据第一报文对应的动作信息对第一报文进行处理。
一种可能的实现方式中,处理模块902还用于:
根据第一报文的地址信息和查找树结构确定第二索引值,查找树结构为报文规则信息表中通过神经网络模型无法拟合的表项对应的查找树结构;
从第一索引值和第二索引值确定第一目标索引值;
处理模块902具体用于:
根据第一目标索引值从动作信息表确定第一报文对应的动作信息。
另一种可能的实现方式中,神经网络模型是根据报文规则信息表进行模型训练得到的。
另一种可能的实现方式中,报文规则信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息,动作信息表用于指示报文规则信息表中的每个表项对应的动作信息。
另一种可能的实现方式中,报文规则信息表中的第一表项与动作信息表中的第二表项对应,第一表项和第二表项分别包括一个或多个表项;第一表项对应的索引值分别与第二表项对应的索引值相同,第二表项对应的索引值包括第一索引值。
另一种可能的实现方式中,处理模块902具体用于:
根据第一报文的地址信息和神经网络模型确定第三索引值,第三索引值为报文规则信息表中的第三表项对应的索引值;
根据第三索引值从映射表确定动作信息表的第四表项对应的第一索引值,第四表项为动作信息表中与第三表项对应的表项,映射表包括报文规则信息表中每个表项对应的动作信息表的表项的索引值。
另一种可能的实现方式中,处理模块902具体用于:
确定第一索引值对应的掩码长度和第二索引值对应的掩码长度;
若第一索引值对应的掩码长度大于第二索引值对应的掩码长度,则选择第一索引值作为第一目标索引值;
若第一索引值对应的掩码长度小于第二索引值对应的掩码长度,则选择第二索引值作为第一目标索引值。
另一种可能的实现方式中,处理模块902具体用于:
确定第一索引值对应纠错表的第五表项和第二索引值对应纠错表的第六表项;纠错表包括至少一个表项,每个表项对应一个索引值和一个优先级,纠错表中的表项按照索引值的大小顺序与报文规则信息表中的表项一一对应,报文规则信息表中每个表项对应一个优先级,纠错表中每个表项对应的一个优先级与报文规则信息表中对应的表项的优先级相同;
根据纠错表确定第五表项对应的优先级和第六表项对应的优先级;
若第五表项对应的优先级高于第六表项对应的优先级,则选择第一索引值作为第一目标索引值;
若第五表项对应的优先级低于第六表项对应的优先级,则选择第二索引值作为第一目标索引值。
另一种可能的实现方式中,处理模块902还用于:
从纠错表确定第七表项对应的前缀和掩码;纠错表包括至少一个表项,每个表项对应一个索引值,每个表项有对应的地址信息,每个表项对应的地址信息包括前缀和掩码;第七表项包括在纠错表中第一索引值对应的表项和在第一索引值的预设阈值范围内的索引值对应的表项;
从第七表项中确定第八表项对应的前缀与第一报文的目的地址匹配,第八表项对应的掩码为第七表项对应的掩码中掩码长度最大的掩码;
确定第八表项对应的第四索引值;
处理模块902具体用于:
根据第四索引值从动作信息表确定第一报文对应的动作信息。
另一种可能的实现方式中,处理模块902还用于:
从纠错表确定第九表项对应的前缀和掩码;纠错表包括至少一个表项,每个表项对应一个索引值,每个表项有对应的地址信息,每个表项对应的地址信息包括前缀和掩码;第九表项包括在纠错表中第一索引值对应的表项和在第一索引值的预设阈值范围内的索引值对应的表项;
从第九表项中确定第十表项对应的前缀与所述第一报文的目的地址匹配,第十表项对应的掩码为第九表项对应的掩码中掩码长度最大的掩码;
确定所述第十表项对应的第五索引值;
根据第一报文的地址信息和查找树结构确定第六索引值;查找树结构为报文规则信息表中通过神经网络模型无法拟合的表项对应的查找树结构;
从第五索引值和第六索引值确定第二目标索引值;
处理模块具体用于:
根据第二目标索引值从动作信息表确定第一报文对应的动作信息。
另一种可能的实现方式中,第一报文对应的动作信息包括端口信息;处理模块902具体用于:
根据端口信息确定第一报文的下一跳路由节点;
将第一报文转发到下一跳路由节点。
另一种可能的实现方式中,处理模块902还用于:
确定神经网络结构;
根据报文规则信息表和神经网络结构进行训练,得到神经网络模型,报文规则信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息,动作信息表用于指示报文规则信息表中每个表项对应的动作信息。
另一种可能的实现方式中,网络设备的控制面向网络设备的数据面发送第一消息,第一消息用于向网络设备的数据面下发或更新神经网络模型。
另一种可能的实现方式中,第一消息包括第二报文;第二报文的报文头部包括神经网络模型使能位、神经网络模型的高度、神经网络模型的宽度、神经网络模型包括的微模型标识,神经网络模型使能位取值为一;第二报文的载荷包括神经网络模型的参数。
另一种可能的实现方式中,处理模块902还用于:
确定报文规则信息表中通过神经网络模型无法拟合的表项,报文规则信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息,动作信息表用于指示报文规则信息表中每个表项对应的动作信息;
根据查找树算法表示通过神经网络模型无法拟合的表项,得到查找树结构。
另一种可能的实现方式中,网络设备的控制面向网络设备的数据面发送第二消息,第二消息用于向网络设备的数据面下发或更新查找树结构。
另一种可能的实现方式中,第二消息包括第三报文;第三报文的报文头部包括查找树使能位、查找树结构的类型、查找树结构中的待更新起始节点标识、以及查找树结构中的待更新终止节点标识,查找树使能位的取值为一;第三报文的载荷包括查找树结构。
另一种可能的实现方式中,网络设备的控制面向网络设备的数据面发送第三消息,第三消息用于向网络设备的数据面下发或更新所述纠错表。
另一种可能的实现方式中,第三消息包括第四报文,第四报文的报文头部包括纠错表使能位、纠错表中待更新表项的起始位置和终止位置,纠错表使能位的取值为一;第四报文的载荷包括待更新表项对应的前缀和掩码。
本申请实施例中,收发模块901,用于获取第一报文;处理模块902,用于根据第一报文的地址信息和神经网络模型确定第一索引值;根据第一索引值从动作信息表确定第一报文对应的动作信息,动作信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息;根据第一报文对应的动作信息对第一报文进行处理。因此,网络设备无需存储大规模的查找树结构,避免查找树结构带来的存储开销。并且,相比于网络设备通过查找树结构确定第一报文对应的动作信息的方式,网络设备采用神经网络模型和动作信息表确定第一报文对应的动作信息的方式更为快速,查找时延较小。
本申请还提供一种网络设备,请参阅图10,本申请实施例网络设备的另一个结构示意图。网络设备可以用于执行图2A、图3A、图4A、图5A、图6A和图7A所示的实施例中网络设备执行的步骤,可以参考上述方法实施例中的相关描述。
网络设备包括处理器1001和存储器1002。可选的,网络设备还包括收发器1003。
一种可能的实现方式中,该处理器1001、存储器1002和收发器1003分别通过总线相连,该存储器中存储有计算机指令。
前述实施例中的收发模块901则具体可以是本实施例中的收发器1003,因此收发器1003的具体实现不再赘述。前述实施例中的处理模块902具体可以是本实施例中的处理器1001,因此该处理器1001的具体实现不再赘述。
本申请实施例还提供一种包括指令的计算机程序产品,当其在计算机上运行时,使得该计算机执行如上述图2A、图3A、图4A、图5A、图6A和图7A所示的实施例的报文处理方法。
本申请实施例还提供了一种计算机可读存储介质,包括计算机指令,当该计算机指令在计算机上运行时,使得计算机执行如上述图2A、图3A、图4A、图5A、图6A和图7A所示的实施例的通信方法。
本申请实施例还提供一种芯片装置,包括处理器,用于与存储器相连,调用该存储器中存储的程序,以使得该处理器执行上图2A、图3A、图4A、图5A、图6A和图7A所示的实施例的报文处理方法。
其中,上述任一处提到的处理器,可以是一个通用中央处理器,微处理器,特定应用集成电路(application-specific integrated circuit,ASIC),或一个或多个用于控制上述图2A、图3A、图4A、图5A、图6A和图7A所示的实施例的报文处理方法的程序执行的集成电路。上述任一处提到的存储器可以为只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)等。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的***,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (31)

  1. 一种报文处理方法,其特征在于,所述方法包括:
    网络设备获取第一报文;
    所述网络设备根据所述第一报文的地址信息和神经网络模型确定第一索引值;
    所述网络设备根据所述第一索引值从动作信息表确定所述第一报文对应的动作信息,所述动作信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息;
    所述网络设备根据所述第一报文对应的动作信息对所述第一报文进行处理。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    所述网络设备根据所述第一报文的地址信息和查找树结构确定第二索引值,所述查找树结构为报文规则信息表中通过所述神经网络模型无法拟合的表项对应的查找树结构;
    所述网络设备从所述第一索引值和所述第二索引值确定第一目标索引值;
    所述网络设备根据所述第一索引值从动作信息表确定所述第一报文对应的动作信息,包括:
    所述网络设备根据所述第一目标索引值从所述动作信息表确定所述第一报文对应的动作信息。
  3. 根据权利要求2所述的方法,其特征在于,所述神经网络模型是根据所述报文规则信息表进行模型训练得到的。
  4. 根据权利要求2或3所述的方法,其特征在于,所述报文规则信息表包括至少一个表项,每个表项对应一个索引值和一个动作信息,所述动作信息表用于指示所述报文规则信息表中的每个表项对应的动作信息。
  5. 根据权利要求2至4中任一项所述的方法,其特征在于,所述报文规则信息表中的第一表项与所述动作信息表中的第二表项对应,所述第一表项和所述第二表项分别包括一个或多个表项;所述第一表项对应的索引值分别与所述第二表项对应的索引值相同,所述第二表项对应的索引值包括所述第一索引值。
  6. 根据权利要求2至4中任一项所述的方法,其特征在于,所述网络设备根据所述第一报文的地址信息和神经网络模型确定第一索引值,包括:
    所述网络设备根据所述第一报文的地址信息和所述神经网络模型确定第三索引值,所述第三索引值为所述报文规则信息表中的第三表项对应的索引值;
    所述网络设备根据所述第三索引值从映射表确定所述动作信息表的第四表项对应的第一索引值,所述第四表项为所述动作信息表中与所述第三表项对应的表项,所述映射表包括所述报文规则信息表中每个表项对应的所述动作信息表的表项的索引值。
  7. The method according to any one of claims 2 to 6, wherein the network device determining a first target index value from the first index value and the second index value comprises:
    the network device determining a mask length corresponding to the first index value and a mask length corresponding to the second index value;
    if the mask length corresponding to the first index value is greater than the mask length corresponding to the second index value, the network device selecting the first index value as the first target index value; or
    if the mask length corresponding to the first index value is less than the mask length corresponding to the second index value, the network device selecting the second index value as the first target index value.
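The selection rule of claim 7 is longest-prefix-match arbitration between the model's candidate and the lookup tree's candidate. A minimal sketch, with an assumed mask-length table; the claim does not prescribe behavior for equal mask lengths, so the tie-break below is an assumption.

```python
def pick_target_index(first_index: int, second_index: int, mask_len: dict) -> int:
    """Choose the first target index value: the candidate whose matched
    entry has the longer mask (more specific prefix) wins."""
    if mask_len[first_index] > mask_len[second_index]:
        return first_index
    if mask_len[first_index] < mask_len[second_index]:
        return second_index
    return first_index  # equal mask lengths: tie-break is an assumption
```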
  8. The method according to any one of claims 2 to 6, wherein the network device determining a first target index value from the first index value and the second index value comprises:
    the network device determining a fifth entry of an error correction table corresponding to the first index value and a sixth entry of the error correction table corresponding to the second index value, wherein
    the error correction table comprises at least one entry, each entry corresponds to one index value and one priority, the entries in the error correction table are in one-to-one correspondence, in order of index value, with the entries in the packet rule information table, each entry in the packet rule information table corresponds to one priority, and the priority corresponding to each entry in the error correction table is the same as the priority of the corresponding entry in the packet rule information table;
    the network device determining, based on the error correction table, the priority corresponding to the fifth entry and the priority corresponding to the sixth entry;
    if the priority corresponding to the fifth entry is higher than the priority corresponding to the sixth entry, the network device selecting the first index value as the first target index value; or
    if the priority corresponding to the fifth entry is lower than the priority corresponding to the sixth entry, the network device selecting the second index value as the first target index value.
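Claim 8 replaces the mask-length comparison with a priority comparison read from the error correction table, whose entries mirror the packet rule information table in index order. A sketch with invented priorities (larger value taken to mean higher priority, and the equal-priority case left as an assumption):

```python
# Error correction table priorities: entry i corresponds, by index order, to
# entry i of the packet rule information table. Values are illustrative.
ec_priority = [10, 40, 30, 20]

def pick_by_priority(first_index: int, second_index: int, priorities: list) -> int:
    """Choose the first target index value by comparing entry priorities."""
    if priorities[first_index] > priorities[second_index]:
        return first_index
    if priorities[first_index] < priorities[second_index]:
        return second_index
    return first_index  # equal priorities: tie-break is an assumption
```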
  9. The method according to claim 1, wherein the method further comprises:
    the network device determining, from an error correction table, prefixes and masks corresponding to seventh entries, wherein
    the error correction table comprises at least one entry, each entry corresponds to one index value and has corresponding address information, the address information corresponding to each entry comprises a prefix and a mask, and the seventh entries comprise the entry in the error correction table corresponding to the first index value and entries corresponding to index values within a preset threshold range of the first index value;
    the network device determining, from the seventh entries, that a prefix corresponding to an eighth entry matches a destination address of the first packet, wherein the mask corresponding to the eighth entry is the mask with the largest mask length among the masks corresponding to the seventh entries; and
    the network device determining a fourth index value corresponding to the eighth entry; and
    the network device determining, based on the first index value, the action information corresponding to the first packet from the action information table comprises:
    the network device determining, based on the fourth index value, the action information corresponding to the first packet from the action information table.
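The correction step of claim 9 searches the error correction table in a window around the predicted index and keeps the longest-prefix entry that matches the destination address. A sketch assuming 32-bit addresses and an error correction table stored as `(prefix, mask_length)` pairs; both layout choices are assumptions for illustration.

```python
def correct_index(first_index: int, ec_table: list, dst_addr: int, threshold: int):
    """Return the fourth index value: among error correction table entries
    within `threshold` of the model's prediction, the index of the matching
    entry with the largest mask length; None if nothing in range matches."""
    lo = max(0, first_index - threshold)
    hi = min(len(ec_table) - 1, first_index + threshold)
    best, best_len = None, -1
    for idx in range(lo, hi + 1):
        prefix, mask_len = ec_table[idx]
        mask = (0xFFFFFFFF << (32 - mask_len)) & 0xFFFFFFFF if mask_len else 0
        if (dst_addr & mask) == (prefix & mask) and mask_len > best_len:
            best, best_len = idx, mask_len
    return best
```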
  10. The method according to claim 1, wherein the method further comprises:
    the network device determining, from an error correction table, prefixes and masks corresponding to ninth entries, wherein
    the error correction table comprises at least one entry, each entry corresponds to one index value and has corresponding address information, the address information corresponding to each entry comprises a prefix and a mask, and the ninth entries comprise the entry in the error correction table corresponding to the first index value and entries corresponding to index values within a preset threshold range of the first index value;
    the network device determining, from the ninth entries, that a prefix corresponding to a tenth entry matches a destination address of the first packet, wherein the mask corresponding to the tenth entry is the mask with the largest mask length among the masks corresponding to the ninth entries;
    the network device determining a fifth index value corresponding to the tenth entry;
    the network device determining a sixth index value based on the address information of the first packet and a lookup tree structure, wherein
    the lookup tree structure is a lookup tree structure corresponding to entries in the packet rule information table that cannot be fitted by the neural network model; and
    the network device determining a second target index value from the fifth index value and the sixth index value; and
    the network device determining, based on the first index value, the action information corresponding to the first packet from the action information table comprises:
    the network device determining, based on the second target index value, the action information corresponding to the first packet from the action information table.
  11. The method according to claim 9 or 10, wherein the entries in the error correction table are in one-to-one correspondence, in order of index value, with entries in a packet rule information table, and the address information corresponding to the entries in the error correction table is the address information corresponding to the corresponding entries in the packet rule information table; and
    the packet rule information table comprises at least one entry, each entry corresponds to one index value and one piece of action information and has corresponding address information, and the action information table is used to indicate the action information corresponding to each entry in the packet rule information table.
  12. The method according to any one of claims 1 to 11, wherein the action information corresponding to the first packet comprises port information, and the network device processing the first packet based on the action information corresponding to the first packet comprises:
    the network device determining a next-hop routing node of the first packet based on the port information; and
    the network device forwarding the first packet to the next-hop routing node.
  13. The method according to any one of claims 1 to 12, wherein before the network device determines the first index value based on the address information of the first packet and the neural network model, the method further comprises:
    the network device determining a neural network structure; and
    the network device performing training based on a packet rule information table and the neural network structure to obtain the neural network model, wherein the packet rule information table comprises at least one entry, each entry corresponds to one index value and one piece of action information, and the action information table is used to indicate the action information corresponding to each entry in the packet rule information table.
  14. The method according to any one of claims 1 to 13, wherein the method further comprises:
    the network device determining entries in a packet rule information table that cannot be fitted by the neural network model, wherein the packet rule information table comprises at least one entry, each entry corresponds to one index value and one piece of action information, and the action information table is used to indicate the action information corresponding to each entry in the packet rule information table; and
    the network device representing, according to a lookup tree algorithm, the entries that cannot be fitted by the neural network model, to obtain a lookup tree structure.
  15. A network device, wherein the network device comprises:
    a transceiver module, configured to obtain a first packet; and
    a processing module, configured to: determine a first index value based on address information of the first packet and a neural network model; determine, based on the first index value, action information corresponding to the first packet from an action information table, wherein the action information table comprises at least one entry, and each entry corresponds to one index value and one piece of action information; and process the first packet based on the action information corresponding to the first packet.
  16. The network device according to claim 15, wherein the processing module is further configured to:
    determine a second index value based on the address information of the first packet and a lookup tree structure, wherein the lookup tree structure is a lookup tree structure corresponding to entries in a packet rule information table that cannot be fitted by the neural network model; and
    determine a first target index value from the first index value and the second index value; and
    the processing module is specifically configured to:
    determine, based on the first target index value, the action information corresponding to the first packet from the action information table.
  17. The network device according to claim 16, wherein the neural network model is obtained by performing model training based on the packet rule information table.
  18. The network device according to claim 16 or 17, wherein the packet rule information table comprises at least one entry, each entry corresponds to one index value and one piece of action information, and the action information table is used to indicate the action information corresponding to each entry in the packet rule information table.
  19. The network device according to any one of claims 16 to 18, wherein a first entry in the packet rule information table corresponds to a second entry in the action information table, and the first entry and the second entry each comprise one or more entries; index values corresponding to the first entry are respectively the same as index values corresponding to the second entry, and the index values corresponding to the second entry comprise the first index value.
  20. The network device according to any one of claims 16 to 18, wherein the processing module is specifically configured to:
    determine a third index value based on the address information of the first packet and the neural network model, wherein the third index value is an index value corresponding to a third entry in the packet rule information table; and
    determine, based on the third index value, the first index value corresponding to a fourth entry of the action information table from a mapping table, wherein the fourth entry is the entry in the action information table that corresponds to the third entry, and the mapping table comprises, for each entry in the packet rule information table, the index value of the corresponding entry of the action information table.
  21. The network device according to any one of claims 16 to 20, wherein the processing module is specifically configured to:
    determine a mask length corresponding to the first index value and a mask length corresponding to the second index value;
    if the mask length corresponding to the first index value is greater than the mask length corresponding to the second index value, select the first index value as the first target index value; or
    if the mask length corresponding to the first index value is less than the mask length corresponding to the second index value, select the second index value as the first target index value.
  22. The network device according to any one of claims 16 to 20, wherein the processing module is specifically configured to:
    determine a fifth entry of an error correction table corresponding to the first index value and a sixth entry of the error correction table corresponding to the second index value, wherein
    the error correction table comprises at least one entry, each entry corresponds to one index value and one priority, the entries in the error correction table are in one-to-one correspondence, in order of index value, with the entries in the packet rule information table, each entry in the packet rule information table corresponds to one priority, and the priority corresponding to each entry in the error correction table is the same as the priority of the corresponding entry in the packet rule information table;
    determine, based on the error correction table, the priority corresponding to the fifth entry and the priority corresponding to the sixth entry;
    if the priority corresponding to the fifth entry is higher than the priority corresponding to the sixth entry, select the first index value as the first target index value; or
    if the priority corresponding to the fifth entry is lower than the priority corresponding to the sixth entry, select the second index value as the first target index value.
  23. The network device according to claim 15, wherein the processing module is further configured to:
    determine, from an error correction table, prefixes and masks corresponding to seventh entries, wherein
    the error correction table comprises at least one entry, each entry corresponds to one index value and has corresponding address information, the address information corresponding to each entry comprises a prefix and a mask, and the seventh entries comprise the entry in the error correction table corresponding to the first index value and entries corresponding to index values within a preset threshold range of the first index value;
    determine, from the seventh entries, that a prefix corresponding to an eighth entry matches a destination address of the first packet, wherein the mask corresponding to the eighth entry is the mask with the largest mask length among the masks corresponding to the seventh entries; and
    determine a fourth index value corresponding to the eighth entry; and
    the processing module is specifically configured to:
    determine, based on the fourth index value, the action information corresponding to the first packet from the action information table.
  24. The network device according to claim 15, wherein the processing module is further configured to:
    determine, from an error correction table, prefixes and masks corresponding to ninth entries, wherein
    the error correction table comprises at least one entry, each entry corresponds to one index value and has corresponding address information, the address information corresponding to each entry comprises a prefix and a mask, and the ninth entries comprise the entry in the error correction table corresponding to the first index value and entries corresponding to index values within a preset threshold range of the first index value;
    determine, from the ninth entries, that a prefix corresponding to a tenth entry matches a destination address of the first packet, wherein the mask corresponding to the tenth entry is the mask with the largest mask length among the masks corresponding to the ninth entries;
    determine a fifth index value corresponding to the tenth entry;
    determine a sixth index value based on the address information of the first packet and a lookup tree structure, wherein
    the lookup tree structure is a lookup tree structure corresponding to entries in the packet rule information table that cannot be fitted by the neural network model; and
    determine a second target index value from the fifth index value and the sixth index value; and
    the processing module is specifically configured to:
    determine, based on the second target index value, the action information corresponding to the first packet from the action information table.
  25. The network device according to claim 23 or 24, wherein the entries in the error correction table are in one-to-one correspondence, in order of index value, with entries in a packet rule information table, and the address information corresponding to the entries in the error correction table is the address information corresponding to the corresponding entries in the packet rule information table; and
    the packet rule information table comprises at least one entry, each entry corresponds to one index value and one piece of action information and has corresponding address information, and the action information table is used to indicate the action information corresponding to each entry in the packet rule information table.
  26. The network device according to any one of claims 15 to 25, wherein the action information corresponding to the first packet comprises port information, and the processing module is specifically configured to:
    determine a next-hop routing node of the first packet based on the port information; and
    forward the first packet to the next-hop routing node.
  27. The network device according to any one of claims 15 to 26, wherein the processing module is further configured to:
    determine a neural network structure; and
    perform training based on a packet rule information table and the neural network structure to obtain the neural network model, wherein the packet rule information table comprises at least one entry, each entry corresponds to one index value and one piece of action information, and the action information table is used to indicate the action information corresponding to each entry in the packet rule information table.
  28. The network device according to any one of claims 15 to 27, wherein the processing module is further configured to:
    determine entries in a packet rule information table that cannot be fitted by the neural network model, wherein the packet rule information table comprises at least one entry, each entry corresponds to one index value and one piece of action information, and the action information table is used to indicate the action information corresponding to each entry in the packet rule information table; and
    represent, according to a lookup tree algorithm, the entries that cannot be fitted by the neural network model, to obtain a lookup tree structure.
  29. A network device, wherein the network device comprises a processor and a memory;
    the memory is configured to store a computer program; and
    the processor is configured to invoke and run the computer program stored in the memory, so that the network device performs the method according to any one of claims 1 to 14.
  30. A computer-readable storage medium, comprising computer instructions, wherein when the computer instructions are run on a computer, the computer is enabled to perform the method according to any one of claims 1 to 14.
  31. A computer program product, comprising computer-executable instructions, wherein when the computer-executable instructions are run on a computer, the computer is enabled to perform the method according to any one of claims 1 to 14.
PCT/CN2022/082138 2021-03-23 2022-03-22 Packet processing method and network device WO2022199559A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22774216.0A EP4311187A1 (en) 2021-03-23 2022-03-22 Packet processing method and network device
US18/471,725 US20240022512A1 (en) 2021-03-23 2023-09-21 Packet Processing Method and Network Device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110309197.7 2021-03-23
CN202110309197.7A CN115134298A (zh) Packet processing method and network device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/471,725 Continuation US20240022512A1 (en) 2021-03-23 2023-09-21 Packet Processing Method and Network Device

Publications (1)

Publication Number Publication Date
WO2022199559A1 true WO2022199559A1 (zh) 2022-09-29

Family

ID=83374592

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/082138 WO2022199559A1 (zh) Packet processing method and network device

Country Status (4)

Country Link
US (1) US20240022512A1 (zh)
EP (1) EP4311187A1 (zh)
CN (1) CN115134298A (zh)
WO (1) WO2022199559A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170041220A1 (en) * 2015-08-04 2017-02-09 Telefonaktiebolaget L M Ericsson (Publ) Method and system for memory allocation in a software-defined networking (sdn) system
CN107995116A (zh) * 2017-11-30 2018-05-04 新华三技术有限公司 报文发送方法及通信设备
CN108156034A (zh) * 2017-12-22 2018-06-12 武汉噢易云计算股份有限公司 一种基于深度神经网络辅助的报文转发方法和报文转发***
CN110442570A (zh) * 2019-06-06 2019-11-12 北京左江科技股份有限公司 一种BitMap高速模糊查找方法
CN110851658A (zh) * 2019-10-12 2020-02-28 天津大学 树形索引数据结构、内容存储池、路由器及树形索引方法
CN111385209A (zh) * 2018-12-28 2020-07-07 华为技术有限公司 一种报文处理方法、报文转发方法、装置及设备
US20200412635A1 (en) * 2019-06-27 2020-12-31 Intel Corporation Routing updates in icn based networks


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MEKINDA LEONCE; MUSCARIELLO LUCA: "Supervised Machine Learning-Based Routing for Named Data Networking", 2016 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), IEEE, 4 December 2016 (2016-12-04), pages 1 - 6, XP033059029, DOI: 10.1109/GLOCOM.2016.7842307 *

Also Published As

Publication number Publication date
EP4311187A1 (en) 2024-01-24
US20240022512A1 (en) 2024-01-18
CN115134298A (zh) 2022-09-30

Similar Documents

Publication Publication Date Title
US11777845B2 (en) Service-function chaining using extended service-function chain proxy for service-function offload
US20200296011A1 (en) Satisfying service level agreement metrics for unknown applications
US11575606B2 (en) Method, apparatus, and system for generating, and processing packets according to, a flow filtering rule
US10237130B2 (en) Method for processing VxLAN data units
US9847940B2 (en) Control method, packet processing device, and storage medium
US10361954B2 (en) Method and apparatus for processing modified packet
EP3292661B1 (en) Packet forwarding
US20180241608A1 (en) Forwarding ethernet packets
US9960995B2 (en) Packet forwarding using a physical unit and a virtual forwarding unit
US11398977B2 (en) Packet classifier
US10313275B2 (en) Packet forwarding
WO2016115698A1 (zh) 数据报文的转发方法、装置及设备
Ha et al. Efficient flow table management scheme in SDN-based cloud computing networks
US10313274B2 (en) Packet forwarding
CN110022263B (zh) 一种数据传输的方法及相关装置
WO2022199559A1 (zh) 报文处理方法以及网络设备
CN108777654B (zh) 报文转发方法及路由设备
CN106453144B (zh) 软件定义网络中的报文处理方法和设备
CN110636005B (zh) 知识中心网络的知识路由方法及装置
CN107948091B (zh) 一种网包分类的方法及装置
CN110505137B (zh) 功能扩展式有线网络装置
WO2023161052A1 (en) Ip packet load balancer based on hashed ip addresses

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (ref document number: 22774216; country of ref document: EP; kind code of ref document: A1)
WWE Wipo information: entry into national phase (ref document number: 2022774216; country of ref document: EP)
NENP Non-entry into the national phase (ref country code: DE)
ENP Entry into the national phase (ref document number: 2022774216; country of ref document: EP; effective date: 20231020)