CN115086253B - Ethernet exchange chip and high-bandwidth message forwarding method - Google Patents


Info

Publication number: CN115086253B (granted publication of application CN202210685527.7A; earlier publication CN115086253A)
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 何志川, 赵仕中, 钱超
Original and current assignee: Suzhou Centec Communications Co Ltd
Legal status: Active (application granted)

Classifications

    • H04L49/351 — Switches specially adapted for local area networks [LAN], e.g. Ethernet switches (under H04L49/00, Packet switching elements; H04L49/35, Switches specially adapted for specific applications)
    • H04L45/54 — Organization of routing tables (under H04L45/00, Routing or path finding of packets in data switching networks)
    • H04L45/745 — Address table lookup; address filtering (under H04L45/74, Address processing for routing)
    • H04L69/22 — Parsing or analysis of headers (under H04L69/00, Network arrangements, protocols or services independent of the application payload)
    All of the above fall under Section H (Electricity), Class H04L (Transmission of digital information, e.g. telegraphic communication).

Abstract

The application provides an Ethernet switching chip and a high-bandwidth message forwarding method. In the switching chip, the outgoing direction processing engine is additionally connected to the incoming direction processing engine through a plurality of loopback channels that form aggregation groups. When an original message arrives, it is parsed by the incoming direction processing engine, passes through the priority scheduling of the cache scheduling engine, and is sent to the outgoing direction processing engine. When the outgoing direction processing engine receives a message whose destination port is a loopback port, it determines a target channel according to the message content and the port information of the source port, loops the message back to the incoming direction processing engine, and the looped-back message is then forwarded normally. By arranging multiple loopback channels that form multiple aggregation groups between the outgoing and incoming direction processing engines, combined with the channel-determination mechanism of the outgoing direction processing engine, the loopback load can be shared and the loopback bandwidth increased while message loopback is still achieved.

Description

Ethernet switching chip and high-bandwidth message forwarding method
Technical Field
The invention relates to the technical field of network communication, in particular to an Ethernet switching chip and a high-bandwidth message forwarding method.
Background
The switching chip is one of the core chips of a switch and determines the switch's performance. The main function of a switch is to provide high-performance, low-latency switching within a subnetwork, and this high-performance switching is accomplished mainly by the switching chip.
In existing switching chips, traffic to be processed can only be looped back through a single designated loopback channel. With this approach, if the bandwidth of the designated loopback channel is small while the bandwidth required by the service is large, looping back solely through that channel degrades service processing efficiency. The structure and loopback processing mode of existing switching chips are therefore not conducive to efficient message processing.
Disclosure of Invention
The invention aims to provide an Ethernet switching chip and a high-bandwidth message forwarding method that can share the loopback load and improve the loopback bandwidth.
Embodiments of the invention may be implemented as follows:
in a first aspect, the present invention provides an Ethernet switching chip, comprising an incoming direction processing engine, a cache scheduling engine, and an outgoing direction processing engine, wherein the incoming direction processing engine is connected to the cache scheduling engine, the cache scheduling engine is connected to the outgoing direction processing engine, and the outgoing direction processing engine is further connected to the incoming direction processing engine through a plurality of loopback channels, the plurality of loopback channels forming a plurality of aggregation groups;
the incoming direction processing engine is used for analyzing the original message when receiving the original message, and sending the original message and the analysis information of the original message to the cache scheduling engine;
the buffer scheduling engine is used for buffering the received message and the analysis information, and sequentially sending the message and the analysis information to the outgoing direction processing engine after priority scheduling processing;
the output direction processing engine is used for determining a target channel from the plurality of loopback channels according to the message content of the message and the port information of the source port when the message is received and the destination port of the message is a loopback port, and looping back the message to the input direction processing engine through the target channel;
the incoming direction processing engine is further used for analyzing the ring-back message when the ring-back message is obtained, and sending the ring-back message and the analysis information of the ring-back message to the cache scheduling engine;
the outgoing direction processing engine is further configured to send the message through the network channel when the message is received and the destination port of the message is a device port of the next hop device.
In an alternative embodiment, each aggregation group includes at least one loop-back channel;
the outgoing direction processing engine is used for:
and determining the service type of the message according to the message content of the message, determining an aggregation group matched with the service type from a plurality of aggregation groups according to the service type, and determining a target channel from loopback channels included in the matched aggregation group according to the message content and port information of a source port.
In an alternative embodiment, the outbound direction processing engine is configured to:
when a message is received, the message is edited, CRC operation is carried out according to the content of the edited message and port information of a source port to obtain a hash value, and a target channel is determined from loop-back channels included in a matched aggregation group according to the hash value.
In an alternative embodiment, the outbound direction processing engine is configured to:
when a message is received, the IP address in the message is replaced, or the MAC address in the message is replaced, or the message is packaged externally, so that the editing of the message is realized.
In an optional embodiment, the parsing information of the original message includes a first destination port, the parsing information of the looped-back message includes a second destination port, and the incoming direction processing engine is configured to:
when an original message is received, search a forwarding table according to the source port of the original message to obtain the first destination port of the original message, wherein the first destination port is a loopback port; and/or
when a looped-back message is received, search a forwarding table according to the source port of the looped-back message to obtain the second destination port of the looped-back message, wherein the second destination port is a device port of the next-hop device;
when the port type of the source port of the original message and/or the looped-back message is a three-layer interface and the MAC address is a routing MAC, the forwarding table searched is a routing table; otherwise, it is a two-layer forwarding table.
In an alternative embodiment, the cache scheduling engine is configured to:
and for the cached messages and the analysis information, acquiring priority information of each message, and sequentially sending each message and the corresponding analysis information to the outgoing direction processing engine after priority scheduling processing according to the order of priority from high to low.
In an optional embodiment, a common loopback channel is arranged between the outgoing direction processing engine and the incoming direction processing engine, and the common loopback channel is one of the multiple loopback channels, or the common loopback channel is a channel outside the multiple loopback channels;
the outgoing direction processing engine is further configured to:
and when the message is received and the destination port of the message is a loopback port, detecting whether the loopback port enables the aggregation group, if the aggregation group is enabled, executing the steps of determining a target channel from the plurality of loopback channels according to the message content and the source port of the message, and looping back the message to the incoming direction processing engine through the target channel, and if the aggregation group is not enabled, looping back the message to the incoming direction processing engine through the common loopback channel.
In a second aspect, the present invention provides a high bandwidth packet forwarding method, applied to an ethernet switch chip, where the ethernet switch chip includes an ingress direction processing engine, a cache scheduling engine, and an egress direction processing engine, where the ingress direction processing engine is connected to the cache scheduling engine, the cache scheduling engine is connected to the egress direction processing engine, and the egress direction processing engine is further connected to the ingress direction processing engine through a plurality of loopback channels, where the plurality of loopback channels form a plurality of aggregation groups, and the method includes:
when the incoming direction processing engine receives an original message, analyzing the original message, and sending the original message and analysis information of the original message to the cache scheduling engine;
the buffer scheduling engine buffers the received message and the analysis information, and sequentially sends the buffer information to the outgoing direction processing engine after priority scheduling processing;
when the outgoing direction processing engine receives a message and the destination port of the message is a loopback port, determining a target channel from the plurality of loopback channels according to the message content of the message and the port information of the source port, and looping back the message to the incoming direction processing engine through the target channel;
when the inbound processing engine obtains the ring-back message, the ring-back message is analyzed, the ring-back message and the analysis information of the ring-back message are sent to the cache scheduling engine, and the ring-back message is sent to the outbound processing engine through the cache scheduling engine;
and the outgoing direction processing engine sends the message through a network channel when receiving the message and the destination port of the message is the equipment port of the next-hop equipment.
In an alternative embodiment, each aggregation group includes at least one loop-back channel;
the step of determining a target channel from the plurality of loopback channels according to the message content and the source port of the message comprises the following steps:
and determining the service type of the message according to the message content of the message, determining an aggregation group matched with the service type from a plurality of aggregation groups according to the service type, and determining a target channel from loopback channels included in the matched aggregation group according to the message content and port information of a source port.
In an optional embodiment, the step of determining the target channel from the loopback channels included in the matched aggregation group according to the message content and the port information of the source port includes:
when a message is received, the message is edited, CRC operation is carried out according to the content of the edited message and port information of a source port to obtain a hash value, and a target channel is determined from loop-back channels included in a matched aggregation group according to the hash value.
The beneficial effects of the embodiment of the invention include, for example:
the application provides an Ethernet switching chip and a high-bandwidth message forwarding method, wherein the switching chip comprises an incoming direction processing engine, a cache scheduling engine and an outgoing direction processing engine which are sequentially connected, and the outgoing direction processing engine is also connected with the incoming direction processing engine through a plurality of loopback channels forming an aggregation group. When the original message arrives, the message is sent to the outgoing direction processing engine after being analyzed by the incoming direction processing engine and the priority scheduling processing of the cache scheduling engine, and the outgoing direction processing engine determines a target channel from a plurality of loopback channels according to the message content of the message and the port information of the source port under the condition that the message is received and the destination port is the loopback port, loops the message back to the incoming direction processing engine, and then normally forwards the loopback message. In the scheme, a plurality of loopback channels forming an aggregation group are arranged between the outgoing direction processing engine and the incoming direction processing engine, and a channel determining mechanism of the outgoing direction processing engine is combined, so that the loopback load can be shared and the loopback bandwidth can be improved on the basis of successfully realizing message loopback.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a block diagram of an Ethernet switch chip in the prior art;
fig. 2 is one of the block diagrams of the ethernet switch chip provided in the embodiment of the present application;
FIG. 3 is a second block diagram of an Ethernet switch chip according to an embodiment of the present disclosure;
fig. 4 is a third block diagram of an ethernet switch chip according to an embodiment of the present disclosure;
fig. 5 is a flowchart of a high bandwidth packet forwarding method according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
Furthermore, the terms "first," "second," and the like, if any, are used merely for distinguishing between descriptions and not for indicating or implying a relative importance.
It should be noted that the features of the embodiments of the present invention may be combined with each other without conflict.
Fig. 1 is a block diagram of an Ethernet switch chip used in the prior art. As shown in fig. 1, the Ethernet switch chip includes an incoming direction processing engine, a cache scheduling engine, and an outgoing direction processing engine, with the outgoing and incoming direction processing engines connected through a single loopback channel.
With this prior-art structure, messages of a given service type can only be looped back through the single loopback channel. When the bandwidth required by the messages is large, for example 200G, while the bandwidth of the single loopback channel is smaller, for example 100G, the single channel cannot meet the service requirement, which degrades the loopback processing efficiency.
Based on the above research, the application provides an ethernet switching chip, in which a plurality of loopback channels forming an aggregation group are arranged between an outgoing direction processing engine and an incoming direction processing engine, and a channel determining mechanism of the outgoing direction processing engine is combined, so that the loopback load can be shared and the loopback bandwidth can be improved on the basis of successfully realizing message loopback.
Referring to fig. 2, a block diagram of an Ethernet switch chip according to an embodiment of the present application is shown. In this embodiment, the Ethernet switch chip includes an incoming direction processing engine (IPE), a cache scheduling engine (BSR), and an outgoing direction processing engine (EPE). The incoming direction processing engine is connected to the cache scheduling engine, and the cache scheduling engine is connected to the outgoing direction processing engine. The outgoing direction processing engine is also connected to the incoming direction processing engine through a plurality of loopback channels, which form a plurality of aggregation groups (Agg). Three loopback channels are shown schematically in fig. 2, but the number is not limited in practical applications.
In this embodiment, when receiving an original message, the inbound direction processing engine parses the original message, and sends the original message and parsing information of the original message to the cache scheduling engine. The parsing information of the original message includes a first destination port of the original message, and the first destination port is a loopback port.
The cache scheduling engine is used for buffering the received message and its parsing information and, after priority scheduling processing, sequentially sending them to the outgoing direction processing engine. The cache scheduling engine may buffer a received original message or a received looped-back message. As the incoming direction processing engine continuously sends messages to the cache scheduling engine, the cache scheduling engine buffers the messages and schedules them by priority before sending them to the outgoing direction processing engine for processing.
The outgoing direction processing engine is used for, when a message is received and its destination port is a loopback port, determining a target channel from the plurality of loopback channels according to the message content and the port information of the source port, and looping the message back to the incoming direction processing engine through the target channel. Since the first destination port of the original message is a loopback port, when the outgoing direction processing engine receives the message for the first time, i.e., receives the original message, it can determine the target channel from the original message and loop it back; the message looped back to the incoming direction processing engine is called the looped-back message.
The incoming direction processing engine is further configured to parse the looped-back message when it is obtained, and send the looped-back message and its parsing information to the cache scheduling engine. The parsing information of the looped-back message includes a second destination port, which is a device port of the next-hop device.
As can be seen from the above, the cache scheduling engine buffers all received messages, including the original message received the first time and the looped-back message received the second time, and after priority scheduling processing sequentially sends the messages and their parsing information to the outgoing direction processing engine.
The outgoing direction processing engine is also used for sending the message through the network channel when the message is received and its destination port is a device port of the next-hop device. Since the destination port of the looped-back message is a device port of the next-hop device, the outgoing direction processing engine sends the message out through the network channel when the received message is the looped-back message.
In the Ethernet switch chip provided in this embodiment, a plurality of loopback channels are arranged between the outgoing direction processing engine and the incoming direction processing engine, and these loopback channels are bound into aggregation groups. Combined with the channel-determination mechanism of the outgoing direction processing engine, service traffic can be shared across multiple loopback channels, increasing the loopback bandwidth and sharing the loopback load while still achieving message loopback.
The Ethernet switching chip provided in this embodiment is particularly suitable for Ethernet environments with demanding data-transmission requirements, such as data center networks and industrial networks.
In this embodiment, when the original message arrives, the ingress direction processing engine may search the forwarding table according to the source port of the original message when receiving the original message, so as to obtain the first destination port of the original message. From the above, the first destination port of the original message is a loopback port.
In the process, if the port type of the source port of the original message is a three-layer interface and the MAC address is a routing MAC, the searched forwarding table is a routing table, otherwise, the searched forwarding table is a two-layer forwarding table.
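The table-selection rule above can be sketched as a small function. The parameter names and the `"l3"` port-type tag are assumptions for illustration; the patent specifies only the condition itself: use the routing table when the source port is a three-layer interface and the MAC address is a routing MAC, otherwise use the two-layer forwarding table.

```python
def look_up_destination(src_port_type, mac, router_macs,
                        routing_table, fdb_table, key):
    """Select the forwarding table per the rule above: route only when the
    source port is a three-layer interface AND the MAC is a routing MAC;
    otherwise fall back to the two-layer forwarding table (FDB)."""
    if src_port_type == "l3" and mac in router_macs:
        return routing_table.get(key)
    return fdb_table.get(key)
```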
The incoming direction processing engine sends the original message and the parsing information, including the destination port, to the cache scheduling engine. To perform priority scheduling of the buffered messages and parsing information, the cache scheduling engine obtains the priority information of each message, schedules them in order from high priority to low, and sequentially sends each message and its parsing information to the outgoing direction processing engine.
In this embodiment, the priority information of each message may be preset, for example, the priority may be set according to the service type of the message, the size of the message, and so on.
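A hedged sketch of the cache scheduling engine's dispatching, assuming a simple highest-priority-first policy with FIFO order among equal priorities (the patent states only that messages are dispatched in order of priority from high to low; the class name is illustrative):

```python
import heapq

class CacheSchedulingEngine:
    """Buffers (message, parse_info, priority) tuples and dispatches them
    from highest priority to lowest, preserving arrival order within a
    priority level."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker: FIFO among equal priorities

    def buffer(self, message, parse_info, priority):
        # negate priority so the min-heap pops the highest priority first
        heapq.heappush(self._heap, (-priority, self._seq, message, parse_info))
        self._seq += 1

    def dispatch(self):
        while self._heap:
            _, _, message, parse_info = heapq.heappop(self._heap)
            yield message, parse_info
```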
In this embodiment, a plurality of loopback channels between the outbound direction processing engine and the inbound direction processing engine are divided into a plurality of aggregation groups, each including at least one loopback channel. Wherein each aggregation group has a group id.
After receiving a message and its parsing information from the cache scheduling engine, the outgoing direction processing engine can learn that the destination port is a loopback port. A common loopback channel is also provided between the outgoing direction processing engine and the incoming direction processing engine; the common loopback channel is either one of the plurality of loopback channels, as shown in fig. 3, or a channel outside the plurality of loopback channels, as shown in fig. 4.
When the outgoing direction processing engine receives a message whose destination port is a loopback port, it first detects whether the loopback port has the aggregation group enabled. If enabled, it determines a target channel from the plurality of loopback channels according to the message content and the source port, and loops the message back to the incoming direction processing engine through the target channel. If not enabled, the message is looped back to the incoming direction processing engine through the common loopback channel.
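The enable check and fallback can be expressed compactly; `select_from_group` stands in for whatever hash-based selection the aggregation group uses, and all names here are illustrative:

```python
def pick_loopback_channel(agg_enabled, select_from_group, message, common_channel):
    """If the loopback port has its aggregation group enabled, pick the
    target channel via the group's selection function; otherwise fall back
    to the common loopback channel."""
    if agg_enabled:
        return select_from_group(message)
    return common_channel
```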
In this embodiment, a common loopback engine is further disposed between the outbound direction processing engine and the inbound direction processing engine, so that the loopback processing of the message can be successfully implemented under the condition that the aggregation group is not enabled.
In this embodiment, when the loopback port has the aggregation group enabled, the outgoing direction processing engine determines the service type of the message according to the message content of the original message, determines the aggregation group matching that service type from the plurality of aggregation groups, and determines the target channel from the loopback channels included in the matched aggregation group according to the message content and the source port.
In this embodiment, the messages of different service types are looped back through different aggregation groups, for example, a message of one service type corresponds to one aggregation group, or a message of multiple service types corresponds to one aggregation group. In short, messages belonging to the same service type can be looped back through the same aggregation group. Thus, the loop bandwidth can be improved through the plurality of loop channels aggregated in the aggregation group, so that the message flow burden is shared.
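A sketch of the service-type-to-aggregation-group mapping; the service-type names below are hypothetical, since the patent only requires that one or several service types map to one aggregation group:

```python
# Hypothetical mapping: "storage" and "video" share aggregation group 1,
# "tunnel" traffic uses group 2; unknown types fall back to a default group.
SERVICE_TO_AGG_GROUP = {"storage": 1, "video": 1, "tunnel": 2}

def match_aggregation_group(service_type, default_group=0):
    """Return the aggregation group id matched to a message's service type."""
    return SERVICE_TO_AGG_GROUP.get(service_type, default_group)
```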
In this embodiment, after the matched aggregation group is determined based on the service type of the message, a target channel must still be determined from the loopback channels included in that group. Optionally, the outgoing direction processing engine may edit the message when it is received, perform a CRC (Cyclic Redundancy Check) operation on the edited message content and the port information of the source port to obtain a hash value, and determine the target channel from the loopback channels of the matched aggregation group according to the hash value.
In this embodiment, when editing the message, the outgoing direction processing engine may replace the IP address in the message, or replace the MAC address, or add an outer encapsulation to the message.
The edited message content and the port information of the source port yield a hash value after the CRC operation, and the aggregation group includes a plurality of loopback channels. In-group numbers can be assigned to the loopback channels, a remainder (modulo) calculation can be performed on the hash value, and the target channel is determined by matching the remainder against the channel numbers.
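A minimal sketch of this hash-and-remainder selection using Python's `zlib.crc32` as the CRC; the patent does not fix a particular CRC polynomial, and the source-port byte encoding here is an assumption for illustration:

```python
import zlib

def select_target_channel(edited_message: bytes, src_port: int, channels):
    """CRC over the edited message content plus source-port information,
    then take the remainder against the group's channel count to pick the
    in-group channel."""
    key = edited_message + src_port.to_bytes(2, "big")
    hash_value = zlib.crc32(key)
    return channels[hash_value % len(channels)]
```

Because the hash is deterministic, messages of the same flow (same edited content and source port) always land on the same channel of the aggregation group.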
Because messages of the same service type are similar in message content and source-port information, computing a hash over these fields and selecting the target channel from the hash makes it likely that messages of the same service type are distributed to the same loopback channel for loopback processing, making the loopback process more consistent.
Once the target channel is determined, the outgoing direction processing engine loops the message back to the incoming direction processing engine through it. After receiving the looped-back message from the target channel, the incoming direction processing engine parses it; as before, when the port type of the source port is a three-layer interface and the MAC address is a routing MAC, the destination port is obtained by searching the routing table, otherwise by searching the FDB table. At this point the lookup yields normal forwarding behavior, and the destination port is a device port of the next-hop device. The incoming direction processing engine sends the message and the parsing information containing the destination port to the cache scheduling engine.
When the cache scheduling engine receives the message and its analysis information for the second time, it likewise buffers them and, after priority scheduling processing, sends them to the outbound processing engine.
When the outgoing direction processing engine receives the message for the second time, the destination port it obtains is the device port of the next-hop device. The message can then be edited, and the edited message is sent out through the network channel of the switching chip.
The embodiment of the application also provides a high-bandwidth message forwarding method which is applied to the Ethernet switching chip.
Referring to fig. 5, which is a flowchart of the high-bandwidth packet forwarding method according to this embodiment, the method may be implemented by the Ethernet switching chip described above. The specific flow shown in fig. 5 is explained below.
S101, when the incoming direction processing engine receives an original message, analyzing the original message, and sending the original message and analysis information of the original message to the cache scheduling engine.
S102, the buffer scheduling engine buffers the received message and the analysis information, and sends the message and the analysis information to the outgoing direction processing engine in sequence after priority scheduling processing.
S103, when the outgoing direction processing engine receives a message and the destination port of the message is a loopback port, determining a target channel from the plurality of loopback channels according to the message content of the message and the port information of the source port, and looping back the message to the incoming direction processing engine through the target channel.
S104, when the inbound processing engine obtains the ring-back message, it parses the ring-back message, sends the ring-back message and the analysis information of the ring-back message to the cache scheduling engine, and the ring-back message is forwarded to the outbound processing engine through the cache scheduling engine.
S105, when the outgoing direction processing engine receives the message and the destination port of the message is the device port of the next-hop device, the message is sent out through the network channel.
The high-bandwidth message forwarding method provided in this embodiment is applied to the Ethernet switching chip, and message loopback is performed through the plurality of loopback channels, forming a plurality of aggregation groups, between the outgoing direction processing engine and the incoming direction processing engine in the chip. The outbound processing engine can determine the target channel from the plurality of loopback channels based on the message content of the message and the port information of the source port, thereby realizing loopback of the message. On the basis of a high-bandwidth aggregation group, this forwarding method successfully realizes message loopback through the channel-determination mechanism, and can improve the loopback bandwidth and share the loopback load while avoiding complex loopback logic processing.
In one possible implementation, the outbound direction processing engine, when determining the target channel, may be implemented by:
and determining the service type of the message according to the message content of the message, determining an aggregation group matched with the service type from a plurality of aggregation groups according to the service type, and determining a target channel from loopback channels included in the matched aggregation group according to the message content and port information of a source port.
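The two-stage selection described above — service type picks the aggregation group, then hashing picks a channel inside it — can be sketched as follows. The service classifier, the group table, and all field names here are hypothetical; a real chip derives the service type from parsed header fields and configured tables.

```python
import zlib

# Hypothetical mapping from service type to the channel numbers of its group
AGGREGATION_GROUPS = {
    "unicast":   [0, 1, 2, 3],
    "multicast": [4, 5],
}

def classify_service(packet: dict) -> str:
    """Toy classifier: the I/G bit of the destination MAC marks multicast."""
    return "multicast" if packet["dst_mac"][0] & 1 else "unicast"

def determine_target_channel(packet: dict, src_port_info: bytes) -> int:
    """Match an aggregation group by service type, then hash into it."""
    group = AGGREGATION_GROUPS[classify_service(packet)]
    hash_value = zlib.crc32(packet["payload"] + src_port_info)
    return group[hash_value % len(group)]
```

Partitioning channels by service type keeps, for example, multicast loopback traffic from competing for bandwidth with unicast loopback traffic.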
In one possible implementation manner, the step of determining, by the outbound processing engine, the target channel in the matched aggregation group may be implemented by:
when a message is received, the message is edited, CRC operation is carried out according to the content of the edited message and port information of a source port to obtain a hash value, and a target channel is determined from loop-back channels included in a matched aggregation group according to the hash value.
In one possible implementation, the step of editing the message by the outbound processing engine may be implemented by:
when a message is received, the IP address in the message is replaced, or the MAC address in the message is replaced, or the message is packaged externally, so that the editing of the message is realized.
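The three editing actions can be illustrated with a small sketch. The dictionary representation and action names are assumptions for illustration; the chip edits raw header fields in hardware.

```python
def edit_message(packet: dict, action: str, **kwargs) -> dict:
    """Illustrative egress edit: replace the IP address, replace the MAC
    address, or add an outer encapsulation around the original message.
    Returns a new dict so the original message is left untouched."""
    edited = dict(packet)
    if action == "replace_ip":
        edited["dst_ip"] = kwargs["new_ip"]
    elif action == "replace_mac":
        edited["dst_mac"] = kwargs["new_mac"]
    elif action == "encapsulate":
        # Outer encapsulation wraps the whole original message
        edited = {"outer": kwargs["outer_header"], "inner": packet}
    return edited
```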
In one possible implementation manner, the parsing information of the original message includes a first destination port, the parsing information of the ring-back message includes a second destination port, and the step of parsing the original message and/or the ring-back message by the inbound processing engine may be implemented by:
when an original message is received, searching a forwarding table according to a source port of the original message to obtain a first destination port of the original message, wherein the first destination port is a loopback port; and/or
When a ring-back message is received, searching a forwarding table according to a source port of the ring-back message to obtain a second destination port of the ring-back message, wherein the second destination port is a device port of next-hop equipment;
and when the port type of the source port of the original message and/or the ring-back message is a three-layer interface and the MAC address is a routing MAC, the searched forwarding table is a routing table, otherwise, the searched forwarding table is a two-layer forwarding table.
In one possible implementation manner, the step of performing the priority scheduling processing by the cache scheduling engine may be implemented by:
and for the cached messages and the analysis information, acquiring priority information of each message, and sequentially sending each message and the corresponding analysis information to the outgoing direction processing engine after priority scheduling processing according to the order of priority from high to low.
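The buffer-then-dispatch behavior can be sketched with a priority heap. The class and method names are illustrative; hardware schedulers typically use per-priority queues rather than a heap, but the high-to-low dispatch order is the same.

```python
import heapq
from itertools import count

class CacheScheduler:
    """Minimal sketch of the cache scheduling engine: buffer messages
    with their analysis information, then release them in priority
    order, highest priority first."""

    def __init__(self):
        self._heap = []
        self._seq = count()  # preserves FIFO order among equal priorities

    def buffer(self, packet, parse_info, priority: int):
        # heapq is a min-heap, so negate priority to pop highest first
        heapq.heappush(self._heap, (-priority, next(self._seq), packet, parse_info))

    def dispatch(self):
        """Yield (packet, parse_info) pairs toward the outbound engine."""
        while self._heap:
            _, _, packet, parse_info = heapq.heappop(self._heap)
            yield packet, parse_info
```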
In one possible implementation manner, a common loopback channel is provided between the outbound direction processing engine and the inbound direction processing engine, the common loopback channel is one of the plurality of loopback channels, or the common loopback channel is a channel outside the plurality of loopback channels, and the message forwarding method may further include the following steps:
and the outgoing direction processing engine detects whether a loopback port enables an aggregation group when a message is received and a destination port of the message is a loopback port, if the aggregation group is enabled, the method executes the steps of determining a target channel from the plurality of loopback channels according to the message content and the source port of the message, looping the message back to the incoming direction processing engine through the target channel, and if the aggregation group is not enabled, looping the message back to the incoming direction processing engine through the common loopback channel.
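The fallback decision above can be summarized in a short sketch, assuming hypothetical names for the enable flag and the channel selector:

```python
def choose_loopback_channel(packet, dst_is_loopback: bool,
                            aggregation_enabled: bool,
                            select_channel, common_channel: int):
    """Outbound engine's channel decision (all names are assumptions).

    If the destination is a loopback port and the port has its
    aggregation group enabled, hash-select a channel from the group;
    otherwise fall back to the common loopback channel."""
    if not dst_is_loopback:
        return None  # normal egress through the network channel instead
    if aggregation_enabled:
        return select_channel(packet)
    return common_channel
```

The common loopback channel thus preserves backward-compatible single-channel loopback for ports that do not enable aggregation.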
For the description of the method steps of the high bandwidth packet forwarding method provided in this embodiment, reference may be made to the description related to the ethernet switch chip in the foregoing embodiment, which is not described in detail herein.
In summary, the embodiment of the present invention provides an Ethernet switching chip and a high-bandwidth packet forwarding method, where the switching chip includes an ingress direction processing engine, a cache scheduling engine, and an egress direction processing engine that are sequentially connected, and the egress direction processing engine is further connected to the ingress direction processing engine through a plurality of loopback channels forming aggregation groups. When an original message arrives, it is parsed by the incoming direction processing engine, passed through the priority scheduling of the cache scheduling engine, and sent to the outgoing direction processing engine. When the destination port of the message is a loopback port, the outgoing direction processing engine determines a target channel from the plurality of loopback channels according to the message content and the port information of the source port, loops the message back to the incoming direction processing engine, and the looped-back message is then forwarded normally. In this scheme, a plurality of loopback channels forming a plurality of aggregation groups are arranged between the outgoing direction processing engine and the incoming direction processing engine, and combined with the channel-determination mechanism of the outgoing direction processing engine, the loopback load can be shared and the loopback bandwidth improved while successfully realizing message loopback.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present invention should be included in the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. An ethernet switching chip, comprising: the system comprises an input direction processing engine, a cache scheduling engine and an output direction processing engine, wherein the input direction processing engine is connected with the cache scheduling engine, the cache scheduling engine is connected with the output direction processing engine, the output direction processing engine is also connected with the input direction processing engine through a plurality of loopback channels, and the loopback channels form a plurality of aggregation groups;
the incoming direction processing engine is used for analyzing the original message when receiving the original message, and sending the original message and the analysis information of the original message to the cache scheduling engine;
the buffer scheduling engine is used for buffering the received message and the analysis information, and sequentially sending the message and the analysis information to the outgoing direction processing engine after priority scheduling processing;
the output direction processing engine is used for determining a target channel from the plurality of loopback channels according to the message content of the message and the port information of the source port when the message is received and the destination port of the message is a loopback port, and looping back the message to the input direction processing engine through the target channel;
the incoming direction processing engine is further used for analyzing the ring-back message when the ring-back message is obtained, and sending the ring-back message and the analysis information of the ring-back message to the cache scheduling engine;
the outgoing direction processing engine is further used for sending the message through a network channel when the message is received and the destination port of the message is the device port of the next-hop device;
each aggregation group comprises at least one loopback channel, the outgoing direction processing engine is used for determining the service type of a message according to the message content of the message, determining an aggregation group matched with the service type from a plurality of aggregation groups according to the service type, and determining a target channel from the loopback channels included in the matched aggregation groups according to the message content and the port information of a source port;
the outgoing direction processing engine is specifically configured to edit a message when the message is received, perform CRC operation according to the content of the edited message and port information of a source port to obtain a hash value, and determine a target channel from loopback channels included in a matched aggregation group according to the hash value.
2. The ethernet switching chip of claim 1, wherein said outbound processing engine is configured to:
when a message is received, the IP address in the message is replaced, or the MAC address in the message is replaced, or the message is packaged externally, so that the editing of the message is realized.
3. The ethernet switch chip of claim 1, wherein the parsing information of the original message includes a first destination port, the parsing information of the ring-back message includes a second destination port, and the ingress direction processing engine is configured to:
when an original message is received, searching a forwarding table according to a source port of the original message to obtain a first destination port of the original message, wherein the first destination port is a loopback port; and/or
When a ring-back message is received, searching a forwarding table according to a source port of the ring-back message to obtain a second destination port of the ring-back message, wherein the second destination port is a device port of next-hop equipment;
and when the port type of the source port of the original message and/or the ring-back message is a three-layer interface and the MAC address is a routing MAC, the searched forwarding table is a routing table, otherwise, the searched forwarding table is a two-layer forwarding table.
4. The ethernet switch chip of claim 1, wherein the cache scheduling engine is configured to:
and for the cached messages and the analysis information, acquiring priority information of each message, and sequentially sending each message and the corresponding analysis information to the outgoing direction processing engine after priority scheduling processing according to the order of priority from high to low.
5. The ethernet switch chip of claim 1, wherein a normal loopback channel is provided between the outbound processing engine and the inbound processing engine, the normal loopback channel being one of the plurality of loopback channels, or the normal loopback channel being a channel other than the plurality of loopback channels;
the outgoing direction processing engine is further configured to:
and when the message is received and the destination port of the message is a loopback port, detecting whether the loopback port enables the aggregation group, if the aggregation group is enabled, executing the steps of determining a target channel from the plurality of loopback channels according to the message content and the source port of the message, and looping back the message to the incoming direction processing engine through the target channel, and if the aggregation group is not enabled, looping back the message to the incoming direction processing engine through the common loopback channel.
6. A high-bandwidth message forwarding method, characterized in that it is applied to an Ethernet switching chip, the Ethernet switching chip comprising an incoming direction processing engine, a cache scheduling engine and an outgoing direction processing engine, wherein the incoming direction processing engine is connected with the cache scheduling engine, the cache scheduling engine is connected with the outgoing direction processing engine, and the outgoing direction processing engine is further connected with the incoming direction processing engine through a plurality of loopback channels, the plurality of loopback channels forming a plurality of aggregation groups, the method comprising:
when the incoming direction processing engine receives an original message, analyzing the original message, and sending the original message and analysis information of the original message to the cache scheduling engine;
the cache scheduling engine buffers the received message and the analysis information, and sequentially sends them to the outgoing direction processing engine after priority scheduling processing;
when the outgoing direction processing engine receives a message and the destination port of the message is a loopback port, determining a target channel from the plurality of loopback channels according to the message content of the message and the port information of the source port, and looping back the message to the incoming direction processing engine through the target channel;
when the inbound processing engine obtains the ring-back message, the ring-back message is analyzed, the ring-back message and the analysis information of the ring-back message are sent to the cache scheduling engine, and the ring-back message is sent to the outbound processing engine through the cache scheduling engine;
the outgoing direction processing engine sends the message through a network channel when receiving the message and the destination port of the message is the equipment port of the next-hop equipment;
each aggregation group comprises at least one loopback channel, and the step of determining a target channel from the plurality of loopback channels according to the message content and the source port of the message comprises the following steps:
determining the service type of a message according to the message content of the message, determining an aggregation group matched with the service type from a plurality of aggregation groups according to the service type, and determining a target channel from loopback channels included in the matched aggregation group according to the message content and port information of a source port;
the step of determining a target channel from the loopback channels included in the matched aggregation group according to the message content and the port information of the source port includes:
when a message is received, the message is edited, CRC operation is carried out according to the content of the edited message and port information of a source port to obtain a hash value, and a target channel is determined from loop-back channels included in a matched aggregation group according to the hash value.
CN202210685527.7A 2022-06-16 2022-06-16 Ethernet exchange chip and high-bandwidth message forwarding method Active CN115086253B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210685527.7A CN115086253B (en) 2022-06-16 2022-06-16 Ethernet exchange chip and high-bandwidth message forwarding method

Publications (2)

Publication Number Publication Date
CN115086253A CN115086253A (en) 2022-09-20
CN115086253B true CN115086253B (en) 2024-03-29


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101534253A (en) * 2009-04-09 2009-09-16 中兴通讯股份有限公司 Message forwarding method and device
CN103368775A (en) * 2013-07-09 2013-10-23 杭州华三通信技术有限公司 Traffic backup method and core switching equipment
CN108134747A (en) * 2017-12-22 2018-06-08 盛科网络(苏州)有限公司 The realization method and system of Ethernet switching chip, its multicast mirror image flow equalization
CN108683617A (en) * 2018-04-28 2018-10-19 新华三技术有限公司 Message diversion method, device and shunting interchanger
JP6436262B1 (en) * 2018-07-03 2018-12-12 日本電気株式会社 Network management apparatus, network system, method, and program
US10171368B1 (en) * 2013-07-01 2019-01-01 Juniper Networks, Inc. Methods and apparatus for implementing multiple loopback links
WO2022105289A1 (en) * 2020-11-23 2022-05-27 北京锐安科技有限公司 Flow forwarding method, service card and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070183415A1 (en) * 2006-02-03 2007-08-09 Utstarcom Incorporated Method and system for internal data loop back in a high data rate switch

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A single-port loop detection technique inside network devices; Yang Yong et al.; Communications World; 2020-03-25 (Issue 03); full text *
Probe test design based on multi-link aggregation over CE networks; Zhang Zhengdong; Information & Computer (Theoretical Edition); 2019-03-15 (Issue 05); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant