CN112910794B - Load balancing system for multi-path E1 networking - Google Patents

Load balancing system for multi-path E1 networking

Info

Publication number
CN112910794B
CN112910794B (application CN202110022218.7A)
Authority
CN
China
Prior art keywords
module
data
networking
path
link
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110022218.7A
Other languages
Chinese (zh)
Other versions
CN112910794A (en
Inventor
黄治朔
崔俊彬
张磊
赵炜
刘惠颖
魏勇
袁欣雨
刘辛彤
付强
穆春宇
成思远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GHT CO Ltd
State Grid Corp of China SGCC
Information and Telecommunication Branch of State Grid Hebei Electric Power Co Ltd
Original Assignee
GHT CO Ltd
State Grid Corp of China SGCC
Information and Telecommunication Branch of State Grid Hebei Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GHT CO Ltd, State Grid Corp of China SGCC, Information and Telecommunication Branch of State Grid Hebei Electric Power Co Ltd filed Critical GHT CO Ltd
Priority to CN202110022218.7A priority Critical patent/CN112910794B/en
Publication of CN112910794A publication Critical patent/CN112910794A/en
Application granted granted Critical
Publication of CN112910794B publication Critical patent/CN112910794B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00: Data switching networks
    • H04L 12/28: Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/46: Interconnection of networks
    • H04L 12/4604: LAN interconnection over a backbone network, e.g. Internet, Frame Relay
    • H04L 12/462: LAN interconnection over a bridge based backbone
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a load balancing system for a multi-path E1 networking, which comprises a lower-layer Ethernet networking, an FPGA chip, an Ethernet chip and an upper-layer Ethernet networking, wherein a downlink port of the FPGA chip is connected with the lower-layer Ethernet networking, and an uplink port of the FPGA chip is connected with the upper-layer Ethernet networking through the Ethernet chip; the FPGA chip is configured with a bus bridge module. The bus bridge module is used for processing the data requests of multiple E1 links by adopting a preset processing strategy; the processing strategy is that, after the data receiving request of one E1 link has been processed, the module jumps to the data receiving request of the next E1 link, until the data receiving request of every E1 link has been traversed. By applying a load balancing design to the bridge module of the core network IPOE interface and adopting a non-blocking state machine based on fast jumping, the invention avoids IPOE data packet congestion at the tandem point of the core network in concurrent-service scenarios.

Description

Load balancing system for multi-path E1 networking
Technical Field
The invention relates to the technical field of communication, in particular to a load balancing system for multi-path E1 networking.
Background
In 2M networking based on E1 links, scenarios that multiplex multiple E1 links are very common; for example, the secondary devices of a provincial core network are networked with the local office directions, and a local core network is likewise networked with its own office directions. A 2M networking that multiplexes multiple E1 links over an IPOE data channel inevitably produces scenarios with multiple concurrent services.
When the service load in multiple office directions is high, and no load balancing design is applied at the IPOE receiving point of the core network, then once a certain office direction is fully loaded, the receiving and tandem point of the core network keeps processing the requests of that heavily loaded office direction. If other office directions send requests at this time, those requests are left pending and can never be processed.
Disclosure of Invention
The embodiment of the invention aims to provide a load balancing system for a multi-path E1 networking, which solves the problem of IPOE data packet congestion at the tandem point of the core network in concurrent-service scenarios by applying a load balancing design to the bridge module of the core network IPOE interface and adopting a non-blocking state machine based on fast jumping.
In order to achieve the above object, an embodiment of the present invention provides a load balancing system for a multi-path E1 networking, including a lower-layer Ethernet networking, an FPGA chip, an Ethernet chip, and an upper-layer Ethernet networking, wherein a downlink port of the FPGA chip is connected to the lower-layer Ethernet networking, and an uplink port of the FPGA chip is connected to the upper-layer Ethernet networking through the Ethernet chip; the FPGA chip is configured with a bus bridge module, wherein:
the bus bridge module is used for processing the data requests of multiple E1 links by adopting a preset processing strategy; the processing strategy is that, after the data receiving request of one E1 link has been processed, the module jumps to the data receiving request of the next E1 link, until the data receiving request of every E1 link has been traversed.
Preferably, the FPGA chip is further configured with an encoding and decoding module, an analysis and conversion module, and a buffer module, wherein:
the coding and decoding module is used for receiving the differential signal sent by the lower layer Ethernet network, decoding the differential signal into a binary code stream and sending the binary code stream to the analysis and conversion module;
the analysis conversion module is used for carrying out serial-parallel conversion on the received binary code stream, analyzing the binary code stream into data of an Avalon-ST bus protocol and sending the data to the buffer module;
and the buffer module is used for reading the data of the Avalon-ST bus protocol and performing aggregation bridging in data packet format.
Preferably, the system is further configured such that:
the buffer module is further configured to receive data of the Avalon-ST bus protocol sent by the upper-layer Ethernet networking, and send the data to the analysis and conversion module;
the analysis conversion module is also used for recombining the received data of the Avalon-ST bus protocol according to an HDLC protocol, converting the data into a serial binary code stream through serial conversion and sending the serial binary code stream to the coding and decoding module;
and the coding and decoding module is also used for coding the received serial binary code stream and sending the coded serial binary code stream to the lower-layer Ethernet network.
Preferably, the system further comprises Buffer chips, and each E1 link in the lower-layer Ethernet networking is connected to the FPGA chip through one Buffer chip.
Preferably, the codec rule adopted by the codec module is an HDB3 codec rule.
Preferably, the parsing protocol adopted by the parsing conversion module is an HDLC protocol.
Preferably, the buffering mode of the buffering module is whole packet buffering, and when it is confirmed that data sent by the downlink E1 link is buffered as a complete data packet, the data packet is read at a preset reading rate.
Preferably, the buffer rate of the buffer module is 2Mbps.
Preferably, the preset reading rate is 50Mbps.
Preferably, the latency of the bus bridge module for processing the data request of each path of the E1 link is 1 clock cycle.
Compared with the prior art, the load balancing system for a multi-path E1 networking provided by the embodiment of the invention applies the fast-jump strategy to the design of the Avalon-ST bus bridge in the E1 networking and applies the fast-jump design to the request-processing state machine, so that load balancing is performed when multiple services are concurrent and the problem of concurrent blocking of multi-path services is solved; the redesigned state machine only requires a very small additional amount of processing time and FPGA logic resources.
Drawings
Fig. 1 is a schematic structural diagram of a load balancing system for multi-path E1 networking according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of data processing of a fast jump based non-blocking state machine according to an embodiment of the present invention;
fig. 3 is a schematic data processing diagram of a priority-based bridging policy according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the time spent by a state machine based on the fast-jump design when processing the data requests of each E1 link according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an FPGA chip according to an embodiment of the present invention;
fig. 6 is a schematic processing flow diagram of an FPGA chip when receiving a data request of an upper ethernet networking and a lower ethernet networking according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a schematic structural diagram of a load balancing system for a multi-path E1 networking according to embodiment 1 of the present invention is shown. The system includes a lower-layer Ethernet networking, an FPGA chip, an Ethernet chip, and an upper-layer Ethernet networking, where a downlink port of the FPGA chip is connected to the lower-layer Ethernet networking, and an uplink port of the FPGA chip is connected to the upper-layer Ethernet networking through the Ethernet chip; the FPGA chip is configured with a bus bridge module, wherein:
the bus bridge module is used for processing the data requests of multiple E1 links by adopting a preset processing strategy; the processing strategy is that, after the data receiving request of one E1 link has been processed, the module jumps to the data receiving request of the next E1 link, until the data receiving request of every E1 link has been traversed.
Specifically, the load balancing system of the multi-path E1 networking includes a lower ethernet networking, an FPGA (Field Programmable Gate Array) chip, an ethernet chip, and an upper ethernet networking, wherein a downlink port of the FPGA chip is connected to the lower ethernet networking, and an uplink port of the FPGA chip is connected to the upper ethernet networking through the ethernet chip. Generally, both the lower layer ethernet network and the upper layer ethernet network are composed of multiple E1 links. The scheme of the invention can be realized after the system is powered on.
The FPGA chip is configured with the bus bridge module, which is mainly used for realizing the load balancing function. The realization process is as follows:
the bus-bridge module is connected to the bus-bridge module, the system comprises a data processing module, a data processing module and a data processing module, wherein the data processing module is used for processing data requests of a plurality of paths of E1 links by adopting a preset processing strategy; and the processing strategy is that after the data receiving request of one E1 link is processed, the next E1 link continues to be processed until the data receiving request of each E1 link is traversed. When the bus bridge module operates at a clock rate of 50Mhz, up to 25 lanes of 2M link data can be processed while ensuring that no blocking occurs. Therefore, after the module processes the data receiving request of a certain path of E1 link, the module directly uses the 50Mhz clock to jump to the next path to judge the state of the E1 receiving request, even if the next path of E1 link does not request to receive data, the strategy can be used for traversing and judging the receiving request of each path in the process of receiving data once, thereby avoiding the situation that the subsequent other paths E1 are blocked due to high load of a certain path E1, and the strategy can be called as a non-blocking state machine based on quick jump. Referring to fig. 2, a schematic data processing diagram of a non-blocking state machine based on fast jump according to the embodiment of the present invention is shown. In order to further highlight the advantages of the present invention, this embodiment of the present invention further describes processing of multiple concurrent data in the prior art, in which a priority-based bridging policy is generally adopted for processing multiple concurrent data, that is, when a bridge is always processing a request with a high load, other requests are easily set aside and cannot be processed. Fig. 3 is a schematic data processing diagram of a bridging policy based on priority according to this embodiment of the present invention.
Although the redesigned state machine of the invention needs some extra processing time, the time cost is low enough to be ignored. Referring to fig. 4, it is a schematic diagram of the time spent by the state machine based on the fast-jump design when processing the data requests of each E1 link according to the embodiment of the present invention. As can be seen from FIG. 4, even if only one E1 link is receiving at full load, the additionally added state-machine hop time is n-1 cycles of the 50 MHz clock, where n is the number of E1 channels. Taking a 2M system as an example, one FPGA chip processes the receive requests of 16 E1 links, and the intermediate jump time of the state machine is 20 ns × 15 = 300 ns, which is equivalent to adding only about 120 bps of bandwidth consumption on one fully loaded E1 link; relative to the 2 Mbps bandwidth of an E1 link, this can almost be ignored. Therefore, the bus bridge module can judge the receive request of every E1 link at very little additional processing-time cost.
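As a quick check of the figures quoted above (the 16-channel count and the 50 MHz clock come from the patent's own example; the snippet below is only a restatement of that arithmetic):

```python
clock_period_ns = 1e9 / 50e6   # one 50 MHz clock cycle = 20 ns
links = 16                     # E1 channels handled by one FPGA chip
extra_hops = links - 1         # idle hops when only one link is fully loaded
print(clock_period_ns * extra_hops)   # 20 ns * 15 = 300 ns of added jump time
```

The 120 bps figure is the patent's own estimate of what this 300 ns of jump time costs, expressed as equivalent bandwidth on a fully loaded 2 Mbps E1 link.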
Embodiment 1 of the present invention provides a load balancing system for a multi-path E1 networking, which avoids the problem of IPOE data packet congestion at the tandem point of the core network in concurrent-service scenarios by applying a load balancing design to the bridge module of the core network IPOE interface and using a non-blocking state machine based on fast jumping.
As an improvement of the above scheme, the FPGA chip is further configured with an encoding and decoding module, an analysis and conversion module, and a buffer module, wherein:
the coding and decoding module is used for receiving the differential signal sent by the lower layer Ethernet network, decoding the differential signal into a binary code stream and sending the binary code stream to the analysis and conversion module;
the analysis conversion module is used for carrying out serial-parallel conversion on the received binary code stream, analyzing the binary code stream into data of an Avalon-ST bus protocol and sending the data to the buffer module;
and the buffer module is used for reading the data of the Avalon-ST bus protocol and performing aggregation bridging in data packet format.
Specifically, referring to fig. 5, a schematic structural diagram of an FPGA chip according to the embodiment of the present invention is shown. As can be seen from fig. 5, the FPGA chip is further configured with the encoding and decoding module, the analysis and conversion module, and the buffer module, wherein:
and the coding and decoding module is used for receiving the differential signal sent by the lower Ethernet network, decoding the differential signal into a binary code stream and sending the binary code stream to the analysis and conversion module. The differential signal refers to an HDB3 differential signal, and an HDB3 original differential signal of the E1 link is connected to an FPGA pin through the Buffer chip after being subjected to positive and negative decision shaping. And the coding and decoding module is used for decoding the differential signal and then obtaining the clock of the opposite terminal equipment.
And the analysis conversion module is used for carrying out serial-parallel conversion on the received binary code stream, analyzing the binary code stream into data of an Avalon-ST bus protocol, and sending the data to the buffer module. That is to say, when the parsing and converting module receives the binary code stream sent by the encoding and decoding module, the binary code stream is first converted in a serial-parallel manner and then parsed into data of the Avalon-ST bus protocol.
The buffer module is used for reading the data of the Avalon-ST bus protocol and performing aggregation bridging in data packet format. Generally, an E1 link works at a rate of 2 Mbps; before the data is processed inside the FPGA chip, packets arriving at 2 Mbps need to be converted to the 50 Mbps processing rate. The buffer module buffers each packet received from the E1 link as a whole packet, confirms that one complete packet has been buffered at the 2 Mbps rate, and then reads the packet out at 50 Mbps for aggregation bridging.
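A minimal software sketch of this whole-packet (store-and-forward) buffering follows; the byte-wise interface, the end-of-packet flag and the Python queue are illustrative stand-ins for the FPGA-side buffering, not the actual implementation.

```python
from collections import deque

class WholePacketBuffer:
    """Store-and-forward buffer sketch: a packet being written at the 2 Mbps
    E1 side only becomes readable once it is complete, and is then drained
    at the faster 50 Mbps bus side for aggregation bridging."""

    def __init__(self):
        self._current = bytearray()   # packet currently arriving at 2 Mbps
        self._ready = deque()         # complete packets awaiting 50 Mbps readout

    def write_byte(self, byte, end_of_packet=False):
        """Called at the slow (line) side for every received byte."""
        self._current.append(byte)
        if end_of_packet:             # whole packet buffered -> eligible for reading
            self._ready.append(bytes(self._current))
            self._current = bytearray()

    def read_packet(self):
        """Called at the fast (bus) side; returns None until a packet is complete."""
        return self._ready.popleft() if self._ready else None
```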
The above is the flow by which the FPGA chip processes data requests received from the lower-layer Ethernet networking. Fig. 6 is a schematic view of the corresponding processing flows when the FPGA chip receives data requests from the upper-layer and lower-layer Ethernet networking according to the embodiment of the present invention. The upper half is the processing flow for a data request from the lower-layer Ethernet networking, and the lower half is the processing flow for a data request from the upper-layer Ethernet networking.
In the embodiment of the invention, the original HDB3 differential signal of the E1 link is first decoded by the encoding and decoding module to obtain a binary code stream; the binary code stream is then serial-to-parallel converted and parsed into data of the Avalon-ST bus protocol by the analysis and conversion module; finally, the buffer module reads the Avalon-ST bus protocol data and performs aggregation bridging in data packet format, thereby processing the data request of the lower-layer Ethernet networking.
As an improvement of the above scheme, the system is further configured such that:
the buffer module is further configured to receive data of the Avalon-ST bus protocol sent by the upper-layer Ethernet networking, and send the data to the analysis and conversion module;
the analysis conversion module is also used for recombining the received data of the Avalon-ST bus protocol according to an HDLC protocol, converting the data into a serial binary code stream through serial conversion and sending the serial binary code stream to the coding and decoding module;
and the coding and decoding module is also used for coding the received serial binary code stream and sending the coded serial binary code stream to the lower-layer Ethernet network.
Specifically, referring to the lower half flow of fig. 6, when the FPGA chip receives a data request of the upper ethernet network, reverse transmission needs to be performed, and the corresponding processing flow is as follows:
the buffer module is further configured to receive data of the Avalon-ST bus protocol sent by an upper ethernet network, and send the data to the analysis conversion module. Similarly, the receiving process also needs buffering of the whole packet, and after the buffering of a complete data packet is completed, the buffering module sends the data packet to the parsing and converting module through the Avalon-ST bus.
The analysis and conversion module is also used for reassembling the received data of the Avalon-ST bus protocol according to the HDLC protocol; the reassembly is also called packet encapsulation. After encapsulation is finished, the data is converted into a serial binary code stream through parallel-to-serial conversion and sent to the encoding and decoding module.
And the coding and decoding module is also used for coding the received serial binary code stream and sending the coded serial binary code stream to a lower-layer Ethernet network. Before reaching each E1 link, the coded signals are sent to the Buffer chip and converted into positive and negative levels.
In the embodiment of the invention, the buffer module first receives the data of the Avalon-ST bus protocol sent by the upper-layer Ethernet networking; the analysis and conversion module then reassembles the received Avalon-ST bus protocol data according to the HDLC protocol and converts it into a serial binary code stream; finally, the encoding and decoding module encodes the serial binary code stream and sends the encoded stream to the lower-layer Ethernet networking, thereby processing the data request of the upper-layer Ethernet networking.
As an improvement of the above scheme, the system further comprises Buffer chips, and each E1 link in the lower-layer Ethernet networking is connected to the FPGA chip through one Buffer chip.
Specifically, the load balancing system for the multi-path E1 networking further includes the Buffer chip, and each path of E1 link in the lower ethernet networking is connected to the FPGA chip through one Buffer chip. That is, each E1 link in the lower ethernet network is connected to a pin of the FPGA chip through the Buffer chip.
In the embodiment of the invention, the Buffer chip is additionally arranged between each path of E1 link and the FPGA chip so as to reduce the positive and negative level distortion of each path of E1 link.
As an improvement of the above scheme, the codec rule adopted by the codec module is an HDB3 codec rule.
Specifically, the encoding and decoding rule adopted by the encoding and decoding module is an HDB3 encoding and decoding rule. Namely, the coding and decoding module decodes the received differential signal into a binary code stream according to the HDB3 decoding rule, and codes the received binary code stream into a differential signal according to the HDB3 coding rule.
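For orientation, a simplified HDB3 encoder sketch is given below. The initial polarity and start-up convention are assumptions, and it is a textbook rendering of the HDB3 substitution rule, not the codec module actually used in the patent (which also performs decoding and clock recovery).

```python
def hdb3_encode(bits):
    """Simplified HDB3 encoder sketch: maps a binary sequence onto ternary
    line symbols (+1, 0, -1). Ones alternate polarity (AMI); every run of
    four zeros is replaced by 000V or B00V so long zero runs still carry
    timing information, with the choice keeping violation pulses alternating."""
    out = []
    last_pulse = -1             # polarity of the most recent nonzero symbol (assumed start)
    pulses_since_violation = 0  # nonzero pulses sent since the last V substitution
    i = 0
    while i < len(bits):
        if bits[i] == 1:
            last_pulse = -last_pulse          # normal AMI alternation
            out.append(last_pulse)
            pulses_since_violation += 1
            i += 1
        elif bits[i:i + 4] == [0, 0, 0, 0]:
            if pulses_since_violation % 2 == 0:
                b = -last_pulse               # B00V: B obeys AMI, V repeats B's polarity
                out.extend([b, 0, 0, b])
                last_pulse = b
            else:
                out.extend([0, 0, 0, last_pulse])   # 000V: V deliberately violates AMI
            pulses_since_violation = 0
            i += 4
        else:
            out.append(0)                     # isolated zeros stay zero
            i += 1
    return out

# Example: a run of four zeros after a single one becomes a 000V substitution.
print(hdb3_encode([1, 0, 0, 0, 0, 1]))   # [1, 0, 0, 0, 1, -1]
```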
As an improvement of the above scheme, an analysis protocol adopted by the analysis conversion module is an HDLC protocol.
Specifically, the analysis protocol adopted by the analysis and conversion module is the HDLC protocol. Preferably, a parallel HDLC protocol is used, and the idle codes are translated relative to the standard HDLC protocol, thereby ensuring that the data packet is not mis-decoded when data identical to the idle code appears in the original data.
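The patent's parallel HDLC variant and its idle-code translation are not spelled out here, but the standard serial HDLC mechanism it is compared against can be sketched as follows (flag value and bit stuffing per the common HDLC convention; FCS generation is omitted):

```python
FLAG = [0, 1, 1, 1, 1, 1, 1, 0]   # standard HDLC flag / idle pattern 0x7E

def hdlc_frame(payload_bits):
    """Minimal HDLC-style framing sketch using bit stuffing: a 0 is inserted
    after every run of five consecutive payload 1s, so the flag pattern
    (six 1s in a row) can never appear inside the frame body."""
    stuffed, run = [], 0
    for b in payload_bits:
        stuffed.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            stuffed.append(0)     # stuffed zero breaks the run of ones
            run = 0
    return FLAG + stuffed + FLAG

# Example: six consecutive ones in the payload get a zero stuffed after the
# first five, so the framed bits never reproduce the 01111110 delimiter.
print(hdlc_frame([1, 1, 1, 1, 1, 1]))
```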
As an improvement of the above scheme, the buffering mode of the buffering module is whole packet buffering, and when it is determined that data sent by a downlink E1 link is buffered as a complete data packet, the data packet is read at a preset reading rate.
Specifically, the buffering mode of the buffering module is whole packet buffering, and when it is confirmed that data sent by a downlink E1 link is buffered as a complete data packet, the data packet is read at a preset reading rate. For example, after confirming that a complete packet is buffered at the rate of 2Mbps, the data packet is read out at the rate of 50Mbps for aggregation bridging.
As an improvement of the above scheme, the buffer rate of the buffer module is 2Mbps.
Specifically, the buffer rate of the buffer module is 2Mbps. The buffer module performs whole packet buffer on the data packet received by the E1 link, and reads the data packet after confirming buffer of a whole packet at the rate of 2Mbps, so that incomplete data or missing data of the data packet is avoided.
As an improvement of the above scheme, the preset reading rate is 50Mbps.
Specifically, the preset reading rate is 50 Mbps. Reading is performed only after a data packet has been completely buffered, and the reading rate is greater than the buffering rate, so the content of the data packet can be acquired quickly, the data request is processed promptly, and link blocking is reduced.
As an improvement of the above scheme, the latency of the bus bridge module to process the data request of each path of E1 link is 1 clock cycle.
Specifically, the latency of the bus bridge module for processing the data request of each E1 link is 1 clock cycle. That is, the bus bridge module can judge the receive request of every E1 channel with only a slight increase in processing-time cost. Even if only one E1 channel is receiving at full load, the additionally added state-machine jump time is n-1 cycles of the 50 MHz clock. Taking a 2M system as an example, one FPGA chip processes the receive requests of 16 E1 links, and the intermediate jump time of the state machine is 20 ns × 15 = 300 ns, which is equivalent to adding only about 120 bps of bandwidth consumption on one fully loaded E1 link; relative to the 2 Mbps bandwidth of an E1 link, this can almost be ignored.
To sum up, the load balancing system for a multi-path E1 networking provided in the embodiments of the present invention applies the fast-jump strategy to the design of the Avalon-ST bus bridge in the E1 networking and applies the fast-jump design to the request-processing state machine, thereby achieving load balancing when multiple services are concurrent and solving the problem of concurrent blocking of multi-path services. The redesigned state machine only needs a very small additional amount of processing time and FPGA logic resources; the waiting time for processing a receive request is only 1 clock cycle, so the bandwidth consumption can almost be ignored and the resource consumption of the bus bridge is not increased. The invention is compatible with various multi-path E1 networking modes and ensures that blocking caused by a single office direction does not occur in such networking environments.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (9)

1. A load balancing system of a multi-path E1 networking is characterized by comprising a lower-layer Ethernet networking, an FPGA chip, an Ethernet chip and an upper-layer Ethernet networking, wherein a downlink port of the FPGA chip is connected with the lower-layer Ethernet networking, and an uplink port of the FPGA chip is connected with the upper-layer Ethernet networking through the Ethernet chip; the FPGA chip is configured with a bus bridge module, wherein:
the bus bridging module is used for processing data requests of a plurality of paths of E1 links by adopting a preset processing strategy; after the data receiving request of one path of E1 link is processed, continuing to jump to the next path of E1 link until the data receiving request of each path of E1 link is traversed;
the waiting time of the bus bridge module for processing the data request of each path of E1 link is 1 clock cycle.
2. The multi-path E1 networking load balancing system according to claim 1, wherein said FPGA chip is further configured with a codec module, a parsing conversion module, and a buffer module, wherein:
the coding and decoding module is used for receiving the differential signal sent by the lower layer Ethernet network, decoding the differential signal into a binary code stream and sending the binary code stream to the analysis and conversion module;
the analysis conversion module is used for carrying out serial-parallel conversion on the received binary code stream, analyzing the binary code stream into data of an Avalon-ST bus protocol and sending the data to the buffer module;
and the buffer module is used for reading the data of the Avalon-ST bus protocol and carrying out aggregation and bridging in a data packet format.
3. The load balancing system of a multi-lane E1 networking of claim 2, further comprising:
the buffer module is further configured to receive data of the Avalon-ST bus protocol sent by the upper-layer Ethernet networking, and send the data to the analysis conversion module;
the analysis conversion module is also used for recombining the received data of the Avalon-ST bus protocol according to an HDLC protocol, converting the data into a serial binary code stream through serial conversion and sending the serial binary code stream to the coding and decoding module;
and the coding and decoding module is also used for coding the received serial binary code stream and sending the coded serial binary code stream to the lower-layer Ethernet network.
4. The load balancing system for multi-path E1 networking according to claim 1, further comprising a Buffer chip, wherein each E1 link in the lower ethernet networking is connected to the FPGA chip through one Buffer chip.
5. The system for load balancing of multi-path E1 networking according to claim 2, wherein the codec rule adopted by the codec module is an HDB3 codec rule.
6. The system for load balancing of multiple E1 networks according to claim 2, wherein the parsing protocol employed by the parsing and converting module is HDLC protocol.
7. The multi-path E1 networking load balancing system according to claim 2, wherein the buffering mode of the buffering module is full packet buffering, and when it is determined that data sent by the downlink E1 link is buffered as a complete data packet, the data packet is read at a preset reading rate.
8. The system for load balancing of multi-path E1 networking according to claim 7, wherein the buffering rate of the buffering module is 2Mbps.
9. The multi-E1 networking load balancing system according to claim 7, wherein the predetermined reading rate is 50Mbps.
CN202110022218.7A 2021-01-07 2021-01-07 Load balancing system for multi-path E1 networking Active CN112910794B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110022218.7A CN112910794B (en) 2021-01-07 2021-01-07 Load balancing system for multi-path E1 networking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110022218.7A CN112910794B (en) 2021-01-07 2021-01-07 Load balancing system for multi-path E1 networking

Publications (2)

Publication Number Publication Date
CN112910794A CN112910794A (en) 2021-06-04
CN112910794B true CN112910794B (en) 2023-04-07

Family

ID=76112275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110022218.7A Active CN112910794B (en) 2021-01-07 2021-01-07 Load balancing system for multi-path E1 networking

Country Status (1)

Country Link
CN (1) CN112910794B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113810945B (en) * 2021-09-22 2023-06-20 广州通则康威智能科技有限公司 Multi-path uplink load balancing method, device, computer equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110113374A (en) * 2019-03-15 2019-08-09 平安科技(深圳)有限公司 Streaming media server executes multitask method, device and storage medium, terminal device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101997745A (en) * 2010-11-23 2011-03-30 珠海市佳讯实业有限公司 FPGA-based E1 insertion time slot and E1_IP data aggregation hybrid access device and method
CN103701715A (en) * 2012-09-27 2014-04-02 京信通信***(中国)有限公司 Method and device for sending and receiving Ethernet data packet based on multiple E1 channels
CN104142858B (en) * 2013-11-29 2016-09-28 腾讯科技(深圳)有限公司 Blocked task dispatching method and device
CN104660360B (en) * 2015-02-03 2017-05-03 电信科学技术第五研究所 Ethernet data and multi-channel E1 data processing method and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110113374A (en) * 2019-03-15 2019-08-09 平安科技(深圳)有限公司 Streaming media server executes multitask method, device and storage medium, terminal device
WO2020186792A1 (en) * 2019-03-15 2020-09-24 平安科技(深圳)有限公司 Streaming media server task execution method and apparatus, and storage medium and terminal device

Also Published As

Publication number Publication date
CN112910794A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN112243253B (en) Communication equipment
JP4488320B2 (en) Internet access for cellular networks
CN1906906B (en) Optimized radio bearer configuration for voice over IP
CN1225874C (en) Method and apparatus for packet delay reduction using scheduling and header compression
CN106789609B (en) FC-EG gateway, communication conversion method between fiber channel and Ethernet
CN103561472B (en) A kind of Multi-service link distribution and reconstruction unit and method thereof
WO2019179157A1 (en) Data traffic processing method and related network device
CN1759541A (en) Video packets over a wireless link under varying delay and bandwidth conditions
CN1881979B (en) Ethernet physical layer low-speed transmission realizing method and its applied network apparatus
CN110944358B (en) Data transmission method and device
US11425050B2 (en) Method and apparatus for correcting a packet delay variation
CN1467962A (en) Voice packet preferential control equipment and control method thereof
US8289899B2 (en) Communication method and intermediate network device with branch selection functions
CN112910794B (en) Load balancing system for multi-path E1 networking
CN111682994A (en) Annular or linear network system based on EPA protocol and transmission method of non-real-time data
CN111464437A (en) Multipath transmission path optimization method based on forward time delay in vehicle-mounted heterogeneous network
CN111741499A (en) Multi-band convergence method for intelligent wireless networking
US20020119780A1 (en) Communication method, radio network controller and base node for implementing this method
CN100435544C (en) Modem system and collector for transmission routes with different characteristics
Ichikawa et al. High-speed packet switching systems for multimedia communications
Bai et al. Multi-path transmission protocol in VANET
CN110365579B (en) Congestion and fault perception wireless router in wireless network on chip and routing method thereof
US20030161294A1 (en) Apparatus and method for de-prioritization of bypass packets in a packet based communication system
CN100499464C (en) Ethernet receiving and transmission device based on TV coaxial line
CN103108407B (en) Handling method and device of data segment and recombination among protocol layers based on general packet radio service (GPRS)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 050000 No.10 Fuqiang street, Yuhua District, Shijiazhuang City, Hebei Province

Applicant after: STATE GRID HEBEI INFORMATION & TELECOMMUNICATION BRANCH

Applicant after: GHT Co.,Ltd.

Applicant after: STATE GRID CORPORATION OF CHINA

Address before: 510663, Guangdong, Guangzhou hi tech Industrial Development Zone, science, South Road, No. 16, No.

Applicant before: GHT Co.,Ltd.

Applicant before: STATE GRID HEBEI INFORMATION & TELECOMMUNICATION BRANCH

Applicant before: STATE GRID CORPORATION OF CHINA

GR01 Patent grant