CN115550080A - Network card, data transmission system, method, computer equipment and storage medium - Google Patents


Info

Publication number
CN115550080A
Authority
CN
China
Prior art keywords: data, cache, shunting, module, network card
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211138632.5A
Other languages
Chinese (zh)
Inventor
刘晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202211138632.5A priority Critical patent/CN115550080A/en
Publication of CN115550080A publication Critical patent/CN115550080A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/02 Details


Abstract

The application relates to a network card, a data transmission system, a data transmission method, a computer device, and a storage medium. The network card comprises a network card body and a shunting module. The network card body comprises a network port module, a data processing module, and a cache module; the network port module comprises a transmission network port and at least one shunting network port, and the shunting network port is used for signal connection with a second device for shunting. The shunting module is used for monitoring the data stream between the transmission network port and the data processing module and acquiring the cache occupancy of the cache module. When the cache occupancy is greater than or equal to a first cache threshold, the shunting module intercepts the data stream to obtain shunting data and forwards the shunting data through the shunting network port; when the cache occupancy is less than or equal to a second cache threshold, it acquires the returned shunting data through the shunting network port. The network card can thereby solve the problem of network resource waste caused by data discarding in the prior art.

Description

Network card, data transmission system, method, computer equipment and storage medium
Technical Field
The present application relates to the field of data transmission technologies, and in particular, to a network card, a data transmission system, a data transmission method, a computer device, and a storage medium.
Background
The network card is a network component working at the data link layer and serves as the interface connecting a computer to the transmission medium of a network. It not only realizes the physical connection and electrical signal matching with the transmission medium of the local area network, but also implements functions such as frame sending and receiving, frame packaging and unpacking, medium access control, data encoding and decoding, and data caching.
Data received by current network cards is mainly processed by CPU resources. When the network card is not allocated enough CPU resources, data that cannot be processed in time is temporarily placed in the cache of the network card, and the cached data is processed once the CPU can provide enough resources. When more data is sent to the network card while its cache is full, the network card discards the newly sent data, which must then be retransmitted through the retransmission mechanism of the network protocol, causing a waste of network resources.
Disclosure of Invention
Therefore, a network card, a data transmission system, a data transmission method, a computer device, and a storage medium are provided to solve the problem of network resource waste caused by data discarding in the prior art.
In one aspect, a network card is provided, including:
a network card body comprising a network port module, a data processing module, and a cache module, wherein the data processing module is in signal connection with the cache module, the network port module comprises a transmission network port and at least one shunting network port, and the shunting network port is used for shunting and returning shunting data;
a shunting module, used for monitoring the data stream between the transmission network port and the data processing module and acquiring the cache occupancy of the cache module, wherein when the cache occupancy is greater than or equal to a first cache threshold, the shunting module intercepts the data stream to obtain shunting data and forwards the shunting data through the shunting network port; or when the cache occupancy is less than or equal to a second cache threshold, the shunting module acquires the returned shunting data through the shunting network port.
In one embodiment, the network card further includes a storage module for receiving shunting data forwarded by an adjacent network card, the storage module is in signal connection with the data processing module, and the adjacent network card is in signal connection with the network card body through the shunting network port.
In another aspect, a data transmission system is provided, including:
a first device comprising a first network card, wherein the first network card comprises a network port module, a data processing module, and a cache module, the data processing module is in signal connection with the cache module, and the network port module comprises a transmission network port and at least one shunting network port; the first network card further comprises a shunting module, which is used for monitoring the data stream between the transmission network port and the data processing module and acquiring the cache occupancy of the cache module;
a second device, in signal connection with the first device through the shunting network port;
when the cache occupancy is greater than or equal to a first cache threshold, the shunting module intercepts the data stream to obtain shunting data and forwards the shunting data to the second device through the shunting network port; or when the cache occupancy is less than or equal to a second cache threshold, the shunting module acquires the shunting data stored in the second device through the shunting network port.
In another aspect, a data transmission method is provided, including:
monitoring the data flow of the first network card, and acquiring the cache occupancy of the first network card;
intercepting the data stream to obtain shunting data when the cache occupancy is greater than or equal to a first cache threshold;
forwarding the shunting data to a second device so that the second device stores the shunting data to obtain data to be recovered;
and when the cache occupancy is reduced to be less than or equal to a second cache threshold, acquiring the data to be recovered returned by the second device.
In one embodiment, before forwarding the shunting data to the second device, the method further includes:
sending a shunting notification, so that the second device responds to the shunting notification by preparing resources for carrying the shunting data.
In one embodiment, before acquiring the data to be recovered, the method further includes:
sending a return notification, so that the second device responds to the return notification by returning the data to be recovered.
In one embodiment, when the cache occupancy is lower than the first cache threshold, the data stream is transparently transmitted to realize data transmission of the first network card.
In one embodiment, the acquiring the cache occupancy of the first network card includes:
monitoring the cache state of the first network card at a first monitoring frequency, and determining whether the cache is occupied:
if so, monitoring the cache state of the first network card by using a second monitoring frequency to acquire the cache occupation amount, wherein the second monitoring frequency is greater than the first monitoring frequency.
A computer device is also provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method when executing the computer program.
A computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method.
According to the network card, the data transmission system, the data transmission method, the computer device, and the storage medium, the shunting module monitors the data stream entering the network card and acquires the cache occupancy. When the cache occupancy is greater than or equal to the set first cache threshold, the shunting mechanism is started: the data stream is intercepted and forwarded through the shunting network port to the specific device connected to it. When the cache occupancy falls to the set second cache threshold, the returned shunting data is acquired. Data loss is thus avoided and the sender does not need to retransmit the data, so the waste of network resources is reduced.
Drawings
FIG. 1 is a diagram of an application environment in one embodiment;
FIG. 2 is a schematic diagram illustrating a network card according to an embodiment;
FIG. 3 is a block diagram of a data transmission system in one embodiment;
FIG. 4 is a flow chart illustrating a data transmission method according to another embodiment;
FIG. 5 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Reference numerals: first device 100, first network card 110, second device 200, second network card 210, data processing module 111, cache module 112, transmission network port 113, shunting network port 114, shunting module 115, storage module 116.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the application and are not intended to limit it.
The network card is the interface through which a network device transmits data. Taking a server as an example, the network card serves as the data channel for receiving and sending data, and correct reception of every piece of data directly affects the server's collection and processing of the data. When the network card is not allocated enough CPU resources, data that cannot be processed in time is temporarily placed in the cache of the network card, and the cached data is processed when the CPU can provide enough resources. However, if the data in the cache cannot be processed for a long time while more data keeps arriving at the network card, the cache fills up and the network card discards the newly sent data.
The network card provided by the present application may be applied to an application environment as shown in FIG. 1, where a terminal 102 communicates with a device 104 through a network. The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, or portable wearable device, and the device 104 may be implemented by an independent device or by a cluster formed of multiple devices. Taking the device 104 as a server as an example, the network card of the server is connected to a switch port to implement data transmission, and the network card exchanges data with the internal system of the server through a PCIE bus.
As shown in fig. 2, the present application provides a network card, including:
a network card body comprising a network port module, a data processing module 111, and a cache module 112, wherein the data processing module 111 is in signal connection with the cache module 112, and the network port module comprises a transmission network port 113 and at least one shunting network port 114.
It can be understood that the network card body is connected to the switch port through the transmission network port 113 to implement the uplink and downlink of data, and a data stream entering the network card from the switch is processed accordingly in the data processing module 111, which is not described herein again. During high-traffic data transmission, part of the data stream occupies the data cache because of the limited processing capacity.
The network card further includes a shunting module 115, and the shunting module 115 is configured to monitor data flow between the transmission network port 113 and the data processing module 111, and obtain a cache occupancy amount of the cache module 112.
Illustratively, the shunting module 115 may be a shunting layer disposed inside the network card chip. The data stream is transparently transmitted through the shunting module 115 while the cache is unoccupied, and when the occupancy of the cache module 112 is found to reach a certain value, the shunting mechanism can be started.
For example, when the cache occupancy is greater than or equal to the first cache threshold, the shunting module 115 intercepts the data stream to obtain shunting data and forwards the shunting data through the shunting network port 114, where the shunting network port 114 is used to connect a second device for shunting, and the shunting data is temporarily stored on the second device. When the cache occupancy decreases, the shunting data stored in the second device can be retrieved to ensure the integrity of the data; for example, when the cache occupancy is less than or equal to the second cache threshold, the shunting module 115 acquires the returned shunting data through the shunting network port 114.
The first cache threshold and the second cache threshold can be set according to the actual operating conditions of the network card. In the following description, a cache occupancy of 50% is taken as the first cache threshold and a cache occupancy of 0% as the second cache threshold.
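As an illustrative sketch only, the two threshold checks described above can be expressed as a pair of predicates. The 50% and 0% values are the examples chosen in this description; the function names are assumptions for illustration and are not part of the patent.

```python
# Example thresholds from the description: shunting starts at or above 50%
# cache occupancy, and recovery starts once the cache is empty again.
FIRST_CACHE_THRESHOLD = 0.50
SECOND_CACHE_THRESHOLD = 0.00

def should_start_shunting(cache_occupancy: float) -> bool:
    # Intercept the data stream and forward it through the shunting network port.
    return cache_occupancy >= FIRST_CACHE_THRESHOLD

def should_recover(cache_occupancy: float) -> bool:
    # Acquire the returned shunting data from the second device.
    return cache_occupancy <= SECOND_CACHE_THRESHOLD
```

Because the first threshold is well above the second, the mechanism has hysteresis: a card that has just started shunting does not immediately try to recover the data.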
The network card provided by the application is equipped with the shunting module 115 and can monitor the received data stream. When high-traffic data cannot be processed in time, the data stream is forwarded to a designated second device, for example a server connected to the shunting network port 114, and the data is returned when the second device is idle. This effectively solves the problem of data loss at the server end, avoids retransmission of the data, and reduces the waste of network resources.
In one embodiment, the network card further includes a storage module 116, the storage module 116 is in signal connection with the data processing module 111, and the storage module 116 is configured to receive shunting data forwarded from an adjacent network card.
It can be understood that the adjacent network card is in signal connection with the network card body through the shunting network port 114.
Illustratively, the storage module 116 may be a memory connected outside the network card, or may be part of a device connected to the network card, for example the internal memory of a server to which the network card belongs. When the local network card receives, through the shunting network port 114, shunting data transmitted by the network card of another server, the local server can store the shunting data in a specific storage area of the local server.
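A minimal sketch of this neighbor-side behavior, assuming a dictionary keyed by the sending network card stands in for the "specific storage area"; all names here are illustrative assumptions, not from the patent:

```python
class StorageModule:
    """Holds shunting data received on the shunting network port until the
    sending network card asks for it back."""

    def __init__(self):
        self._areas = {}  # one dedicated storage area per adjacent card

    def store(self, source_card: str, packets: list) -> None:
        # Shunting data forwarded by an adjacent card is kept intact, in order.
        self._areas.setdefault(source_card, []).extend(packets)

    def on_return_notification(self, source_card: str) -> list:
        # Return everything held for this card and release its storage area.
        return self._areas.pop(source_card, [])
```

Keying by source card lets one server act as the shunting device of several neighbors at once, matching the multi-server deployment described below.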
The network card of the above embodiment may be applied to multiple servers at the same time; when receiving data itself, each server may use an adjacent server connected through the shunting network port 114 as its shunting device, and may likewise serve as the shunting device of that adjacent server.
As shown in FIG. 3, in one embodiment, a data transmission system is provided, comprising:
a first device 100, for example a local server, configured to receive a data stream from a switch. The local server includes a first network card 110, and the first network card 110 includes a network port module, a data processing module 111, and a cache module 112, where the data processing module 111 is in signal connection with the cache module 112; when high-traffic data arrives and cannot be processed in time, the data is cached by the cache module 112. The network port module includes a transmission network port 113 and at least one shunting network port 114 and is connected to the switch through the transmission network port 113, so that a data stream from a switch port enters the data processing module 111 through the transmission network port 113. The first network card 110 further includes a shunting module 115, which is configured to monitor the data stream between the transmission network port 113 and the data processing module 111 and to acquire the cache occupancy of the cache module 112;
the data transmission system further comprises at least one second device 200, for example another server adjacent to the local server, which is in signal connection with the first device 100 through the shunting network port 114.
When the cache occupancy of the local server is greater than or equal to a first cache threshold, for example 50%, the shunting module 115 intercepts the data stream entering the first network card 110 of the local server to obtain shunting data and forwards the shunting data through the shunting network port 114 to a neighboring server for storage instead of discarding it.
When the cache occupancy is less than or equal to the second cache threshold, for example when the cache in the cache module 112 is empty, the shunting module 115 retrieves the shunting data stored in the neighboring server through the shunting network port 114, so that no retransmission by the sender is needed.
With the data transmission system of the above embodiment, the data stream can be shunted and controlled inside the first network card 110. The data is shunted through the shunting network port 114 by the shunting module 115 of the first network card 110, for example a shunting layer arranged in the chip of the first network card 110, and when the cache of the network card reaches the preset first cache threshold, the data stream is shunted in time, so that the problem of packet loss is avoided.
In an embodiment, the neighboring server uses a second network card 210 having the same structure as the first network card 110 in the local server, the shunting network ports 114 of the first network card 110 and the second network card 210 are interconnected, and the respective internal memories of the local server and the neighboring server serve as the storage modules 116 of the corresponding network cards, so that both the local server and the neighboring server can act as shunting devices.
Illustratively, when the local server receives high-traffic data and its processing capacity is exceeded, the data traffic starts to occupy the cache of the first network card 110. When the cache occupancy of the cache module 112 of the first network card 110 reaches 50%, the shunting module 115 starts the shunting mechanism, intercepts the newly arriving data stream, and forwards it to the shunting network port 114 of the first network card 110, which is connected through a network cable to the shunting network port of the second network card 210 of the neighboring server that receives the shunting data; the neighboring server stores the shunting data in a specific area. When the cache of the first network card 110 of the local server is emptied, the returned shunting data is acquired from the neighboring server to ensure the integrity of the data.
On the other hand, the local server may serve as the shunting device of the neighboring server. For example, when the neighboring server receives high-traffic data and the cache occupancy of the cache module 112 of the second network card 210 reaches 50%, the shunting data can be forwarded through the connected shunting network port 114 to the local server for storage, and after the cache is emptied, the returned shunting data is acquired from the local server.
According to the above data transmission system, the shunting module 115 is added in the network card to avoid data loss. When the cache occupancy of the network card exceeds a certain limit, the shunting module 115 starts the shunting mechanism to forward the data stream in time to the second device 200 connected to the shunting network port 114, and when the cache is idle, the shunting data is retrieved from the adjacent second device 200. The purpose of avoiding data packet loss is thereby achieved, the data sender does not need to retransmit the data, and the waste of network resources is reduced.
In one embodiment, in the data transmission system, when the shunting module 115 of the first network card 110 of the local server finds that the cache occupancy is greater than or equal to the first cache threshold of 50%, a shunting notification is sent to a neighboring server, and the neighboring server responds to the shunting notification by preparing resources for carrying the shunting data; when the cache of the local server is emptied, the shunting module 115 of the first network card 110 sends a return notification to the neighboring server that received the shunting data, and the neighboring server, after receiving the return notification, returns the previously stored shunting data to the local server.
In one embodiment, the shunting module 115 initially monitors the cache state of the first network card 110 at a first monitoring frequency and determines whether the cache is occupied; when the cache is occupied, it monitors the cache state of the first network card 110 at a second monitoring frequency to acquire the cache occupancy, where the second monitoring frequency is greater than the first monitoring frequency, which improves the shunting response speed.
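The variable-frequency monitoring can be sketched as a choice of polling period, where a shorter period means a higher frequency. The concrete period values and the function name are assumptions for illustration, not values given in the patent:

```python
FIRST_PERIOD_S = 1.0   # low first monitoring frequency while the cache is free
SECOND_PERIOD_S = 0.1  # higher second monitoring frequency once occupied

def next_poll_period(cache_occupancy: float) -> float:
    """Return how long to wait before the next cache-state check.

    Polling speeds up as soon as any cache is occupied, which improves the
    responsiveness of the shunting decision while keeping idle overhead low.
    """
    return SECOND_PERIOD_S if cache_occupancy > 0 else FIRST_PERIOD_S
```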
All or part of the modules in the above data transmission system and network card can be realized by software, hardware, or a combination thereof. The modules can be embedded in hardware in, or be independent of, the processor of the computer device, or be stored in software form in the memory of the computer device, so that the processor can call them and execute the operations corresponding to the modules.
As shown in fig. 4, in an embodiment, a data transmission method is provided, which is described by taking the method as an example for being applied to a server, and includes the following steps:
step 401, the shunting module 115 in the first network card 110 of the home server monitors the data stream of the first network card 110, and obtains the cache occupancy amount of the first network card 110.
Illustratively, the shunting module 115 is connected between the network port module of the first network card 110 and the data processing module 111, and when the cache module 112 is unoccupied or the cache occupancy is small, the shunting module 115 realizes data stream transmission between the network port module and the data processing module 111 in a transparent transmission manner.
Step 402, the shunting module 115 performs a threshold judgment on the cache occupancy, and when the cache occupancy is greater than or equal to the first cache threshold, for example 50%, the shunting module 115 intercepts the data stream to obtain shunting data and forwards the shunting data to the second device 200, so that the second device 200 can store the shunting data to obtain data to be recovered.
It can be understood that the second device 200 has a storage module 116 and can receive and store the forwarded shunting data. The second device 200 may be a neighboring server that is adjacent to the local server and connected to it through a network cable; the first network card 110 of the local server is provided with a shunting network port 114, the shunting network port 114 is in signal connection with the shunting module 115, and the neighboring server communicates with the local server through the shunting network port 114.
Step 403, when the cache occupancy of the first network card 110 of the local server is reduced to less than or equal to the second cache threshold, for example after the cache of the first network card 110 is emptied, the shunting module 115 in the first network card 110 acquires the data to be recovered returned by the second device 200, and the data to be recovered is transmitted normally by the shunting module 115 to the upper protocol stack of the server for processing.
In the above data transmission method, the data is shunted through the shunting network port 114 by the shunting module 115 of the network card; when the cache of the network card reaches the threshold, the data stream is shunted in time, and the shunting data is retrieved from the connected second device 200 after the cache occupancy falls. The problem of data loss is thus avoided, and the data sender, such as a terminal, is not required to retransmit the data, which reduces the waste of network resources.
In another embodiment, the neighboring server performs data transmission with a second network card 210 having the same structure as the first network card 110, and the shunting network ports 114 of the first network card 110 and the second network card 210 are interconnected to realize normal data circulation. When the second network card 210 receives shunting data from the first network card 110, the shunting module 115 of the second network card 210 transparently transmits the shunting data so that the neighboring server can store it. When the neighboring server itself receives high-traffic data, the shunting module 115 of the second network card 210 intercepts and shunts the data stream entering the second network card 210 and forwards the shunting data to the local server, and the shunting module 115 of the first network card 110 of the local server transparently transmits the shunting data so that the local server can store the shunting data from the neighboring server.
In an embodiment, before forwarding the shunting data to the second device 200, the local server additionally sends a shunting notification, so that the second device 200 responds to the shunting notification by preparing resources to carry the data, for example by preparing a specific storage space.
On the other hand, before acquiring the data to be recovered returned by the second device 200, the local server additionally sends a return notification, so that the second device 200 returns the data to be recovered in response to the return notification.
In an embodiment, the shunting module 115 monitors the data stream at a variable frequency. After the data stream enters the first network card 110 of the local server, the shunting layer monitors the cache state of the first network card 110 at a lower first monitoring frequency, and once the data stream starts to occupy the network card cache, the shunting module 115 raises the monitoring frequency and monitors at a second monitoring frequency greater than the first. This helps improve the response speed, so that the data stream is shunted in time when the cache occupancy reaches the threshold.
The data transmission method provided by the application avoids data loss by adding a shunting module 115, such as a shunting layer, inside the network card. The working principle is as follows: when a data stream enters the first network card 110 of the local server, it is monitored by the shunting layer; when the data stream begins to occupy the cache of the first network card 110, the shunting layer increases the monitoring frequency. When the cache occupancy is found to reach the first cache threshold, for example 50%, the shunting mechanism is started: first, a shunting notification is sent to the other server that will carry the shunting data, so that that server prepares to receive it; then the newly arriving data stream is intercepted and forwarded to the shunting network port of the local server, which is connected through a network cable to the shunting network port of the network card of the other server, and the data is stored in a specific area of the other server, awaiting a recovery instruction from the local server. When the cache of the local server falls to the second cache threshold, for example is emptied, the shunting layer of the first network card 110 sends a return notification to the other server, which then sends the previously stored shunting data back through the shunting network port, and the shunting layer delivers it to the upper protocol stack for normal processing.
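The working principle above can be condensed into a small simulation. Everything here (class names, the capacity of 8, and in-process method calls standing in for the notifications and network-cable transfers) is an illustrative assumption rather than the patented implementation:

```python
class Neighbor:
    """Stands in for the other server that carries the shunting data."""

    def __init__(self):
        self.prepared = False
        self.stored = []

    def on_shunt_notification(self):
        self.prepared = True  # prepare resources to carry shunting data

    def receive_shunted(self, packets):
        assert self.prepared  # data only arrives after the notification
        self.stored.extend(packets)

    def on_return_notification(self):
        packets, self.stored = self.stored, []
        return packets


class LocalCard:
    """Stands in for the first network card 110 with its shunting layer."""

    FIRST_THRESHOLD = 0.5   # start shunting at 50% cache occupancy
    SECOND_THRESHOLD = 0.0  # recover once the cache has been emptied

    def __init__(self, neighbor, capacity=8):
        self.neighbor = neighbor
        self.capacity = capacity
        self.cache = []
        self.shunting = False

    def occupancy(self):
        return len(self.cache) / self.capacity

    def on_packet(self, packet):
        if not self.shunting and self.occupancy() >= self.FIRST_THRESHOLD:
            self.neighbor.on_shunt_notification()    # shunting notification
            self.shunting = True
        if self.shunting:
            self.neighbor.receive_shunted([packet])  # intercept and forward
        else:
            self.cache.append(packet)                # transparent transmission

    def process_one(self):
        # The data processing module frees cache entries as the CPU catches up.
        if self.cache:
            self.cache.pop(0)
        if self.shunting and self.occupancy() <= self.SECOND_THRESHOLD:
            # Return notification: recover the stored data, resume normal flow.
            self.cache.extend(self.neighbor.on_return_notification())
            self.shunting = False
```

Feeding six packets into a card with capacity 8 fills the cache to four entries, shunts the remaining two to the neighbor, and four processing steps later the shunted packets are recovered intact, so nothing is lost and nothing needs to be retransmitted.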
On the other hand, the local server and the neighboring server may use the same type of network card, receiving shunting data from each other and returning it to each other.
According to the data transmission method, the data stream is shunted in time before the cache of the network card is fully occupied, the problem of data loss is avoided, the data sending party is not required to retransmit data, and further the waste of network resources is reduced.
It should be understood that, although the steps in the flowchart of FIG. 4 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not limited to the exact order illustrated and may be performed in other orders. Moreover, at least some of the steps in FIG. 4 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially but may be performed in turns or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, a computer device is provided, which may be a server, and the internal structure thereof may be as shown in fig. 5. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operating system and the computer program to run on the non-volatile storage medium. The database of the computer device is used for storing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a data transmission method.
Those skilled in the art will appreciate that the architecture shown in fig. 5 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
step A, monitoring the data flow of the first network card 110 and acquiring the cache occupancy of the first network card 110;
step B, performing threshold judgment on the cache occupancy: when the cache occupancy is greater than or equal to a first cache threshold, intercepting the data stream to obtain shunted data, and forwarding the shunted data to the second device 200 so that the second device 200 stores it as data to be recovered;
and step C, when the cache occupancy decreases to be less than or equal to a second cache threshold, acquiring the data to be recovered returned by the second device 200.
The computer device performs threshold judgment on the cache occupancy of the network card and intercepts and shunts the data stream entering the network card when the cache occupancy reaches the first cache threshold, so that data is not discarded because the cache is full; after the cache occupancy of the first network card 110 decreases to the second cache threshold, the integrity of the data is ensured by acquiring the data to be recovered returned by the second device 200.
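Steps A-C above amount to threshold judgment with two thresholds acting as a hysteresis band. A minimal sketch follows; the threshold values, class names, and the second-device API (`store`/`retrieve`) are illustrative assumptions, not taken from the patent text.

```python
# Hedged sketch of steps A-C: shunt at the first (upper) threshold, recover at
# the second (lower) threshold. All concrete values and names are hypothetical.

FIRST_CACHE_THRESHOLD = 0.9   # step B: shunt when occupancy rises to/above this
SECOND_CACHE_THRESHOLD = 0.5  # step C: recover when occupancy falls to/below this

class SecondDevice:
    """Stand-in for the second device 200 that stores shunted data."""
    def __init__(self):
        self.held = []

    def store(self, packets):
        self.held.extend(packets)

    def retrieve(self):
        data, self.held = self.held, []
        return data

class ShuntingMonitor:
    """Per-sample decision logic on behalf of the first network card 110."""
    def __init__(self, second_device):
        self.second_device = second_device
        self.shunting = False

    def on_sample(self, cache_occupancy, incoming):
        if cache_occupancy >= FIRST_CACHE_THRESHOLD:
            self.shunting = True
        if self.shunting:
            if cache_occupancy <= SECOND_CACHE_THRESHOLD:
                # Step C: pull back the data to be recovered, then resume.
                self.shunting = False
                return self.second_device.retrieve() + incoming
            # Step B: intercept and forward; nothing enters the local cache.
            self.second_device.store(incoming)
            return []
        return incoming  # transparent pass-through below the first threshold
```

Note the two distinct thresholds: recovering only after occupancy has fallen well below the shunting point prevents the monitor from oscillating between the two modes.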
In one embodiment, the processor, when executing the computer program, further performs the steps of:
when the processor finds that the cache occupancy of the first network card 110 is greater than or equal to the first cache threshold, a shunting notification is sent to the second device 200 so that the second device 200 prepares receiving resources in response to the shunting notification; the processor then intercepts the newly incoming data stream and forwards it to a shunting interface, which is connected to the second device 200 through a network cable, so that the second device 200 stores the shunted data.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
when the processor finds that the cache occupancy has decreased to be less than or equal to the second cache threshold, a return notification is sent to the second device 200, and the second device 200 returns the data to be recovered after receiving the return notification.
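The two notifications above form a small handshake on the second-device side: prepare receiving resources on a shunting notification, then hand everything back on a return notification. A hedged sketch follows; the method names and storage layout are illustrative assumptions, not from the patent.

```python
# Hypothetical second-device handshake: shunting notification -> prepare,
# shunted data -> accumulate, return notification -> hand back and release.

class ShuntPeer:
    """Stand-in for the second device 200."""
    def __init__(self):
        self.receiving_area = None  # allocated only after a shunting notification

    def on_shunting_notification(self):
        # Prepare receiving resources before any shunted data arrives.
        self.receiving_area = []

    def on_shunted_data(self, packets):
        if self.receiving_area is None:
            raise RuntimeError("shunting notification must precede shunted data")
        self.receiving_area.extend(packets)

    def on_return_notification(self):
        # Return the data to be recovered and release the receiving resources.
        data, self.receiving_area = self.receiving_area, None
        return data
```

Requiring the notification before any data arrives mirrors the patent's ordering: the second device allocates its receiving resources first, so no shunted packet finds it unprepared.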
In one embodiment, when the cache occupancy of the first network card 110 is lower than the first cache threshold, the processor, when executing the computer program, transparently transmits the data stream entering the first network card 110 via the transmission port 113 without operating on it.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
the shunted data forwarded by an adjacent network card is received through the shunting interface and stored in the device's own storage module 116.
When the computer device receives high-traffic data, it can take another device as a shunting device and forward its own shunted data to that device; conversely, when the other device receives high-traffic data, the computer device can serve as the shunting device for it and receive its shunted data.
In one embodiment, when executing the computer program, the processor monitors the cache state of the first network card 110 at a first monitoring frequency and determines whether the cache is occupied: if so, it monitors the cache state of the first network card 110 at a second monitoring frequency to obtain the cache occupancy, the second monitoring frequency being greater than the first monitoring frequency so as to improve the shunting response speed.
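The two-tier monitoring above can be sketched as choosing the next sampling interval from the current cache state: a slow rate while the cache is empty, a faster rate once any occupancy is seen. The concrete intervals below are assumptions for illustration only.

```python
# Hypothetical two-tier monitoring: higher frequency = shorter interval.

FIRST_MONITORING_INTERVAL = 1.0    # seconds; first (lower) monitoring frequency
SECOND_MONITORING_INTERVAL = 0.1   # seconds; second (higher) monitoring frequency

def next_sampling_interval(cache_occupied: bool) -> float:
    """Choose the interval before the next cache-state sample."""
    return SECOND_MONITORING_INTERVAL if cache_occupied else FIRST_MONITORING_INTERVAL
```

Sampling slowly while idle keeps the monitoring overhead low, while the switch to the faster rate on first occupancy gives the shunting logic time to react before the cache approaches the first threshold.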
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
step A, determining the occupation condition of the cache of the first network card 110, including monitoring the data flow of the first network card 110 and acquiring the cache occupation amount of the first network card 110;
step B, performing threshold judgment on the cache occupancy: when the cache occupancy is greater than or equal to a first cache threshold, intercepting the data stream to obtain shunted data, and forwarding the shunted data to the second device 200 so that the second device 200 stores it as data to be recovered;
and step C, when the cache occupancy decreases to be less than or equal to a second cache threshold, acquiring the data to be recovered returned by the second device 200.
In one embodiment, the computer program when executed by the processor further performs the steps of:
before forwarding the shunted data to the second device 200, a shunting notification is sent to the second device 200, so that the second device 200 prepares receiving resources, for example a dedicated area for storing the shunted data, in response to the shunting notification.
On the other hand, before the data to be recovered returned by the second device 200 is acquired, a return notification is sent to the second device 200, so that the second device 200 returns the data to be recovered in response to the return notification.
It is to be understood that the data stream is transparently transmitted when the cache occupancy is below the first cache threshold.
In one embodiment, the computer program when executed by the processor further performs the steps of:
monitoring the cache state of the first network card 110 at the first monitoring frequency, and determining whether the cache is occupied:
if so, monitoring the cache state of the first network card 110 with a second monitoring frequency to obtain the cache occupancy, wherein the second monitoring frequency is greater than the first monitoring frequency.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of technical features, it should be considered within the scope of this specification.
The above-mentioned embodiments express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A network card, comprising:
the network card body comprises a network port module, a data processing module and a cache module, wherein the data processing module is in signal connection with the cache module, the network port module comprises a transmission network port and at least one shunting network port, and the shunting network port is used for shunting and returning shunted data;
the shunting module is used for monitoring the data flow between the transmission network port and the data processing module and acquiring the cache occupancy of the cache module, wherein, when the cache occupancy is greater than or equal to a first cache threshold, the shunting module intercepts the data flow to obtain shunted data and forwards the shunted data through the shunting network port; or, when the cache occupancy is less than or equal to a second cache threshold, the shunting module acquires the returned shunted data through the shunting network port.
2. The network card of claim 1, further comprising a storage module for receiving shunted data forwarded by an adjacent network card, wherein the storage module is in signal connection with the data processing module, and the adjacent network card is in signal connection with the network card body through the shunting network port.
3. A data transmission system, comprising:
the first device comprises a first network card, wherein the first network card comprises a network port module, a data processing module and a cache module, the data processing module is in signal connection with the cache module, and the network port module comprises a transmission network port and at least one shunting network port; the first network card further comprises a shunting module for monitoring the data flow between the transmission network port and the data processing module and acquiring the cache occupancy of the cache module;
the second device is in signal connection with the first device through the shunting network port;
when the cache occupancy is greater than or equal to a first cache threshold, the shunting module intercepts the data stream to obtain shunted data and forwards the shunted data to the second device through the shunting network port; or, when the cache occupancy is less than or equal to a second cache threshold, the shunting module acquires the shunted data stored in the second device through the shunting network port.
4. A method of data transmission, comprising:
monitoring the data flow of the first network card, and acquiring the cache occupancy of the first network card;
when the cache occupancy is greater than or equal to a first cache threshold, intercepting the data stream to obtain shunted data;
forwarding the shunted data to a second device so that the second device stores the shunted data to obtain data to be recovered;
and when the cache occupancy decreases to be less than or equal to a second cache threshold, acquiring the data to be recovered returned by the second device.
5. The data transmission method according to claim 4, wherein before forwarding the split data to the second device, further comprising:
sending a shunting notification, so that the second device prepares receiving resources in response to the shunting notification.
6. The data transmission method according to claim 4, wherein before acquiring the data to be recycled returned by the second device, the method further comprises:
sending a return notification, so that the second device returns the data to be recovered in response to the return notification.
7. The data transmission method according to claim 4, wherein, when the cache occupancy is lower than the first cache threshold, the data stream is transparently transmitted to implement data transmission of the first network card.
8. The data transmission method according to claim 4, wherein the acquiring the cache occupancy of the first network card comprises:
monitoring the cache state of the first network card at a first monitoring frequency, and determining whether the cache is occupied:
if so, monitoring the cache state of the first network card at a second monitoring frequency to acquire the cache occupancy, wherein the second monitoring frequency is greater than the first monitoring frequency.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 4 to 8 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 4 to 8.
CN202211138632.5A 2022-09-19 2022-09-19 Network card, data transmission system, method, computer equipment and storage medium Pending CN115550080A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211138632.5A CN115550080A (en) 2022-09-19 2022-09-19 Network card, data transmission system, method, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115550080A (en) 2022-12-30

Family

ID=84727913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211138632.5A Pending CN115550080A (en) 2022-09-19 2022-09-19 Network card, data transmission system, method, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115550080A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070070762A1 (en) * 2005-09-13 2007-03-29 Sung-Yang Wu Method and apparatus for optimizing data buffering
CN106970827A (en) * 2017-03-27 2017-07-21 联想(北京)有限公司 Information processing method, information processor, electronic equipment
WO2021258696A1 (en) * 2020-06-24 2021-12-30 珠海格力电器股份有限公司 Communication data processing method and apparatus, computer device and storage medium
CN114189477A (en) * 2021-10-22 2022-03-15 新华三大数据技术有限公司 Message congestion control method and device
CN114253456A (en) * 2020-09-21 2022-03-29 深圳市茁壮网络股份有限公司 Cache load balancing method and device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination