CN117318892A - Computing system, data processing method, network card, host computer and storage medium - Google Patents


Info

Publication number
CN117318892A
CN117318892A (application CN202311598344.2A)
Authority
CN
China
Prior art keywords
data
processing unit
application data
target application
target
Prior art date
Legal status
Granted
Application number
CN202311598344.2A
Other languages
Chinese (zh)
Other versions
CN117318892B (en)
Inventor
杨震旦
程曙光
Current Assignee
Alibaba Cloud Computing Ltd
Original Assignee
Alibaba Cloud Computing Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Cloud Computing Ltd filed Critical Alibaba Cloud Computing Ltd
Priority to CN202311598344.2A priority Critical patent/CN117318892B/en
Publication of CN117318892A publication Critical patent/CN117318892A/en
Application granted granted Critical
Publication of CN117318892B publication Critical patent/CN117318892B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00 Arrangements for detecting or preventing errors in the information received
    • H04L 1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L 1/0056 Systems characterized by the type of code used
    • H04L 1/0061 Error detection codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/08 Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F 11/10 Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F 11/1004 Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's to protect a block of data words, e.g. CRC or checksum
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 Handling requests for interconnection or transfer
    • G06F 13/20 Handling requests for interconnection or transfer for access to input/output bus
    • G06F 13/28 Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F 13/4282 Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 Digital computers in general; Data processing equipment in general
    • G06F 15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F 15/163 Interprocessor communication
    • G06F 15/173 Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/02 Details
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/22 Parsing or analysis of headers

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer And Data Communications (AREA)

Abstract

Embodiments of the present application provide a computing system, a data processing method, a network card, a host, and a storage medium. A hardware processing unit is added to the network card to perform check code calculation on the target application data to be transmitted. On the one hand, offloading the check code calculation of the target application data to hardware both frees the serial processing resources of the host and provides hardware acceleration of the check code calculation, improving its processing efficiency. On the other hand, because the network card lies on the transmission link of the target application data, computing the check code in the hardware processing unit of the network card realizes on-path hardware offloading of the check code calculation, which shortens the data link of the application data and further improves the check code calculation efficiency.

Description

Computing system, data processing method, network card, host computer and storage medium
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a computing system, a data processing method, a network card, a host, and a storage medium.
Background
Consistency and correctness of stored data are critical to the storage services of a cloud computing platform. During data transmission and storage, the data must be guaranteed to be transmitted and stored accurately, which is an important factor in ensuring the validity and reliability of the storage service. Current industry practice is to perform check code calculation on the data to be transmitted at the transmitting end and to perform an accuracy check on the received data at the receiving end, so as to ensure the correctness of the data.
In existing schemes, the check code calculation of the data to be transmitted and the accuracy check of the received data are usually performed by the central processing unit (CPU) of the computing device. Calculating and checking the check code of the data in CPU software yields low processing efficiency.
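For concreteness, the software check-code path described above can be sketched as a table-driven CRC routine. This is an illustrative sketch only: the embodiments do not fix a particular check code, and CRC-32 with the IEEE 802.3 polynomial is assumed here.

```python
# Illustrative software CRC-32 (IEEE 802.3 reflected polynomial 0xEDB88320).
# The polynomial and parameters are assumptions for illustration; the
# application text only speaks of "check code calculation" in general.

def _make_crc32_table() -> list:
    poly = 0xEDB88320  # reflected form of 0x04C11DB7
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

_CRC32_TABLE = _make_crc32_table()

def crc32_sw(data: bytes, crc: int = 0) -> int:
    """Compute a running CRC-32 in software; every payload byte passes
    through the CPU, which is the per-message overhead that hardware
    offloading removes."""
    crc ^= 0xFFFFFFFF
    for b in data:
        crc = (crc >> 8) ^ _CRC32_TABLE[(crc ^ b) & 0xFF]
    return crc ^ 0xFFFFFFFF
```

Because the loop touches every payload byte, the CPU cost grows linearly with network throughput, which is the inefficiency the following aspects address.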
Disclosure of Invention
Aspects of the present application provide a computing system, a data processing method, a network card, a host, and a storage medium, which implement hardware offloading of data check code calculation and help improve the efficiency of data check code calculation.
In a first aspect, embodiments of the present application provide a computing system comprising: a host and a network card; the host is in communication connection with the network card; the network card comprises: a hardware processing unit and a network interface; the computing system further includes: a serial processing unit; the serial processing unit is arranged on the network card or the host; the serial processing unit is in communication connection with the hardware processing unit;
the serial processing unit is operated with a network protocol stack and is used for acquiring the memory address information of the target application data of the application program and generating a network protocol header through the network protocol stack; providing the memory address information of the target application data and the network protocol header to the hardware processing unit;
The hardware processing unit is used for reading the target application data from the memory of the host by means of Direct Memory Access (DMA) according to the memory address information of the target application data, and calculating a check code of the target application data; assembling the target application data, the check code of the target application data, and the network protocol header into a first message; and sending the first message to a receiving end through the network interface.
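The transmit path of the hardware processing unit described in the first aspect can be modeled in a few lines. This is a hedged sketch, not the claimed implementation: the bytes object `memory` stands in for the host memory read by DMA, `zlib.crc32` stands in for the hardware check-code engine, and the header, payload, CRC-trailer layout of the first message is an assumption.

```python
# Sketch of the hardware unit's transmit path: read the target application
# data (modeled as a slice of a bytes "memory"), compute its check code,
# and assemble header + payload + check code into the first message.
# The on-wire layout below is assumed, not specified by the embodiments.
import struct
import zlib

def assemble_first_message(memory: bytes, addr: int, length: int,
                           protocol_header: bytes) -> bytes:
    payload = memory[addr:addr + length]   # stands in for the DMA read
    check_code = zlib.crc32(payload)       # stands in for the hardware CRC engine
    # assumed layout: header | payload | 4-byte big-endian CRC trailer
    return protocol_header + payload + struct.pack(">I", check_code)
```

A receiving end that knows the header length can strip the 4-byte trailer and re-run the same CRC over the payload to check it.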
Optionally, the hardware processing unit or the serial processing unit is further configured to:
receiving a target message sent by a sending end through the network interface;
acquiring effective load data and a check code of the effective load data from the target message;
and utilizing the check code of the payload data to check the accuracy of the payload data.
In a second aspect, an embodiment of the present application further provides a data processing method, which is applicable to a hardware processing unit on a network card, where the network card is used for communication connection with a host; the method comprises the following steps:
acquiring memory address information and a network protocol header of target application data of an application program provided by a serial processing unit; the serial processing unit is arranged on the host or the network card;
reading the target application data from the memory of the host by means of Direct Memory Access (DMA) according to the memory address information of the target application data;
calculating a check code of the target application data; assembling the target application data, the check code of the target application data and the network protocol header into a first message;
and sending the first message to a receiving end through a network interface of the network card.
In a third aspect, an embodiment of the present application further provides a data processing method, which is applicable to a serial processing unit on a network card or a host, where the network card is communicatively connected with the host; the method comprises the following steps:
acquiring memory address information of target application data of an application program;
generating a network protocol header through the running network protocol stack;
providing the memory address information of the target application data and the network protocol header to a hardware processing unit of the network card, so that the hardware processing unit can read the target application data from the memory of the host by means of Direct Memory Access (DMA) according to the memory address information of the target application data and calculate a check code of the target application data; assemble the target application data, the check code of the target application data, and the network protocol header into a first message; and transmit the first message to a receiving end through a network interface of the network card.
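The hand-off in the third aspect, in which the serial processing unit passes the memory address information of the target application data to the hardware processing unit, can be pictured as a small descriptor. The 16-byte layout and field names below are illustrative assumptions; the embodiments do not define a descriptor format.

```python
# Hypothetical descriptor for the serial-unit-to-hardware-unit hand-off.
# Assumed layout: 8-byte little-endian memory address, 4-byte length,
# 4 reserved bytes. Real hardware would define its own format.
import struct

def build_descriptor(mem_addr: int, length: int) -> bytes:
    return struct.pack("<QII", mem_addr, length, 0)

def parse_descriptor(desc: bytes) -> tuple:
    addr, length, _reserved = struct.unpack("<QII", desc)
    return addr, length
```

The network protocol header generated by the protocol stack would travel alongside such a descriptor, leaving the hardware unit free to DMA the payload and assemble the message without further software involvement.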
In a fourth aspect, an embodiment of the present application further provides a data processing method, which is applicable to a hardware processing unit on a network card, including:
acquiring a target message received by a network interface of the network card;
acquiring effective load data and a check code of the effective load data from the target message;
and utilizing the check code of the payload data to check the accuracy of the payload data.
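The fourth aspect's receive path reduces to splitting the target message and comparing check codes. The sketch below assumes a simple layout (a fixed-length header, then the payload data, then a 4-byte big-endian CRC-32 trailer); the embodiments do not fix a message layout.

```python
# Receive-path sketch: take the target message, split out the payload data
# and its check code, and verify. The trailer layout is an assumption.
import struct
import zlib

def verify_target_message(message: bytes, header_len: int) -> bool:
    payload = message[header_len:-4]
    (check_code,) = struct.unpack(">I", message[-4:])
    return zlib.crc32(payload) == check_code
```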
In a fifth aspect, embodiments of the present application further provide a network card, where the network card includes: a hardware processing unit and a network interface; the network card is used for being in communication connection with a host;
the hardware processing unit is in communication connection with the serial processing unit; the serial processing unit is arranged on the network card or the host;
the hardware processing unit is configured to perform the steps in the data processing method provided in the second aspect and/or the fourth aspect;
the serial processing unit is configured to perform the steps in the data processing method provided in the third aspect.
In a sixth aspect, embodiments of the present application further provide a host, including: a memory and a processor; wherein the memory is used for storing a computer program; the host is used for being in communication connection with the network card;
The processor is coupled to the memory for executing the computer program for performing the steps of the data processing method provided in the third aspect described above.
In a seventh aspect, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps in the data processing method provided in the second and/or third and/or fourth aspects above.
In the embodiments of the present application, a hardware processing unit is added to the network card to perform check code calculation on the target application data to be transmitted. On the one hand, offloading the check code calculation of the target application data to hardware both frees the serial processing resources of the host and provides hardware acceleration of the check code calculation, improving its processing efficiency. On the other hand, because the network card lies on the transmission link of the target application data, computing the check code in the hardware processing unit of the network card realizes on-path hardware offloading of the check code calculation, which shortens the data link of the application data and further improves the check code calculation efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIGS. 1a, 1b, 2a and 2b are schematic structural diagrams of computing devices in conventional storage systems;
FIGS. 3a, 3b, 4a and 4b are schematic structural diagrams of a computing system according to an embodiment of the present application;
fig. 5 is a schematic diagram of a data slicing principle provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of a data processing procedure according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a data segmentation result provided in an embodiment of the present application;
fig. 8a and fig. 8b are schematic diagrams of a data processing procedure when a computing system provided in an embodiment of the present application is used as a receiving end;
FIG. 8c is a flowchart illustrating a data processing method according to an embodiment of the present disclosure;
FIGS. 9 and 10 are flow diagrams of other data processing methods according to embodiments of the present application;
fig. 11 is a schematic structural diagram of a network card according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a host according to an embodiment of the present application.
Detailed Description
For the purposes, technical solutions and advantages of the present application, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Fig. 1a and fig. 1b are schematic structural diagrams of a conventional storage system. As shown in fig. 1a and fig. 1b, the storage system includes a computing device on the user side and a storage service device. The user-side computing device and the storage service device may each include a serial processing unit and a network card. The serial processing unit is typically a central processing unit (CPU). The serial processing unit and the network card may be communicatively coupled via a data bus. The data bus may be a serial interface data bus such as, but not limited to, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, an RS485 interface, or an RS232 interface.
The serial processing unit 101 of the user-side computing device runs an application. An application refers to a computer program running at the application layer to accomplish a particular job or jobs. The application data generated by the application program is user data; it is generally encapsulated in the payload part of a message and may be called payload data. To ensure data accuracy, the transmitting end generally performs check code calculation on the data to be transmitted, and the receiving end performs an accuracy check on the received data.
The serial processing units of the user-side computing device and the storage service device may run the storage service. In the computing device shown in fig. 1a, the check code calculation and check of the application data to be transmitted, as well as the network protocol stack, are executed by the serial processing unit. As shown in fig. 1a, when the user-side computing device needs to write data, the serial processing unit 101 may act as the transmitting end and obtain the target application data to be transmitted of the application program; perform cyclic redundancy check (CRC) calculation on the target application data through the storage service run by the serial processing unit 101 to obtain the CRC check code of the target application data; then encapsulate the target application data and the CRC check code through the network protocol stack to obtain the message to be transmitted; and then send the message to be transmitted, through the network card 201, to the storage service device serving as the receiving end.
For the storage service device, the network card 202 may receive the message to be transmitted, and transmit the message to be transmitted to the serial processing unit 102 of the storage service device. The serial processing unit 102 performs protocol analysis on the message to be transmitted through the network protocol stack to obtain the target application data and the CRC check code of the target application data. Thereafter, the serial processing unit 102 performs accuracy check on the target application data by the storage service using the CRC check code.
Specifically, the serial processing unit 102 may calculate the CRC check code of the target application data in the same manner as the transmitting end. It then compares the calculated CRC check code with the CRC check code parsed from the message to be transmitted; if the two check codes are the same, the target application data is determined to pass the accuracy check; if they differ, the target application data is determined to fail the accuracy check.
For target application data that passes the accuracy check, the target application data may be stored to a storage medium through a storage service.
As shown in fig. 1b, when the user-side computing device needs to read data, the storage service device may act as the transmitting end, and the serial processing unit 102 may obtain the application data to be read by the application program in the computing device as the target application data to be transmitted; perform CRC calculation on the target application data through the storage service run by the serial processing unit 102 to obtain the CRC check code of the target application data; then encapsulate the target application data and the CRC check code through the network protocol stack to obtain the message to be transmitted; and then send the message to be transmitted, through the network card 202, to the user-side computing device serving as the receiving end.
For the user-side computing device, the network card 201 may receive the message to be transmitted and transmit it to the serial processing unit 101 of the computing device. The serial processing unit 101 performs protocol analysis on the message through the network protocol stack to obtain the target application data and its CRC check code. Thereafter, the serial processing unit 101 performs an accuracy check on the target application data through the storage service using the CRC check code.
For target application data that passes the accuracy check, the target application data may be provided to the application program for use.
The CRC calculation and CRC check in the storage systems of fig. 1a and fig. 1b are implemented in serial processing unit software and consume resources of the serial processing unit of the computing device, such as CPU resources. When network throughput is high, performing check code calculation and checking for each message introduces significant CPU resource overhead, and the processing efficiency of check code calculation and checking is low. In addition, in scenarios where the data path is directly connected to hardware, the service run by the serial processing unit can obtain only some metadata and cannot perform check code calculation and checking on the complete target application data.
To improve processing performance and reduce processing delay, some conventional schemes introduce dedicated hardware into the computing device and implement CRC check code calculation and verification through software-hardware bypass interaction based on this dedicated hardware, as shown in fig. 2a and fig. 2b. Specifically, as shown in fig. 2a, when the user-side computing device needs to write data, the storage service in the serial processing unit 101 may act as the transmitting end; after receiving the target application data, the storage service transmits the memory start address and the length of the target application data to the dedicated hardware 301.
The dedicated hardware 301 reads the target application data from the memory 401 of the computing device according to the memory start address and the length of the target application data, and calculates the CRC check code of the target application data. The dedicated hardware 301 then sends the CRC check code of the target application data to the storage service in the serial processing unit 101. Further, the serial processing unit 101 supplies the target application data and its CRC check code to the network protocol stack running in the serial processing unit 101 through the storage service, and encapsulates them into the message to be transmitted through the network protocol stack. The message to be transmitted is then sent to the receiving end through the network card 201.
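The dedicated hardware's job here, computing a check code over a memory region identified by a start address and a length, naturally proceeds chunk by chunk, since a DMA engine moves data in bursts. The sketch below models that with the running-value form of `zlib.crc32`; the 256-byte burst size is an arbitrary illustration, not something the embodiments specify.

```python
# Sketch of check code calculation over a memory region given its start
# address and length, accumulated burst by burst as a DMA engine would
# move the data. Burst size is illustrative.
import zlib

def crc_over_region(memory: bytes, start: int, length: int,
                    burst: int = 256) -> int:
    crc = 0
    end = start + length
    for off in range(start, end, burst):
        chunk = memory[off:min(off + burst, end)]
        crc = zlib.crc32(chunk, crc)  # running CRC across bursts
    return crc
```

Accumulating per burst yields the same check code as a single pass over the whole region, so the hardware never needs to buffer the complete payload.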
For the storage service device as the receiving end, the network card 202 receives the message to be transmitted and sends it to the serial processing unit 102. The serial processing unit 102 performs protocol analysis on the message through the network protocol stack to obtain the target application data and its CRC check code, which are then stored to the memory 402. Further, the serial processing unit 102 sends the memory start addresses and lengths of the target application data and of its CRC check code to the dedicated hardware 302 through the storage service.
The dedicated hardware 302 obtains the target application data and its CRC check code from the memory 402 according to their memory start addresses and lengths, and performs an accuracy check on the target application data according to the CRC check code. If the check passes, the dedicated hardware 302 sends the check result to the storage service. If the check result is a pass, the storage service may store the target application data to the storage medium.
As shown in fig. 2b, when the user-side computing device needs to read data, the storage service device may serve as the transmitting end, and the storage service in the serial processing unit 102 obtains, from the storage medium, the application data to be read by the user-side computing device as the target application data. The storage service in the serial processing unit 102 then sends the memory start address and length of the target application data to the dedicated hardware 302.
The dedicated hardware 302 reads the target application data from the memory 402 of the storage service device according to the memory start address and length of the target application data, and calculates the CRC check code of the target application data. The dedicated hardware 302 then sends the CRC check code of the target application data to the storage service in the serial processing unit 102. Further, the serial processing unit 102 provides the target application data and its CRC check code to the network protocol stack running in the serial processing unit 102 through the storage service, and encapsulates them into the message to be transmitted through the network protocol stack. The message to be transmitted is then sent to the receiving end through the network card 202.
For the computing device as the receiving end, its network card 201 receives the message to be transmitted and sends it to the serial processing unit 101. The serial processing unit 101 performs protocol analysis on the message through the network protocol stack to obtain the target application data and its CRC check code, which are then stored to the memory 401. Further, the serial processing unit 101 sends the memory start addresses and lengths of the target application data and of its CRC check code to the dedicated hardware 301 through the storage service.
The dedicated hardware 301 obtains the target application data and its CRC check code from the memory 401 according to their memory start addresses and lengths, and performs an accuracy check on the target application data according to the CRC check code. If the check passes, the dedicated hardware 301 sends the check result to the storage service. If the check result is a pass, the storage service may provide the target application data to the application program for use.
In the CRC check code calculation and check processes of fig. 2a and fig. 2b, which realize software-hardware bypass interaction based on dedicated hardware, the transmitting end requires the dedicated hardware to acquire the target application data and perform the check code calculation before the target application data reaches the network card, after which the serial processing unit sends the calculated check code to the network card; this increases the number of software-hardware interactions and the length of the data link, resulting in lower processing efficiency of the check code calculation. Similarly, at the receiving end, the target application data must be stored in the memory and checked for accuracy by the dedicated hardware before it can be used, which likewise increases the number of software-hardware interactions and the length of the data link, so that the check efficiency of the target application data is lower.
In order to solve the above technical problems, some embodiments of the present application provide an on-path (channel-associated) check code calculation offloading method. Specifically, a hardware processing unit is added to the network card to perform check code calculation on the target application data to be transmitted. On the one hand, offloading the check code calculation of the target application data to hardware both frees the serial processing resources of the host and provides hardware acceleration of the check code calculation, improving its processing efficiency. On the other hand, because the network card lies on the transmission link of the target application data, computing the check code in the hardware processing unit of the network card realizes on-path hardware offloading of the check code calculation, which shortens the data link of the application data and further improves the check code calculation efficiency.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
It should be noted that like reference numerals denote like objects in the following figures and embodiments; therefore, once an object has been defined in one figure or embodiment, it is not discussed again in subsequent figures and embodiments.
Fig. 3a, fig. 3b, fig. 4a, and fig. 4b are schematic structural diagrams of a computing system according to an embodiment of the present application. With reference to fig. 3a, 3b, 4a, and 4b, the computing system may include a host S10 and a network card 20. The host S10 may be implemented as any computer device with computing and storage functions, such as a server or a terminal device. The terminal device may be a computer, a workstation, a mobile phone, or the like.
The host S10 may include: a serial processing unit 10a, a memory 40, and the like. In the present embodiment, the number of serial processing units 10a is not limited; there may be one or more, where "plural" means two or more. Each serial processing unit 10a may be a single-core or a multi-core processing unit.
In this embodiment, the serial processing unit 10a is generally a processing chip disposed on a motherboard of the host S10, such as a central processing unit (Central Processing Unit, CPU) of the host S10. The CPU may be a separate Chip, a CPU integrated in a System on Chip (SoC), a CPU integrated in a micro control unit (Microcontroller Unit, MCU), or the like.
The network card 20 is a network card provided with a hardware processing unit 20a. The hardware processing unit 20a may be a hardware processor described in a hardware description language (Hardware Description Language, HDL), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC). The hardware description language may be the Very High Speed Integrated Circuit Hardware Description Language (VHDL), Verilog HDL, SystemVerilog, SystemC, or the like. Accordingly, the hardware processing unit 20a may be a field-programmable gate array (Field-Programmable Gate Array, FPGA), a programmable array logic device (Programmable Array Logic, PAL), a generic array logic device (Generic Array Logic, GAL), a complex programmable logic device (Complex Programmable Logic Device, CPLD), or the like.
In the present embodiment, the host S10 is communicatively connected to the network card 20. In some embodiments, the network card 20 may be integrated on the motherboard of the host S10, or may be removably mounted on the host S10 and communicatively connected to the motherboard of the host S10 (specifically, to the serial processing unit 10a). Alternatively, the network card 20 may be mounted on the host S10 through a bus interface. The bus interface may be a serial bus interface, such as, but not limited to, a peripheral component interconnect express (Peripheral Component Interconnect Express, PCIe) interface, a PCI interface, an ultra path interconnect (Ultra Path Interconnect, UPI) interface, a universal serial bus (Universal Serial Bus, USB) interface, an RS485 interface, or an RS232 interface. Preferably, the bus interface is a PCIe bus interface, which can increase the data transmission rate between the host S10 and the network card 20.
The bus interfaces of the host S10 can be extended according to the specification of the host S10; generally, the host S10 has a plurality of communication interfaces, where "plural" means two or more. When the network card 20 and the host S10 are communicatively connected through a bus interface, there may be a plurality of network cards 20, so as to realize expansion of the network cards 20.
In this embodiment, in conjunction with fig. 3a, 3b, 4a, and 4b, the computing system may further include: a serial processing unit 10. As shown in fig. 4a and 4b, the serial processing unit 10 may include: the serial processing unit 10a (defined as a first serial processing unit 10 a) on the host S10. Alternatively, as shown in fig. 3a and 3b, the serial processing unit 10 includes: the serial processing unit 10a on the host S10 and the serial processing unit 10b (defined as the second serial processing unit 10 b) on the network card 20. Regarding the implementation of the second serial processing unit 10b, reference may be made to the relevant content of the implementation of the serial processing unit on the host S10, which is not described herein.
In this embodiment, with reference to fig. 3a, 3b, 4a and 4b, the serial processing unit 10 runs a network protocol stack. The network protocol stack is an important component of a computer network and is responsible for passing and processing network data packets between different protocol layers. As shown in fig. 3a and 3b, the serial processing unit 10 running the network protocol stack may be the second serial processing unit 10b on the network card 20. As shown in fig. 4a and 4b, the serial processing unit 10 running the network protocol stack may be the first serial processing unit 10a on the host S10.
In this embodiment, as shown in fig. 3b and 4b, when the computing system is implemented as a computing device on the user side, the host S10 runs an application program. Specifically, the application runs on the first serial processing unit 10a on the host S10 side. The application program needs to send and/or receive data through the network card 20. In the embodiment of the present application, data that an application program needs to send or receive is collectively referred to as application data.
The network card 20 may transmit and/or receive application data through a network interface 20b of the network card. In this embodiment, the communication components in the network interface 20b are configured to facilitate wired or wireless communication between the device in which they are located and other devices. The device in which the communication component is located may access a wireless network based on a communication standard, such as wireless fidelity (Wireless Fidelity, WiFi), 2G, 3G, 4G, 5G, or a combination thereof. In one exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component may also be implemented based on near field communication (Near Field Communication, NFC) technology, radio frequency identification (Radio Frequency Identification, RFID) technology, infrared data association (Infrared Data Association, IrDA) technology, ultra wide band (Ultra Wide Band, UWB) technology, Bluetooth (BT) technology, or other technologies.
In this embodiment, the computing system may be implemented as a transmitting end, configured to transmit application data of an application program; and the system can also be realized as a receiving end for receiving application data sent to an application program by other equipment. The following describes exemplary data processing manners provided in the embodiments of the present application from the perspective of implementing the computing system as a transmitting end and a receiving end, respectively.
With reference to fig. 3a, 3b, 4a and 4b, when the computing system is implemented as a transmitting end, the serial processing unit 10 (the first serial processing unit 10a or the second serial processing unit 10 b) may obtain the memory address information of the target application data of the application program (corresponding to step 1 in fig. 3a, 3b, 4a and 4 b). The target application data is application data to be transmitted. The memory address information of the target application data, for identifying the storage location of the target application data in the memory 40 of the host S10, may include: the memory start address of the target application data and the length of the target application data.
Specifically, in conjunction with fig. 3a, 3b, 4a, and 4b, the serial processing unit 10 (the first serial processing unit 10a or the second serial processing unit 10 b) runs a target service that processes application data of an application program. The target service may be a storage service, a computing service, a communication service, a data processing service, or the like. The target application data to be transmitted is generally determined according to the requirements of these services. The second serial processing unit 10b on the network card 20 is illustrated in fig. 3a and 3b as the serial processing unit running the target service and network protocol stack, and the first serial processing unit 10a on the host side is illustrated in fig. 4a and 4b as the serial processing unit running the target service and network protocol stack. Of course, the target service and network protocol stack may also run on the first serial processing unit 10a on the host S10; or, the target service runs on the first serial processing unit 10a of the host S10, and the network protocol stack runs on the second serial processing unit 10b of the network card 20; alternatively, the network protocol stack runs on the first serial processing unit 10a on the host S10, the target service runs on the second serial processing unit 10b on the network card 20, etc.
Preferably, if the computing system is implemented as a computing device on the user side, the target service and the network protocol stack both run on the second serial processing unit 10b on the network card 20 (as shown in fig. 3b). In this way, the target service and the network protocol stack are offloaded from the host to the network card, which reduces their resource consumption on the first serial processing unit 10a of the host S10, for example the CPU resource consumption on the host S10, and saves the processing resources of the host S10.
If the computing system is implemented as a computing device that provides the target service (defined as a target service device), such as a storage device providing a storage service, the target service and the network protocol stack both run on the first serial processing unit 10a on the host S10 (as shown in fig. 4a). This is mainly because the CPU resources on the host S10 side of a target service device need not be provided to users for running application programs and can therefore be used to run the target service and the network protocol stack.
In this embodiment, referring to fig. 3a, 3b, 4a and 4b, the serial processing unit 10 (the first serial processing unit 10a or the second serial processing unit 10 b) may obtain the memory address information of the target application data of the application program through the target service (corresponding to step 1 in fig. 3a, 3b, 4a and 4 b); the network protocol header may also be generated by the network protocol stack (corresponding to step 2 in fig. 3a, 3b, 4a and 4 b). The network protocol header is obtained by processing the network protocol stack between different protocol layers.
The computing systems shown in fig. 3b and 4b are typically implemented as user-side computing devices for providing application data to a target service device, requesting the target service device to process the application data. For example, if the target service is a storage service, the computing device on the user side may write data to the storage device, i.e., send target application data to the target service device, and the target service device stores the target application data. Accordingly, with reference to fig. 3b and fig. 4b, the serial processing unit 10 (the first serial processing unit 10a or the second serial processing unit 10b) may acquire the application data to be transmitted of the application program as the target application data, and obtain the memory address information of the target application data (corresponding to step 1 in fig. 3b and fig. 4b).
The computing systems shown in fig. 3a and 4a are generally implemented as target service devices for providing target services. The computing device at the user side can request target application data from the target service device according to the requirement of the target service, and the target service device can read the application data requested by the computing device at the user side from the stored target application data as target application data. Accordingly, referring to fig. 3a and fig. 4a, the serial processing unit 10 (the first serial processing unit 10a or the second serial processing unit 10 b) may obtain, from the stored application data, the application data requested to be read by the computing device on the user side as the target application data; and obtains the memory address information of the target application data (corresponding to step 1 in fig. 3a and fig. 4 a).
Further, the serial processing unit 10 (the first serial processing unit 10a or the second serial processing unit 10 b) may provide the memory address information of the target application data and the network protocol header to the hardware processing unit 20a (corresponding to step 2 in fig. 3a, 3b, 4a and 4 b).
The hardware processing unit 20a may read the target application data from the memory 40 of the host by using a direct memory access (Direct Memory Access, DMA) manner according to the memory address information of the target application data (corresponding to steps 3 and 4 in fig. 3a, 3b, 4a and 4 b); and calculates the check code of the target application data (corresponding to step 5 in fig. 3a, 3b, 4a and 4 b).
In the embodiment of the present application, the specific implementation manner of calculating the check code of the target application data by the hardware processing unit 20a is not limited. In some embodiments, the hardware processing unit 20a may calculate the check code of the target application data using a check algorithm. The check algorithm may be a cyclic redundancy check (Cyclic Redundancy Check, CRC) algorithm, a parity algorithm, an exclusive-or check algorithm, or a message-digest (Message-Digest, MD) algorithm such as the MD5 check algorithm, or the like.
For the CRC algorithm, the CRC algorithm can be applied to the target application data to obtain a CRC check code of the target application data, i.e., the check code of the target application data.
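As an illustrative software sketch of this step (the embodiments do not limit the check algorithm or its hardware implementation), the following Python fragment computes a CRC-32 check code with the standard `zlib` module; the concrete polynomial and code width are assumptions made here for illustration only:

```python
import zlib

def compute_check_code(data: bytes) -> int:
    """Compute a CRC-32 check code over the target application data.

    CRC-32 as implemented by zlib is used purely for illustration;
    the hardware processing unit on the network card may implement a
    different CRC polynomial or width.
    """
    return zlib.crc32(data) & 0xFFFFFFFF  # mask to an unsigned 32-bit value

payload = b"target application data"
crc = compute_check_code(payload)
# Recomputing over the same bytes yields the same code, which is
# exactly what the receiving end relies on during its accuracy check.
assert crc == compute_check_code(payload)
```

The sender and receiver must use the same algorithm; any stable function of the payload bytes would serve the same role in this sketch.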
Further, the hardware processing unit 20a may assemble the target application data, the check code of the target application data, and the network protocol header into a message to be sent (defined as a first message) (corresponding to step 6 in fig. 3a, 3b, 4a, and 4 b). Specifically, the hardware processing unit 20a may assemble the network protocol header, the target application data, and the check code of the target application data into the first message according to the message format. Further, the hardware processing unit 20a may send the first message to the receiving end through the network interface 20b (corresponding to step 7 in fig. 3a, 3b, 4a and 4 b).
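The assembly of the first message can be sketched as follows; the concrete layout (network protocol header, then application data, then a 4-byte big-endian CRC-32 tail) is an assumed message format for illustration and not the message format defined by the embodiments:

```python
import struct
import zlib

def assemble_message(protocol_header: bytes, app_data: bytes) -> bytes:
    """Assemble a first message as: protocol header | application data
    | 4-byte check code. The tail-appended big-endian CRC-32 is an
    assumed layout used only to make the sketch concrete."""
    check_code = zlib.crc32(app_data) & 0xFFFFFFFF
    return protocol_header + app_data + struct.pack(">I", check_code)

header = b"\x45\x00"  # placeholder protocol header bytes (hypothetical)
data = b"hello"
msg = assemble_message(header, data)
assert msg[:2] == header and msg[2:7] == data  # header and payload in place
```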
In this embodiment, a hardware processing unit is added to the network card to perform check code calculation on the target application data to be transmitted. On the one hand, offloading the check code calculation of the target application data to hardware not only releases the serial processing resources of the host, but also achieves hardware acceleration of the check code calculation, improving its processing efficiency. On the other hand, because the network card sits on the transmission link of the target application data, calculating the check code of the target application data on the hardware processing unit of the network card realizes channel-associated hardware offloading of the check code calculation, which shortens the data link of the application data and further improves the efficiency of check code calculation for application data.
In the embodiment of the application, the data sent by the target service of the sending end to the target service of the receiving end includes, besides the application data, metadata of the target service, such as a protocol header of the protocol followed by the target service. For example, if the target service is a storage service, the metadata of the storage service may include a protocol header of the storage protocol followed by the storage service. Thus, the target service also provides the metadata of the target service to the network protocol stack. However, the network protocol stack and the hardware processing unit cannot perceive which part of the incoming data is application data and which part is metadata of the target service. As a result, the metadata of the target service would be mixed with the target application data for check code calculation, which the target service generally does not want.
On the other hand, when the network protocol layer transmits data, the incoming metadata of the target service and the target application data are sliced together according to parameters such as the maximum transmission unit (Maximum Transmission Unit, MTU) of the network card. A given data slice may then contain both metadata of the target service and part of the application data, destroying the original data layout of the upper layer. Therefore, if the network protocol stack and the hardware processing unit can distinguish the metadata of the target service from the application data, this technical problem can be solved.
In this embodiment, in order to enable the network protocol stack and the hardware processing unit to distinguish the metadata of the target service from the application data, as shown in fig. 5, an Input/Output (IO) vector (IOV) and a scatter-gather list (Scatter Gather List, SGL) are introduced. An IO vector is a data structure defining one vector element; a group of such elements acts as an array. The IO vector may include the fields iov_base and iov_len. For each transferred element, the pointer iov_base points to a buffer in which the received data or the data to be transmitted is stored, and iov_len represents the length of the data received or to be transmitted. That is, an IO vector may record the address (iov_base) and the length (iov_len) of the data. A scatter-gather list (SGL) is a linked list of a plurality of IO vectors.
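A minimal Python model of the IO vector and the SGL, mirroring the POSIX `struct iovec` fields named above; modelling `iov_base` as an offset into a flat memory image (rather than a raw pointer) is a simplification made for the sketch:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class IOVector:
    """One IO vector, mirroring POSIX ``struct iovec``: iov_base
    locates a buffer, iov_len is its length in bytes. Here iov_base
    is modelled as an offset into a flat memory image."""
    iov_base: int  # start address of the buffer
    iov_len: int   # length of the buffer in bytes

# An SGL is an ordered collection of IO vectors; a Python list
# stands in for the linked list here.
SGL = List[IOVector]

def gather(memory: bytes, sgl: SGL) -> bytes:
    """Read, in order, the buffers that an SGL describes."""
    return b"".join(memory[v.iov_base:v.iov_base + v.iov_len] for v in sgl)

mem = b"METAdatadatadata"
sgl = [IOVector(0, 4), IOVector(4, 12)]  # metadata buffer, then data buffer
assert gather(mem, sgl) == mem
```

Because each buffer keeps its own vector, a consumer can read the metadata vector and the data vectors independently, which is the property the embodiments rely on.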
In this embodiment, the serial processing unit 10 may further obtain the memory address information of the metadata of the target service. The memory address information may include: the memory start address of the metadata of the target service and the length of the metadata of the target service. The memory address information of the metadata of the target service is memory address information in the memory corresponding to the serial processing unit on which the target service is deployed. For example, as shown in fig. 3a and 3b, if the serial processing unit on which the target service is deployed is the second serial processing unit 10b on the network card 20, the memory address information of the metadata of the target service is address information in the memory (not shown in the drawings) on the network card 20, and includes: the memory start address of the metadata of the target service in the memory on the network card 20 and the length of the metadata of the target service.
The memory address information of the target application data, which is address information of the target application data in the memory 40 of the host S10, includes: the memory start address of the target application data in the host-side memory 40 and the length of the target application data.
Based on the IO vector and the SGL, in the present embodiment, as shown in fig. 5, in order to distinguish the metadata of the target service from the target application data, the serial processing unit 10 may add the memory address information of the metadata of the target service to a first IO vector (IOV[0]), and add the memory address information of the target application data to a second IO vector. The first IO vector and the second IO vector are different IO vectors. There may be one or more second IO vectors, where "plural" means two or more.
In some embodiments, the target application data may be long, for example longer than the MTU of the network card, so the target service generally imposes data slicing requirements. For example, logical block addresses (Logical Block Address, LBA) may be required to be aligned to the memory page size, where an LBA is a common scheme for addressing the blocks of a storage device in which data is located. The memory page size is typically 4 kB. The data slicing requirement is then to align the slices to the memory page size, so that the size of each data slice is an integer multiple of the memory page size. For another example, if a single message must not exceed the MTU of the network card, the data slicing requirement is to slice the data according to the MTU, so that the size of each data slice is smaller than or equal to the MTU.
Based on the data slicing requirement of the target service, the serial processing unit 10 may determine the memory address information of the plurality of data slices into which the target application data is to be sliced, according to the data slicing requirement and the memory address information of the target application data. "Plural" means two or more; the specific number is determined by the data slicing requirement and the length of the target application data. For example, as shown in fig. 5, if the data slicing requirement is 4 kB memory-page alignment and the length of the target application data is 13 kB, the 13 kB of target application data may be sliced into two data slices of 1 kB and 12 kB.
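The page-aligned slicing described above can be sketched as follows; the function and its reproduction of the 13 kB example are illustrative assumptions (the head slice is assumed to run up to the next page boundary, and a real implementation would also honour the MTU requirement mentioned above):

```python
PAGE = 4 * 1024  # 4 kB memory page, as assumed in the example above

def slice_lengths(start_addr: int, length: int, page: int = PAGE) -> list:
    """Split a buffer of `length` bytes starting at `start_addr` into
    an optional head slice (up to the next page boundary) followed by
    a page-aligned remainder. Simplified sketch for illustration."""
    slices = []
    head = (-start_addr) % page        # bytes to the next page boundary
    if 0 < head < length:
        slices.append(head)            # unaligned head slice
        length -= head
    if length:
        slices.append(length)          # page-aligned remainder
    return slices

# 13 kB of data starting 1 kB before a page boundary is sliced into a
# 1 kB head slice and a 12 kB page-aligned remainder, matching the
# example in the text.
assert slice_lengths(3 * 1024, 13 * 1024) == [1024, 12 * 1024]
```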
Further, referring to fig. 5 and 6, the serial processing unit 10 may add the memory address information of the metadata of the target service to the first IO vector (e.g. IOV [0 ]); and adding the memory address information of the plurality of data slices to a plurality of second IO vectors (e.g., IOV [1] and IOV [2 ]). The second IO vectors are in one-to-one correspondence with the data fragments, namely, one second IO vector stores memory address information of one data fragment.
Further, referring to fig. 5 and 6, the serial processing unit 10 may generate a scatter-gather list (SGL) from the first IO vector and the second IO vectors. Since the scatter-gather list is a linked list of IO vectors, the network protocol stack deployed on the serial processing unit 10 can obtain a single IO vector from the scatter-gather list and provide the memory address information recorded in that IO vector to the hardware processing unit 20a. Because the memory address information of the metadata of the target service and the memory address information of the data slices of the target application data are stored independently in different IO vectors, the serial processing unit 10 can obtain either kind of memory address information simply by reading the corresponding IO vector when providing it to the hardware processing unit 20a through the network protocol stack. The two kinds of memory address information are therefore never mixed together, which realizes the distinction between the metadata of the target service and the target application data. Furthermore, when the hardware processing unit 20a reads the corresponding data according to the memory address information, it does not read metadata and application data mixed together, so the upper-layer requirement of the target service can be met.
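How the SGL keeps the two kinds of memory address information apart can be sketched as follows; the helper names are hypothetical, and an IO vector is modelled simply as an (address, length) tuple:

```python
def build_sgl(meta_addr: int, meta_len: int, slice_addr_lens: list) -> list:
    """IOV[0] records the metadata of the target service; each data
    slice of the target application data gets its own IO vector, so
    metadata and application data never share a vector."""
    sgl = [(meta_addr, meta_len)]   # first IO vector: service metadata
    sgl.extend(slice_addr_lens)     # second IO vectors: one per data slice
    return sgl

def second_io_vectors(sgl: list) -> list:
    """What the network protocol stack hands to the hardware unit when
    only the application data is to be read and checksummed."""
    return sgl[1:]

sgl = build_sgl(0x1000, 64, [(0x8000, 1024), (0x8400, 12 * 1024)])
assert sgl[0] == (0x1000, 64)                       # metadata stays in IOV[0]
assert second_io_vectors(sgl) == [(0x8000, 1024), (0x8400, 12 * 1024)]
```

Reading `sgl[0]` versus `sgl[1:]` is the whole separation mechanism: neither consumer ever sees a buffer that mixes metadata with application data.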
Specifically, referring to fig. 5 and 6, when the serial processing unit 10 provides the memory address information of the target application data and the network protocol header to the hardware processing unit 20a, it may obtain the second IO vector from the scatter-gather list (SGL) and provide the memory address information recorded in the second IO vector, together with the network protocol header, to the hardware processing unit 20a. Correspondingly, the hardware processing unit 20a can read the target application data from the memory 40 of the host in a DMA manner according to the memory address information recorded in the second IO vector, and calculate the check code of the target application data. Fig. 6 illustrates an example of calculating a CRC check code of the target application data, but is not limited thereto. Then, the hardware processing unit 20a assembles the network protocol header, the target application data, and the check code of the target application data into a first message, and sends the first message to the receiving end through the network interface 20b. The IO vector read from the SGL in fig. 6 may be the first IO vector or a second IO vector; correspondingly, the information recorded in the first IO vector is the memory address information of the metadata of the target service, and the information recorded in a second IO vector is the memory address information of the data slice corresponding to that second IO vector.
For the above embodiment in which the target application data needs to be split into a plurality of data slices, there are a plurality of second IO vectors. Accordingly, when the serial processing unit 10 provides the memory address information of the target application data and the network protocol header to the hardware processing unit 20a, as shown in fig. 6, the serial processing unit 10 may obtain the plurality of second IO vectors from the scatter-gather list and provide the memory address information recorded in these second IO vectors, together with the network protocol header, to the hardware processing unit 20a.
Accordingly, for any one of the plurality of data slices a, the hardware processing unit 20a may read the data slice a from the memory 40 of the host in a DMA manner according to the memory address information recorded by the second IO vector corresponding to the data slice a; and calculates the check code of data fragment a. For the specific embodiment of calculating the check code of the data fragment a, reference may be made to the related content of the check code of the calculation target application data, which is not described herein.
Further, referring to fig. 6 and fig. 7, the hardware processing unit 20a may assemble the network protocol header, the data fragment a, and the check code of the data fragment a into a first packet corresponding to the data fragment a; and sends the first message corresponding to the data fragment a to the receiving end through the network interface 20 b.
The processing procedure of the hardware processing unit 20a is the same for each data fragment; the above description takes data fragment a as an example. The hardware processing unit 20a may obtain the first message corresponding to each data fragment in the same manner, i.e., obtain a plurality of first messages, and send these first messages to the receiving end in batches.
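The per-fragment processing loop can be sketched as follows, reusing an assumed message layout of header, fragment, and 4-byte big-endian CRC-32 tail (the layout and function name are illustrative, not the format defined by the embodiments):

```python
import struct
import zlib

def messages_for_fragments(header: bytes, fragments: list) -> list:
    """Build one first message per data fragment: the network protocol
    header, the fragment, and the fragment's own check code. CRC-32
    with a tail-appended big-endian code is an assumed layout."""
    messages = []
    for fragment in fragments:
        crc = zlib.crc32(fragment) & 0xFFFFFFFF
        messages.append(header + fragment + struct.pack(">I", crc))
    return messages

msgs = messages_for_fragments(b"HDR", [b"a" * 8, b"b" * 8])
assert len(msgs) == 2                      # one message per fragment
assert all(m.startswith(b"HDR") for m in msgs)
```

Each fragment carries its own check code, so the receiver can verify every message independently of the others in the batch.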
In this embodiment of the present application, since the metadata of the target service is also data required by the target service, the sending end also needs to perform check code calculation on the metadata of the target service. Specifically, with reference to fig. 5 and 6, the serial processing unit 10 may obtain the first IO vector from the scatter-gather list and send the memory address information of the metadata of the target service recorded in the first IO vector to the hardware processing unit 20a.
Accordingly, the hardware processing unit 20a may read the metadata of the target service from the memory corresponding to the serial processing unit 10 in a DMA manner according to the memory address information of the metadata of the target service recorded in the first IO vector. In the embodiment where the target service is deployed on the first serial processing unit 10a on the host S10 side, the serial processing unit 10 is the first serial processing unit 10a, and the corresponding memory is the memory 40 on the host side. In the embodiment where the target service is deployed on the second serial processing unit 10b on the network card 20 side, the serial processing unit 10 is the second serial processing unit 10b, and the corresponding memory is the memory (not shown in the drawings) on the network card.
Further, the hardware processing unit 20a may calculate a check code of metadata of the target service. For the specific embodiment of calculating the check code of the metadata of the target service, reference may be made to the related content of the check code of the target application data, which is not described herein.
Further, referring to fig. 6 and fig. 7, the hardware processing unit 20a may assemble the network protocol header, the metadata of the target service, and the check code of the metadata of the target service into a message (defined as a second message) corresponding to the metadata of the target service; and sends the second message to the receiving end through the network interface 20 b.
When a message is received at the receiving end, both the serial processing unit 10 and the hardware processing unit 20a of the receiving end can perform an accuracy check on the payload data (Payload Data) of the message. The serial processing unit 10 may perform the accuracy check on the payload data of the message through the network protocol stack or the target service. The process of checking the accuracy of the payload data of the message by the hardware processing unit 20a and the serial processing unit 10 is exemplarily described below.
Referring to fig. 8a, 8b and 6, at the receiving end, the hardware processing unit 20a or the serial processing unit 10 may receive a message (defined as a target message) sent by the sending end through the network interface 20b. The target message may be a message in which the sending end encapsulates the target application data in the manner provided in the foregoing embodiments, a message encapsulating a data fragment of the target application data, or a message encapsulating metadata of the target service. The target application data, the data fragment of the target application data, or the metadata of the target service encapsulated in the target message is the payload data of the target message.
Further, the hardware processing unit 20a or the serial processing unit 10 may obtain the payload data and the check code of the payload data from the received target packet; and the check code of the payload data is utilized to check the accuracy of the payload data.
Specifically, the hardware processing unit 20a or the serial processing unit 10 may calculate the check code of the payload data using the same check algorithm as the sending end, and compare the calculated check code with the check code encapsulated in the message. If the two are consistent, the payload data passes the accuracy check; if not, the payload data fails the accuracy check. Fig. 8a and 8b illustrate an example in which the hardware processing unit 20a performs the accuracy check on the payload data, but are not limited thereto.
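The receiving-end comparison can be sketched as follows, under the same assumed message layout used for illustration on the sending side (header, payload, 4-byte big-endian CRC-32 tail); the function name and error handling are hypothetical:

```python
import struct
import zlib

def verify_message(message: bytes, header_len: int) -> bytes:
    """Receiver-side accuracy check: recompute the check code over the
    payload with the same algorithm as the sender and compare it with
    the code carried in the message. Returns the payload if the check
    passes, and raises if the two codes are inconsistent."""
    payload, received = message[header_len:-4], message[-4:]
    computed = struct.pack(">I", zlib.crc32(payload) & 0xFFFFFFFF)
    if computed != received:
        raise ValueError("payload failed the accuracy check")
    return payload

msg = b"HDR" + b"data" + struct.pack(">I", zlib.crc32(b"data") & 0xFFFFFFFF)
assert verify_message(msg, 3) == b"data"  # intact message passes the check
```

Any single-bit corruption of the payload or of the carried code changes one side of the comparison, so the mismatch is detected and the message is rejected.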
For the serial processing unit 10, protocol analysis can be performed on the message received by the network card 20 through a network protocol stack operated by the serial processing unit 10 to obtain payload data and a check code of the payload data; and performs accuracy check on the payload data by using the check code of the payload data through the network protocol stack or the target service running on the serial processing unit 10.
Further, in the case where the payload data passes the accuracy check, the serial processing unit 10 may process the payload data through a target service running thereon, or the like. For example, for embodiments in which the computing system is implemented as a target service device, the target service is a storage service, and the serial processing unit 10 may store payload data to a storage medium or the like through the target service. For another example, the target service is a calculation service, and the serial processing unit 10 may calculate payload data by the target service, or the like.
For another example, for embodiments in which the computing system is implemented as a computing device on the user side, serial processing unit 10 may provide the payload data to an application program via a target service, perform related operations based on the payload data by the application program, and so forth.
In this embodiment, the serial processing unit 10 may be the second serial processing unit 10b on the network card 20 or the first serial processing unit 10a on the host side. Fig. 8a and 8b illustrate only an example in which the hardware processing unit 20a verifies the payload data of the target message, but the embodiment is not limited thereto. Specifically, the computing system shown in fig. 8a is typically implemented as a user-side computing device, where the first serial processing unit 10a of the host S10 runs an application program. When the computing system reads data from the target service device, it is implemented as the receiving end. The computing system shown in fig. 8b is typically implemented as a target service device. When an application program running on the user-side computing device needs to write data, the data is sent to the target service device, and the computing system is then implemented as the receiving end.
As shown in fig. 8a and 8b, the hardware processing unit 20a may receive, through the network interface 20b, a target packet sent by the sender (corresponding to steps 1 and 2 in fig. 8a and 8 b). Further, as shown in step 3 of fig. 8a and 8b, the hardware processing unit 20a may obtain the payload data and the check code of the payload data from the received target packet; and the check code of the payload data is utilized to check the accuracy of the payload data. For a specific implementation of the hardware processing unit 20a for performing the accuracy check on the payload data, reference may be made to the related content of the foregoing embodiment, which is not described herein.
Further, in case the payload data passes the accuracy check, as shown in step 4 of fig. 8a and 8b, the hardware processing unit 20a may provide the target message to the DMA engine in the hardware processing unit 20a (i.e. the DMA in fig. 8a and 8 b).
Next, as shown in step 5 of fig. 8a and 8b, the network protocol stack running in the serial processing unit 10 may provide the memory address information of the payload data and the memory address information of the network protocol header to the DMA engine. Wherein the network protocol stack in fig. 8a runs in the second serial processing unit 10b of the network card; the network protocol stack in fig. 8b runs on the first serial processing unit 10a of the host.
Further, the hardware processing unit 20a may store the payload data to the memory 40 of the host S10 by DMA (corresponding to step 6 in fig. 8a and 8 b); and provides the network protocol header of the target message and the check code encapsulated in the target message to the network protocol stack in a DMA manner (corresponding to step 7 in fig. 8a and 8 b). The serial processing unit 10 may provide the memory address information of the payload data and the check code to the target service (corresponding to step 8 in fig. 8a and 8 b) via the network protocol stack running thereon. Further, the serial processing unit 10 may process payload data through a target service running thereon, or the like.
For embodiments in which the computing system is implemented as a target service device, the target service is a storage service, and the serial processing unit 10 may store payload data to a storage medium or the like through the target service. For another example, the target service is a calculation service, and the serial processing unit 10 may calculate payload data by the target service, or the like.
For embodiments in which the computing system is implemented as a user-side computing device, serial processing unit 10 may provide the payload data to an application program via a target service, perform related operations based on the payload data by the application program, and so forth. Wherein the target service in fig. 8a runs on the second serial processing unit 10b of the network card; the target service in fig. 8b runs on the first serial processing unit 10a of the host.
In the embodiments of the present application, for the embodiment in which a hardware processing unit is added to the network card of the receiving end to check the payload data of messages received by the network card: on the one hand, offloading the payload check to hardware not only frees the serial processing resources of the host but also achieves hardware acceleration of the data check, improving the processing efficiency of data checking. On the other hand, since the network card lies on the transmission link of the target application data, checking the payload data on the hardware processing unit of the network card realizes on-path hardware offloading of the data check, which shortens the data link of the payload data and further improves the checking efficiency of the payload data.
For the embodiment (not shown in fig. 8a and 8b) in which the second serial processing unit on the network card of the receiving end checks the payload data of messages received by the network card: on the one hand, offloading the payload check to the serial resources of the network card frees the serial processing resources of the host. On the other hand, since the network card lies on the transmission link of the target application data, checking the payload data on the serial processing unit of the network card realizes on-path offloading of the data check, which shortens the data link of the payload data and improves the checking efficiency of the payload data.
It should be noted that, the computing system provided in the foregoing embodiment may be implemented as a computing device on the user side, or may be implemented as a computing device on the service side for providing the target service. Preferably, the computing systems provided in fig. 3a, 3b and 8a are implemented as user-side computing devices. The application program runs on the first serial processing unit 10a of the host; the target service and network protocol stack runs on the second serial processing unit 10b of the network card. In this way, the target service and the network protocol stack can be all unloaded from the host to the network card, so that the resource consumption of the first serial processing unit 10a of the host S10 by the operation of the target service and the network protocol stack can be reduced, for example, the CPU resource consumption of the host S10 by the operation of the target service and the network protocol stack can be reduced, and the processing resource of the host S10 can be saved.
The computing systems provided in fig. 4a and 8b are typically implemented as computing devices providing the target service (defined as target service devices), such as a storage device providing a storage service; in this case, the target service and the network protocol stack run on the first serial processing unit 10a on the host S10 (shown in fig. 4a and 4b). This is mainly because the CPU resources on the host S10 side of a target service device need not be provided to users to run application programs, and can therefore be used to run the target service and the network protocol stack without preempting the computing resources required by application programs. In addition to the above-described computing system, the embodiments of the present application further provide a data processing method, which is described below by way of example.
Fig. 8c is a flowchart of a data processing method according to an embodiment of the present application. The data processing method shown in fig. 8c is applicable to a serial processing unit on a network card or a host. The network card is in communication connection with the host. As shown in fig. 8c, the data processing method includes:
801. Acquire memory address information of target application data of the application program.
802. Generate a network protocol header through the running network protocol stack.
803. Provide the memory address information of the target application data and the network protocol header to a hardware processing unit of the network card, so that the hardware processing unit reads the target application data from the memory of the host in a DMA manner according to the memory address information of the target application data and calculates the check code of the target application data; assembles the target application data, the check code of the target application data and the network protocol header into a first message; and sends the first message to the receiving end through a network interface of the network card.
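A minimal sketch of steps 801-803 on the serial processing unit, assuming hypothetical names for the memory-address descriptor, the protocol-stack interface and the hardware-unit handoff (none of these names are specified by the embodiment):

```python
from dataclasses import dataclass

@dataclass
class MemAddrInfo:
    """Memory address information: start address plus length,
    identifying where the target application data sits in host memory."""
    start_addr: int
    length: int

def send_side_serial_flow(data_addr: MemAddrInfo, protocol_stack, hw_unit):
    # 802. Generate a network protocol header via the running network
    # protocol stack (generate_header is an assumed interface name).
    header = protocol_stack.generate_header()
    # 803. Hand the address info and header to the hardware processing
    # unit, which performs the DMA read, check-code calculation,
    # message assembly and transmission.
    return hw_unit.submit(data_addr, header)
```

The serial unit never touches the application data itself; it only passes (start address, length) descriptors, which is what allows the hardware unit to perform the data movement by DMA.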
Fig. 9 is a flowchart of another data processing method according to an embodiment of the present application. The data processing method shown in fig. 9 is applicable to a hardware processing unit in a network card. The network card is in communication connection with the host. As shown in fig. 9, the data processing method includes:
901. Acquire the memory address information of target application data of an application program and a network protocol header provided by the serial processing unit; the serial processing unit is disposed on the host or on the network card.
902. Read the target application data from the memory of the host in a DMA manner according to the memory address information of the target application data.
903. Calculate the check code of the target application data.
904. Assemble the target application data, the check code of the target application data and the network protocol header into a first message.
905. Send the first message to the receiving end through a network interface of the network card.
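The hardware-side steps 902-904 can be sketched as follows; the message layout (header | data | check code) and the CRC32 algorithm are illustrative assumptions, since the embodiment does not fix a wire format:

```python
import struct
import zlib

def assemble_first_message(host_memory: bytes, start_addr: int, length: int,
                           protocol_header: bytes) -> bytes:
    # 902. Read the target application data from host memory; the slice
    # stands in for a DMA read using (memory start address, length).
    target_data = host_memory[start_addr:start_addr + length]
    # 903. Calculate the check code of the target application data
    # (CRC32 is assumed here).
    check_code = zlib.crc32(target_data) & 0xFFFFFFFF
    # 904. Assemble header, data and check code into the first message.
    return protocol_header + target_data + struct.pack(">I", check_code)
```

Step 905 would then hand the assembled bytes to the network interface for transmission.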
The structure and implementation manner of the network card and the host can be referred to the relevant content of the above system embodiment, and will not be described herein. In this embodiment, for the serial processing unit, the target application and the network protocol stack may be deployed.
For the serial processing unit of the transmitting end, in step 801, memory address information of target application data of the application program may be obtained. The target application data is application data to be transmitted. Memory address information of the target application data, for identifying a storage location of the target application data in a memory of the host, may include: the memory start address of the target application data and the length of the target application data.
Further, in step 801, the memory address information of the target application data may be obtained through the target service. In step 802, a network protocol header may be generated through the running network protocol stack; the network protocol header is obtained through processing by the network protocol stack across the different protocol layers.
Further, in step 803, the memory address information of the target application data and the network protocol header may be provided to the hardware processing unit of the network card.
For the hardware processing unit, in step 901, memory address information and a network protocol header of target application data provided by the serial processing unit may be obtained; in step 902, the target application data is read from the memory of the host by using a DMA method according to the memory address information of the target application data; and in step 903, a check code of the target application data is calculated.
In the embodiments of the present application, the specific manner in which the hardware processing unit calculates the check code of the target application data is not limited. In some embodiments, a check algorithm may be used to calculate the check code of the target application data. The check algorithm may be a CRC algorithm, a parity check algorithm, an XOR check algorithm, or an MD algorithm such as MD5.
For the CRC algorithm, a CRC computation may be performed over the target application data to obtain a CRC check code of the target application data, i.e., the check code of the target application data.
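For illustration only, a CRC32 check code (one common member of the CRC family; the embodiment does not prescribe a particular polynomial) can be computed with Python's standard library:

```python
import zlib

def crc_check_code(target_application_data: bytes) -> int:
    """Compute a CRC32 check code over the target application data.
    Masking keeps the result an unsigned 32-bit value on all platforms."""
    return zlib.crc32(target_application_data) & 0xFFFFFFFF
```

In a real network card this computation would run in hardware; the sketch only shows the algorithmic step being offloaded.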
Further, in step 904, the target application data, the check code of the target application data, and the network protocol header may be assembled into a message (defined as a first message) to be sent. Further, in step 905, the first message may be sent to the receiving end through the network interface of the network card.
In this embodiment, a hardware processing unit is added to the network card, and is used for performing check code calculation on target application data to be transmitted. On the one hand, the check code calculation of the target application data is unloaded to hardware for completion, so that not only can the serial processing resources of the host be released, but also the hardware acceleration of the check code calculation can be realized, and the processing efficiency of the check code calculation is improved. On the other hand, the network card is positioned on the transmission link of the target application data, so that the check code of the target application data is calculated on the hardware processing unit of the network card, the calculation of the off-load check code of the associated hardware is realized, the data link of the application data can be shortened, and the check code calculation efficiency of the application data is further improved.
In the embodiments of the present application, the data that the target service of the sending end sends to the target service of the receiving end includes, besides the application data, metadata of the target service, such as the protocol header of the protocol followed by the target service. The target service therefore also provides its metadata to the network protocol stack. However, the network protocol stack and the hardware processing unit cannot perceive which part of the incoming data is application data and which part is metadata of the target service. As a result, the metadata of the target service would be mixed with the target application data for check code calculation, which the target service generally does not want.
On the other hand, during data transmission the network protocol layer slices the incoming metadata of the target service and the target application data together according to parameters such as the MTU of the network card, so a given data slice may contain both metadata of the target service and part of the application data, destroying the original data composition of the upper layer. Therefore, if the network protocol stack and the hardware processing unit can distinguish the metadata of the target service from the application data, this technical problem can be solved.
In this embodiment, to enable the network protocol stack and the hardware processing unit to distinguish the metadata of the target service from the application data, IO vectors and a scatter-gather list (SGL) are introduced.
In this embodiment, for the serial processing unit, the memory address information of the metadata of the target service may also be obtained. This memory address information may include the memory start address of the metadata of the target service and the length of the metadata of the target service, and refers to address information within the memory corresponding to the serial processing unit on which the target service is deployed.
Based on the IO vector and the SGL, in this embodiment, in order to distinguish metadata of the target service from target application data, for the serial processing unit, memory address information of the metadata of the target service may be added to the first IO vector; and adding the memory address information of the target application data to at least one second IO vector. The first IO vector and the second IO vector are different IO vectors.
In some embodiments, the length of the target application data may be longer, such as exceeding the MTU of the network card, so the target service is generally provided with data slicing requirements. Based on the data segmentation requirement of the target service, aiming at the serial processing unit, the memory address information of a plurality of data fragments to be segmented of the target application data can be determined according to the data segmentation requirement and the memory address information of the target application data. The plurality refers to 2 or more than 2, and the specific number is determined by the data segmentation requirement and the length of the target application data.
Further, memory address information of metadata of the target service may be added to the first IO vector; and adding the memory address information of the plurality of data slices to the plurality of second IO vectors. The second IO vectors are in one-to-one correspondence with the data fragments, namely, one second IO vector stores memory address information of one data fragment.
Further, for the serial processing unit, a scatter-gather list (SGL) may be generated from the first IO vector and the second IO vectors; the scatter-gather list is a linked list of IO vectors. The network protocol stack deployed on the serial processing unit may obtain an individual IO vector through the scatter-gather list and provide the memory address information recorded in that IO vector to the hardware processing unit. Because the memory address information of the metadata of the target service and the memory address information of the data fragments of the target application data are stored independently in different IO vectors, when the serial processing unit provides these pieces of memory address information to the hardware processing unit through the network protocol stack, either can be obtained by reading the corresponding IO vector, and the two are never mixed together, thereby distinguishing the metadata of the target service from the target application data. Furthermore, when the hardware processing unit reads the corresponding data according to the memory address information, it will not read metadata and application data mixed together, so the upper-layer requirements of the target service can be satisfied.
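A sketch of the IO-vector / scatter-gather-list arrangement described above; the vector type and list representation are illustrative assumptions, since the embodiment only requires that metadata and data-fragment addresses live in separate IO vectors:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class IOVector:
    start_addr: int   # memory start address of the region
    length: int       # length of the region

def build_sgl(meta_addr: int, meta_len: int,
              data_addr: int, data_len: int, mtu: int) -> List[IOVector]:
    """First IO vector: metadata of the target service.
    Subsequent second IO vectors: the data fragments of the target
    application data, sliced per the data segmentation requirement
    (an MTU-sized slicing is assumed here)."""
    sgl = [IOVector(meta_addr, meta_len)]
    offset = 0
    while offset < data_len:
        frag_len = min(mtu, data_len - offset)
        sgl.append(IOVector(data_addr + offset, frag_len))
        offset += frag_len
    return sgl
```

Because each fragment's address information sits in its own second IO vector, a DMA read driven by one vector never mixes metadata with application data.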
Specifically, when providing the memory address information of the target application data and the network protocol header to the hardware processing unit, the serial processing unit may obtain a second IO vector from the scatter-gather list (SGL) and provide the memory address information recorded in that second IO vector, together with the network protocol header, to the hardware processing unit. Correspondingly, the hardware processing unit may read the target application data from the memory of the host in a DMA manner according to the memory address information recorded in the second IO vector, calculate the check code of the target application data, assemble the network protocol header, the target application data and the check code of the target application data into a first message, and send the first message to the receiving end through the network interface.
For the above embodiment in which the target application data needs to be split into a plurality of data fragments, there are a plurality of second IO vectors. Correspondingly, when providing the memory address information of the target application data and the network protocol header to the hardware processing unit, the serial processing unit may obtain the plurality of second IO vectors from the scatter-gather list and provide the memory address information recorded in the plurality of second IO vectors, together with the network protocol header, to the hardware processing unit.
Correspondingly, for the hardware processing unit, for any data fragment A in the plurality of data fragments, the data fragment A can be read from the memory of the host in a DMA mode according to the memory address information recorded by the second IO vector corresponding to the data fragment A; and calculates the check code of data fragment a. For the specific embodiment of calculating the check code of the data fragment a, reference may be made to the related content of the check code of the calculation target application data, which is not described herein.
Further, the network protocol header, the data fragment A and the check code of the data fragment A can be assembled into a first message corresponding to the data fragment A; and the first message corresponding to the data fragment A is sent to the receiving end through the network interface.
The processing procedure of the hardware processing unit is the same for each data fragment; the above embodiment takes data fragment A as an example only. The hardware processing unit may obtain, in the same manner, a first message corresponding to each data fragment, i.e., a plurality of first messages, and send the plurality of first messages to the receiving end in batches.
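The per-fragment flow of the hardware processing unit can be sketched as follows; as before, the (header | fragment | check code) layout and the CRC32 algorithm are illustrative assumptions:

```python
import struct
import zlib

def messages_for_fragments(host_memory: bytes, frag_vectors, header: bytes):
    """For each second IO vector (start_addr, length), read the data
    fragment (the slice stands in for a DMA read), compute its check
    code and assemble the corresponding first message."""
    messages = []
    for start_addr, length in frag_vectors:
        fragment = host_memory[start_addr:start_addr + length]
        check_code = zlib.crc32(fragment) & 0xFFFFFFFF
        messages.append(header + fragment + struct.pack(">I", check_code))
    return messages
```

Each fragment carries its own check code, so the receiving end can verify fragments independently as they arrive.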
In the embodiments of the present application, since the metadata of the target service is also data required by the target service, the sending end also needs to perform check code calculation on the metadata of the target service. Specifically, the serial processing unit may also obtain the first IO vector from the scatter-gather list and provide the memory address information of the metadata of the target service recorded in the first IO vector to the hardware processing unit.
Correspondingly, for the hardware processing unit, according to the memory address information of the metadata of the target service recorded by the first IO vector, the metadata of the target service can be read from the memory corresponding to the serial processing unit in a DMA mode. In the embodiment of the first serial processing unit of the target service disposed on the host side, the serial processing unit is the first serial processing unit, and the corresponding memory is the memory of the host side. In an embodiment of the second serial processing unit disposed on the network card side by the target service, the serial processing unit is the second serial processing unit, and the corresponding memory is the memory on the network card.
Further, a check code of metadata of the target service may be calculated. For the specific embodiment of calculating the check code of the metadata of the target service, reference may be made to the related content of the check code of the target application data, which is not described herein.
Further, the network protocol header, the metadata of the target service and the check code of the metadata of the target service can be assembled into a message (defined as a second message) corresponding to the metadata of the target service; and sending the second message to the receiving end through the network interface.
When the receiving end receives a message, the serial processing unit and the hardware processing unit of the receiving end may perform an accuracy check on the payload data of the message. Fig. 10 is a flowchart of another data processing method according to an embodiment of the present application. The data processing method shown in fig. 10 is applicable to a serial processing unit or a hardware processing unit on a network card, and is mainly used by the receiving end to perform the accuracy check on the payload data of received messages. As shown in fig. 10, the data processing method may include:
1001. Obtain the target message received by the network interface of the network card.
1002. Obtain the payload data and the check code of the payload data from the target message.
1003. Perform an accuracy check on the payload data using the check code of the payload data.
For the receiving end, the hardware processing unit or the serial processing unit may acquire a message (defined as a target message) sent by the sending end and received by the network interface of the network card in step 1001. The target message may be a message provided by the sender in the foregoing embodiment and encapsulated with target application data, or a message encapsulated with data fragments of target application data, or a message encapsulated with metadata of a target service. Target application data, data fragments of the target application data or metadata of the target service encapsulated in the target message are payload data of the message.
Further, in step 1002, payload data and a check code of the payload data may be obtained from the received target packet; and in step 1003, the payload data is checked for accuracy using the check code of the payload data.
Specifically, the check code of the payload data may be calculated using the same check algorithm as the sending end, and the calculated check code may be compared with the check code encapsulated in the message. If the two are consistent, it is determined that the payload data passes the accuracy check; if they are inconsistent, it is determined that the payload data fails the accuracy check.
Aiming at the serial processing unit, protocol analysis can be carried out on the message received by the network card through a network protocol stack operated by the serial processing unit, so as to obtain effective load data and check codes of the effective load data; and the accuracy of the payload data is checked by using the check code of the payload data through a network protocol stack or a target service running on the serial processing unit.
Further, in the case where the payload data passes the accuracy check, the payload data may be processed through a target service running thereon, or the like.
In this embodiment, the serial processing unit may be a second serial processing unit on the network card, or may be a first serial processing unit on the host side.
For the embodiment in which the hardware processing unit on the network card of the receiving end checks the payload data of messages received by the network card: on the one hand, offloading the payload check to hardware not only frees the serial processing resources of the host but also achieves hardware acceleration of the data check, improving the processing efficiency of data checking. On the other hand, since the network card lies on the transmission link of the target application data, checking the payload data on the hardware processing unit of the network card realizes on-path hardware offloading of the data check, which shortens the data link of the payload data and further improves the checking efficiency of the payload data.
For the embodiment in which the second serial processing unit on the network card of the receiving end checks the payload data of messages received by the network card: on the one hand, offloading the payload check to the serial resources of the network card frees the serial processing resources of the host. On the other hand, since the network card lies on the transmission link of the target application data, checking the payload data on the serial processing unit of the network card realizes on-path offloading of the data check, which shortens the data link of the payload data and improves the checking efficiency of the payload data.
It is worth noting that the data processing method used by the sending end during data transmission and the data processing method used by the receiving end during data reception may be deployed on different devices or on the same device.
It should be noted that, the execution subjects of each step of the method provided in the above embodiment may be the same device, or the method may also be executed by different devices. For example, the execution subject of steps 801 and 802 may be device a; for another example, the execution body of step 801 may be device a, and the execution body of step 802 may be device B; etc.
In addition, some of the above embodiments and the flows described in the drawings include a plurality of operations appearing in a specific order, but it should be clearly understood that these operations may be performed out of the order in which they appear herein or in parallel. Sequence numbers such as 801 and 802 are merely used to distinguish the operations and do not by themselves represent any order of execution. Moreover, the flows may include more or fewer operations, and these operations may be performed sequentially or in parallel.
Accordingly, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps in the data processing method provided in the above embodiments.
Fig. 11 is a schematic structural diagram of a network card according to an embodiment of the present application. As shown in fig. 11, the network card includes a hardware processing unit 20a and a network interface 20b, and is configured to be communicatively connected to the host.
The hardware processing unit 20a is communicatively connected to the serial processing unit 10. The serial processing unit 10 is disposed on the network card or on the host; fig. 11 illustrates only the case where the serial processing unit 10 is disposed in the network card.
In this embodiment, the serial processing unit 10 is configured to obtain the memory address information of target application data of an application program; generate a network protocol header through the running network protocol stack; and provide the memory address information of the target application data and the network protocol header to the hardware processing unit 20a of the network card.
The hardware processing unit 20a is configured to read the target application data from the memory of the host by using a DMA method according to the memory address information of the target application data, and calculate a check code of the target application data; assembling the target application data, the check code of the target application data and the network protocol header into a first message; the first message is sent to the receiving end through the network interface 20 b.
In some embodiments, the serial processing unit 10 is further configured to: obtain memory address information of metadata of the target service run by the serial processing unit; add the memory address information of the metadata of the target service to a first IO vector; add the memory address information of the target application data to a second IO vector; and generate a scatter-gather list from the first IO vector and the second IO vector.
The second IO vector is a plurality of. The serial processing unit 10 is also configured to: and determining the memory address information of the plurality of data fragments to be segmented of the target application data according to the data segmentation requirement and the memory address information of the target application data. Accordingly, when the memory address information of the target application data is added to the second IO vector, the serial processing unit 10 is specifically configured to: adding memory address information of the plurality of data fragments to a plurality of second IO vectors; the second IO vectors are in one-to-one correspondence with the data slices.
Accordingly, when providing the memory address information of the target application data and the network protocol header to the hardware processing unit, the serial processing unit 10 is specifically configured to: obtain the plurality of second IO vectors from the hash set list; and provide the memory address information recorded by the plurality of second IO vectors and the network protocol header to the hardware processing unit.
For any one of the plurality of data fragments, the hardware processing unit 20a is configured to: read the data fragment from the memory of the host in a DMA manner according to the memory address information recorded by the second IO vector corresponding to the data fragment; calculate a check code of the data fragment; assemble the data fragment, the check code of the data fragment and the network protocol header into a first message corresponding to the data fragment; and send the first message corresponding to the data fragment to the receiving end through the network interface 20b.
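The per-fragment processing above (a DMA read addressed by each fragment's second IO vector, a check code per fragment, and one first message per fragment) can be modeled like this. Here `host_mem` stands in for host memory, buffer offsets stand in for DMA addresses, and all names are introduced for illustration only.

```python
import struct
import zlib

def build_fragment_messages(host_mem: bytes, iovecs, protocol_header: bytes):
    messages = []
    for offset, length in iovecs:
        fragment = host_mem[offset:offset + length]   # models the DMA read
        crc = zlib.crc32(fragment) & 0xFFFFFFFF       # check code of this fragment
        # One first message per data fragment: header | fragment | check code.
        messages.append(protocol_header + fragment + struct.pack("!I", crc))
    return messages
```

Because each fragment carries its own check code, a corrupted fragment can be detected and handled individually at the receiving end rather than invalidating the whole transfer.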
In some embodiments, serial processing unit 10 is further configured to: acquiring a first IO vector from a hash set list; and providing the memory address information of the metadata of the target service recorded by the first IO vector to the hardware processing unit.
Accordingly, the hardware processing unit 20a is further configured to: reading the metadata of the target service from the memory corresponding to the serial processing unit according to the memory address information of the metadata of the target service; and calculating a check code of metadata of the target service; assembling the network protocol header, the metadata of the target service and the check code of the metadata of the target service into a second message; and sends the second message to the receiving end through the network interface 20 b.
Optionally, when calculating the check code of the target application data, the hardware processing unit 20a is specifically configured to: perform a cyclic redundancy check (CRC) calculation on the target application data to obtain the check code of the target application data.
Accordingly, when calculating the check code of the metadata of the target service, the hardware processing unit 20a is specifically configured to: perform a CRC calculation on the metadata of the target service to obtain the check code of the metadata of the target service.
In some embodiments, the hardware processing unit 20a or the serial processing unit 10 is further configured to: acquire a target message received through the network interface 20b; acquire payload data and a check code of the payload data from the target message; and perform an accuracy check on the payload data by using the check code of the payload data.
Optionally, the hardware processing unit 20a or the serial processing unit 10 is further configured to: in case the payload data passes the accuracy check, the payload data is processed by the target service running on the serial processing unit 10.
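The receive-side accuracy check described above can be sketched as follows, again assuming the illustrative header | payload | CRC32 layout used earlier; `verify_payload` and `header_len` are names introduced here, not taken from the patent.

```python
import struct
import zlib

def verify_payload(message: bytes, header_len: int):
    # Extract the payload data and its trailing check code from the target message.
    payload = message[header_len:-4]
    (check_code,) = struct.unpack("!I", message[-4:])
    # Accuracy check: recompute the CRC and compare with the carried check code.
    if zlib.crc32(payload) & 0xFFFFFFFF == check_code:
        return payload  # passed; would be handed to the target service
    return None         # failed the accuracy check
```

Only a payload that passes the check would be processed by the target service running on the serial processing unit.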
The network card provided in this embodiment is additionally provided with a hardware processing unit, which performs check code calculation on the target application data to be transmitted. On the one hand, offloading the check code calculation of the target application data to hardware not only releases serial processing resources of the host, but also realizes hardware acceleration of the check code calculation, improving its processing efficiency. On the other hand, since the network card lies on the transmission link of the target application data, calculating the check code of the target application data on the hardware processing unit of the network card offloads the calculation to hardware associated with the data path, which shortens the data link of the application data and further improves the check code calculation efficiency.
Fig. 12 is a schematic structural diagram of a host according to an embodiment of the present application. As shown in fig. 12, the host may include: a memory 120a and a processor 120b. Wherein the memory 120a is used for storing a computer program; the host is adapted to be communicatively coupled to the network card 20.
Processor 120b is coupled to memory 120a for executing a computer program for: acquiring memory address information of target application data of an application program; generating a network protocol header through an operating network protocol stack; providing the memory address information of the target application data and the network protocol header to a hardware processing unit of the network card, so that the hardware processing unit can read the target application data from the memory of the host by utilizing a Direct Memory Access (DMA) mode according to the memory address information of the target application data, and calculate a check code of the target application data; assembling the target application data, the check code of the target application data and the network protocol header into a first message; and transmitting the first message to the receiving end through a network interface of the network card.
In some embodiments, the processor 120b is further configured to: acquiring memory address information of metadata of a target service operated by the processor 120 b; adding memory address information of metadata of the target service to the first IO vector; adding memory address information of the target application data to at least one second IO vector; and generating a hash set list according to the first IO vector and at least one second IO vector.
In some embodiments, there are a plurality of second IO vectors. The processor 120b is further configured to: determine memory address information of a plurality of data fragments into which the target application data is to be segmented, according to the data segmentation requirement and the memory address information of the target application data. Accordingly, when adding the memory address information of the target application data to the second IO vectors, the processor 120b is specifically configured to: add the memory address information of the plurality of data fragments to the plurality of second IO vectors; the second IO vectors are in one-to-one correspondence with the data fragments.
Accordingly, when providing the memory address information of the target application data and the network protocol header to the hardware processing unit of the network card, the processor 120b is specifically configured to: obtain the plurality of second IO vectors from the hash set list; and provide the memory address information recorded by the plurality of second IO vectors and the network protocol header to the hardware processing unit.
Optionally, the processor 120b is further configured to: acquiring a first IO vector from a hash set list; providing the memory address information of the metadata of the target service recorded by the first IO vector to the hardware processing unit, so that the hardware processing unit can read the metadata of the target service from the memory corresponding to the serial processing unit according to the memory address information of the metadata of the target service; and calculating a check code of metadata of the target service; and assembling the network protocol header, the metadata of the target service and the check code of the metadata of the target service into a second message; and sending the second message to the receiving end through the network interface.
In some embodiments, the processor 120b is further configured to: acquire a target message received by the network interface of the network card; acquire payload data and a check code of the payload data from the target message; and perform an accuracy check on the payload data by using the check code of the payload data.
In some alternative embodiments, as shown in fig. 12, the host may further include: a communication component 120c, a power component 120d, and the like. In some embodiments, the host may be implemented as a terminal device such as a computer or a workstation. Accordingly, the host may further include: a display component 120e, an audio component 120f, and the like. Fig. 12 illustrates only some of the components, which does not mean that the host must contain all of the components shown in fig. 12, nor that the host may include only the components shown in fig. 12.
When the host provided in this embodiment is connected to a network card provided with a hardware processing unit, the check code calculation of the target application data can be offloaded to the hardware processing unit of the network card. On the one hand, this releases serial processing resources of the host and realizes hardware acceleration of the check code calculation, improving its processing efficiency. On the other hand, since the network card lies on the transmission link of the target application data, calculating the check code of the target application data on the hardware processing unit of the network card offloads the calculation to hardware associated with the data path, which shortens the data link of the application data and further improves the check code calculation efficiency.
In embodiments of the present application, the memory is used to store a computer program and may be configured to store various other data to support operations on the device on which it resides. The processor may execute the computer program stored in the memory to implement the corresponding control logic. The memory may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
In the embodiments of the present application, the processor may be any hardware processing device that can execute the above-described method logic. Alternatively, the processor may be a central processing unit (CPU), a graphics processing unit (GPU), or a microcontroller unit (MCU); a programmable device such as a field-programmable gate array (FPGA), programmable array logic (PAL), generic array logic (GAL), or a complex programmable logic device (CPLD); or a processor based on an advanced reduced instruction set computer (RISC) architecture, such as an Advanced RISC Machines (ARM) processor, or a system on chip (SoC), but is not limited thereto.
In embodiments of the present application, the communication component is configured to facilitate wired or wireless communication between the device in which it resides and other devices. The device in which the communication component is located may access a wireless network based on a communication standard, such as wireless fidelity (WiFi), 2G, 3G, 4G, 5G, or a combination thereof. In one exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component may also be implemented based on near field communication (NFC) technology, radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
In embodiments of the present application, the display assembly may include a liquid crystal display (Liquid Crystal Display, LCD) and a Touch Panel (TP). If the display assembly includes a touch panel, the display assembly may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or sliding action, but also the duration and pressure associated with the touch or sliding operation.
In embodiments of the present application, the power supply assembly is configured to provide power to the various components of the device in which it is located. The power components may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the devices in which the power components are located.
In embodiments of the present application, the audio component may be configured to output and/or input audio signals. For example, the audio component includes a Microphone (MIC) configured to receive external audio signals when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may be further stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals. For example, for a device with language interaction functionality, voice interaction with a user, etc., may be accomplished through an audio component.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and provide corresponding operation entries for the user to select authorization or rejection.
It should be further noted that the descriptions of "first" and "second" herein are used to distinguish between different messages, devices, modules, and the like; they do not denote an order, nor do they require that the items described as "first" and "second" be of different types.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, CD-ROM (Compact Disc Read-Only Memory), optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs, etc.), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random-access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
The storage medium of the computer is a readable storage medium, which may also be referred to as a readable medium. Readable storage media, including both permanent and non-permanent, removable and non-removable media, may be implemented by any method or technology for information storage. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by the computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (16)

1. A computing system, comprising: a host and a network card; the host is in communication connection with the network card; the network card comprises: a hardware processing unit and a network interface; the computing system further includes: a serial processing unit; the serial processing unit is arranged on the network card or the host; the serial processing unit is in communication connection with the hardware processing unit;
The serial processing unit is operated with a network protocol stack and is used for acquiring the memory address information of the target application data of the application program and generating a network protocol header through the network protocol stack; providing the memory address information of the target application data and the network protocol header to the hardware processing unit;
the hardware processing unit is used for reading the target application data from the memory of the host by utilizing a Direct Memory Access (DMA) mode according to the memory address information of the target application data, and calculating a check code of the target application data; assembling the target application data, the check code of the target application data and the network protocol header into a first message; and sending the first message to a receiving end through the network interface.
2. The system of claim 1, wherein the hardware processing unit or the serial processing unit is further configured to:
acquiring a target message received by the network interface;
acquiring payload data and a check code of the payload data from the target message;
and utilizing the check code of the payload data to check the accuracy of the payload data.
3. The data processing method is suitable for a hardware processing unit on a network card, and is characterized in that the network card is used for being in communication connection with a host; the method comprises the following steps:
acquiring memory address information and a network protocol header of target application data of an application program provided by a serial processing unit; the serial processing unit is arranged on the host or the network card;
reading the target application data from the memory of the host by utilizing a Direct Memory Access (DMA) mode according to the memory address information of the target application data;
calculating a check code of the target application data; assembling the target application data, the check code of the target application data and the network protocol header into a first message;
and sending the first message to a receiving end through a network interface of the network card.
4. The method of claim 3, wherein memory address information of metadata of a target service run by the serial processing unit and the memory address information of the target application data are added by the serial processing unit to a first IO vector and a second IO vector, respectively; the first IO vector and the second IO vector form a hash set list;
The method for acquiring the memory address information of the target application data of the application program provided by the serial processing unit comprises the following steps:
and acquiring memory address information of a second IO vector record read from the hash set list and provided by the serial processing unit.
5. The method of claim 4, wherein there are a plurality of second IO vectors, each of which records memory address information of one of a plurality of data fragments of the target application data; the memory address information of the plurality of data fragments is determined by the serial processing unit according to a data segmentation requirement and the memory address information of the target application data;
the reading the target application data from the memory of the host by using a direct memory access DMA mode according to the memory address information of the target application data includes:
for any one of the plurality of data fragments, reading the any one data fragment from the memory of the host in a DMA mode according to the memory address information recorded by the second IO vector corresponding to the any one data fragment;
the calculating the check code of the target application data comprises the following steps: calculating the check code of any data fragment;
The assembling the target application data, the check code of the target application data and the network protocol header into a first message includes:
assembling the any data fragment, the check code of the any data fragment and the network protocol header into a first message corresponding to the any data fragment;
the sending the first message to a receiving end through the network interface of the network card includes:
and sending the first message corresponding to any data fragment to a receiving end through the network interface.
6. The method according to claim 4, wherein the method further comprises:
acquiring memory address information of metadata of the target service recorded by a first IO vector in the hash set list provided by the serial processing unit;
reading the metadata of the target service from the memory corresponding to the serial processing unit according to the memory address information of the metadata of the target service; and calculating a check code of metadata of the target service;
assembling the network protocol header, the metadata of the target service and the check code of the metadata of the target service into a second message;
and sending the second message to the receiving end through the network interface.
7. The method according to any one of claims 3-6, further comprising:
receiving a target message sent by a sending end through the network interface;
acquiring payload data and a check code of the payload data from the target message;
and utilizing the check code of the payload data to check the accuracy of the payload data.
8. The data processing method is suitable for a serial processing unit on a network card or a host, and is characterized in that the network card is in communication connection with the host; the method comprises the following steps:
acquiring memory address information of target application data of an application program;
generating a network protocol header through an operating network protocol stack;
providing the memory address information of the target application data and the network protocol header to a hardware processing unit of the network card, so that the hardware processing unit can read the target application data from the memory of the host by utilizing a Direct Memory Access (DMA) mode according to the memory address information of the target application data and calculate a check code of the target application data; assembling the target application data, the check code of the target application data and the network protocol header into a first message; and transmitting the first message to a receiving end through a network interface of the network card.
9. The method as recited in claim 8, further comprising:
acquiring memory address information of metadata of a target service operated by the serial processing unit;
adding the memory address information of the metadata of the target service to a first IO vector; adding the memory address information of the target application data to at least one second IO vector;
generating a hash set list according to the first IO vector and the at least one second IO vector;
the providing the memory address information of the target application data and the network protocol header to the hardware processing unit of the network card includes:
obtaining the at least one second IO vector from the hash set list; and providing the memory address information recorded by the at least one second IO vector and the network protocol header to the hardware processing unit.
10. The method of claim 9, wherein there are a plurality of second IO vectors; the method further comprises the steps of:
determining memory address information of a plurality of data fragments to be segmented of the target application data according to data segmentation requirements and the memory address information of the target application data;
the adding the memory address information of the target application data to at least one second IO vector includes:
Adding the memory address information of the plurality of data fragments to a plurality of second IO vectors; the second IO vectors are in one-to-one correspondence with the data slices.
11. The method as recited in claim 9, further comprising:
acquiring the first IO vector from the hash set list; providing the memory address information of the metadata of the target service recorded by the first IO vector to the hardware processing unit, so that the hardware processing unit reads the metadata of the target service from the memory corresponding to the serial processing unit according to the memory address information of the metadata of the target service; and calculating a check code of metadata of the target service; and assembling the network protocol header, the metadata of the target service and the check code of the metadata of the target service into a second message; and sending the second message to the receiving end through the network interface.
12. The method according to any one of claims 8-11, further comprising:
acquiring a target message received by a network interface of the network card;
acquiring payload data and a check code of the payload data from the target message;
And utilizing the check code of the payload data to check the accuracy of the payload data.
13. The data processing method is suitable for a hardware processing unit on a network card and is characterized by comprising the following steps of:
acquiring a target message received by a network interface of the network card;
acquiring payload data and a check code of the payload data from the target message;
and utilizing the check code of the payload data to check the accuracy of the payload data.
14. A network card, the network card comprising: a hardware processing unit and a network interface; the network card is used for being in communication connection with a host;
the hardware processing unit is in communication connection with the serial processing unit; the serial processing unit is arranged on the network card or the host;
the hardware processing unit being adapted to perform the steps of the method of any of claims 3-7 and 13;
the serial processing unit is adapted to perform the steps of the method of any of claims 8-12.
15. A host, comprising: a memory and a processor; wherein the memory is used for storing a computer program; the host is used for being in communication connection with the network card;
The processor is coupled to the memory for executing the computer program for performing the steps in the method of any of claims 8-12.
16. A computer-readable storage medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps in the method of any of claims 3-13.
CN202311598344.2A 2023-11-27 2023-11-27 Computing system, data processing method, network card, host computer and storage medium Active CN117318892B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311598344.2A CN117318892B (en) 2023-11-27 2023-11-27 Computing system, data processing method, network card, host computer and storage medium

Publications (2)

Publication Number Publication Date
CN117318892A true CN117318892A (en) 2023-12-29
CN117318892B CN117318892B (en) 2024-04-02

Family

ID=89260689


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050065133A (en) * 2003-12-24 2005-06-29 한국전자통신연구원 Network card having zero-copy transmission function, server and method thereof
US20070115833A1 (en) * 2005-11-21 2007-05-24 Gerald Pepper Varying the position of test information in data units
CN111800223A (en) * 2019-08-15 2020-10-20 北京京东尚科信息技术有限公司 Method, device and system for generating sending message and processing receiving message
CN114503128A (en) * 2019-10-02 2022-05-13 谷歌有限责任公司 Accelerating embedded layer computations
CN115103036A (en) * 2022-05-20 2022-09-23 中国科学院计算技术研究所 Efficient TCP/IP datagram processing method and system
CN115150472A (en) * 2021-03-16 2022-10-04 华为技术有限公司 Concurrency control method, network card, computer device, and storage medium
CN115733832A (en) * 2022-10-31 2023-03-03 阿里云计算有限公司 Computing device, message receiving method, programmable network card and storage medium
WO2023116141A1 (en) * 2021-12-21 2023-06-29 阿里巴巴(中国)有限公司 Data processing method, system and device, and medium
CN116737069A (en) * 2023-05-19 2023-09-12 曙光信息产业股份有限公司 Data transmission method, device, system and computer equipment


Non-Patent Citations (1)

Title
王立文; 王友祥; 唐雄燕; 杨文聪; 张雪贝; 李沸乐: "5G Core Network UPF Hardware Acceleration Technology" (5G核心网UPF硬件加速技术), Mobile Communications (移动通信), no. 01 *


Similar Documents

Publication Publication Date Title
KR101035302B1 (en) A cloud system and a method of compressing and transmitting files in a cloud system
CN113296718B (en) Data processing method and device
CN111628967B (en) Log data transmission method and device, computer equipment and storage medium
CN111930676A (en) Method, device, system and storage medium for communication among multiple processors
US20230118176A1 (en) Data transmission method and apparatus, computer-readable storage medium, electronic device, and computer program product
WO2020147403A1 (en) Cloud storage based file processing method, system and computer device
CN110888838A (en) Object storage based request processing method, device, equipment and storage medium
CN112615929B (en) Method and equipment for pushing messages
EP3273664B1 (en) Data processing method and device, server, and controller
CN111935227A (en) Method for uploading file through browser, browser and electronic equipment
CN112926059B (en) Data processing method, device, equipment and storage medium
CN113656364B (en) Sensor data processing method, device and computer readable storage medium
CN114788199A (en) Data verification method and device
CN117318892B (en) Computing system, data processing method, network card, host computer and storage medium
CN108833500B (en) Service calling method, service providing method, data transmission method and server
CN111800223A (en) Method, device and system for generating sending message and processing receiving message
CN109213737A (en) A kind of data compression method and apparatus
EP4280053A1 (en) Method and system for upgrading firmware of vehicle infotainment system
CN114780353B (en) File log monitoring method and system and computing device
CN114443525B (en) Data processing system, method, electronic equipment and storage medium
CN112379967B (en) Simulator detection method, device, equipment and medium
CN112688905B (en) Data transmission method, device, client, server and storage medium
CN112511522A (en) Method, device and equipment for reducing memory occupation in detection scanning
CN112650710A (en) Data migration sending method and device, storage medium and electronic device
CN110769027A (en) Service request processing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant