CN105871739B - A method and computing device for processing packets - Google Patents


Info

Publication number
CN105871739B
CN105871739B (application CN201610439598.3A)
Authority
CN
China
Prior art keywords
receiving queue
buffer
descriptor
payload data
message
Prior art date
Legal status
Active
Application number
CN201610439598.3A
Other languages
Chinese (zh)
Other versions
CN105871739A (en)
Inventor
Elena Gurevich
Victor Gissin
Qu Huichun
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201610439598.3A
Publication of CN105871739A
Application granted
Publication of CN105871739B


Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04L: Transmission of digital information, e.g. telegraphic communication
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; congestion control
    • H04L 47/24: Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/34: Flow control; congestion control ensuring sequence integrity, e.g. using sequence numbers
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6275: Queue scheduling characterised by scheduling criteria for service slots or service orders, based on priority
    • H04L 69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/16: Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a method and computing device for processing data packets, relating to the field of network transmission. The computing device establishes a service-flow connection with a peer computing device in advance. The method includes: receiving a packet from the service-flow connection and obtaining the sequence number and payload data carried in the packet; determining, according to the sequence number of the packet, the offset at which the payload data is to be stored in the receive queue corresponding to the service-flow connection; and storing the payload data into the receive queue at that offset. This resolves the packet-processing latency problem of the prior art and improves the processing performance of the computing device.

Description

A method and computing device for processing packets
Technical field
The present invention relates to the field of network transmission, and in particular to a method and computing device for processing packets.
Background technique
To improve the processing capability of network applications, a technology arose that offloads Transmission Control Protocol (TCP) processing onto the network interface card (NIC): the TCP Offload Engine (TOE). With TOE, the NIC performs the link-layer, network-layer, and transport-layer protocol processing, including connection setup and teardown, TCP acknowledgment, and state-management tasks such as out-of-order packet reassembly. During protocol processing the NIC manages receive buffers through a receive queue. Because of network packet loss, delay, and similar effects, the order in which the NIC receives packets may differ from the order in which they were sent, so a reassembly buffer of predefined capacity is needed to sort the received packets; the Transmission Control Protocol/Internet Protocol (TCP/IP) stack then completes the application-layer protocol processing, and finally the application completes the processing of the data packets.
In the prior art, multiple received packets must first be stored in a preset reassembly buffer and sorted there by the TCP sequence number of each packet; the payload data of the sorted packets is then saved, in order, into the receive buffer, and finally the payload data is read from the receive buffer and processed. This implementation lengthens the processing latency of each packet and ultimately limits the processing capability of the computing device.
Summary of the invention
Embodiments of the present invention provide a method and computing device for processing packets, which can reduce packet-processing latency and improve the processing capability of the computing device.
To achieve this object, the present invention adopts the following technical solutions:
In a first aspect, a method for processing packets is provided. The method is applied to a computing device that has established a service-flow connection with a peer computing device in advance. The method includes:
receiving a packet from the service-flow connection, and obtaining the sequence number and payload data carried in the packet;
determining, according to the sequence number of the packet, the offset at which the payload data is to be stored in the receive queue corresponding to the service-flow connection; and
storing the payload data into the receive queue at the determined offset.
Specifically, the NIC of the receiving-end computing device receives a TCP packet from the service-flow connection, completes the link-layer, network-layer, and transport-layer protocol processing, and obtains the TCP sequence number and the payload data carried by the TCP packet, the payload data being the content data of the TCP packet.
Through the foregoing process, when a packet is received, the offset at which its payload data is stored in the receive queue can be determined directly from the packet's TCP sequence number, and the payload data can be stored directly into the corresponding storage space. Thus the packet-processing method provided by this embodiment needs no additional reassembly buffer, nor the prior-art procedure of first sorting the received packets in a reassembly buffer and then reading the payload data of the sorted packets into the receive queue in order; instead, as soon as a packet is received, its payload data is stored into the receive queue according to its sequence number. This significantly reduces packet-processing latency and improves the processing performance of the computing device.
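As a rough, non-authoritative sketch of this direct-placement idea (the byte-addressed queue, the fixed capacity, and the name `place_payload` are illustrative assumptions, not taken from the patent), storing each payload at the offset implied by its sequence number might look like:

```python
class ReceiveQueue:
    """Toy receive queue: a flat byte buffer addressed by TCP sequence number."""

    def __init__(self, initial_seq, capacity):
        self.initial_seq = initial_seq   # sequence number of byte 0 of the queue
        self.capacity = capacity
        self.buf = bytearray(capacity)

    def place_payload(self, seq, payload):
        # The offset is derived directly from the sequence number, so an
        # out-of-order packet lands at its final position immediately;
        # no separate reassembly buffer or sorting pass is needed.
        offset = (seq - self.initial_seq) % (1 << 32)  # TCP seq wraps at 2**32
        if offset + len(payload) > self.capacity:
            raise BufferError("receive queue cannot hold this payload")
        self.buf[offset:offset + len(payload)] = payload
        return offset

q = ReceiveQueue(initial_seq=1000, capacity=64)
q.place_payload(1010, b"WORLD")        # arrives first, out of order
q.place_payload(1000, b"HELLO-ooo-")   # the earlier segment arrives later
print(bytes(q.buf[:15]))               # → b'HELLO-ooo-WORLD'
```

The real mechanism operates on NIC-managed buffer descriptors rather than one flat array, but the offset arithmetic is the same.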
With reference to the first aspect, in a first possible implementation of the first aspect, the receive queue includes at least one buffer descriptor container; each buffer descriptor container includes at least one buffer descriptor and one link buffer descriptor; each buffer descriptor manages a corresponding storage space in the receive buffer; and the link buffer descriptor points to the next buffer descriptor container in the receive queue.
Determining, according to the sequence number of the packet, the offset at which the payload data is stored in the receive queue corresponding to the service-flow connection is then specifically:
identifying, according to the sequence number of the packet, the buffer descriptor, within a buffer descriptor container of the receive queue, that is to store the payload data;
and storing the payload data into the receive queue at the offset is specifically:
storing the payload data into the storage space managed by the identified buffer descriptor in the identified buffer descriptor container.
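Assuming, purely for illustration, that every buffer descriptor manages a fixed-size buffer and every container holds a fixed number of descriptors plus one link descriptor (the constants below are invented, not from the patent), identifying the container and descriptor for a given sequence number reduces to integer arithmetic:

```python
# Invented layout constants: each buffer descriptor manages BUF_SIZE bytes,
# and each container holds BDS_PER_CONTAINER descriptors plus one link
# descriptor pointing at the next container in the receive queue.
BUF_SIZE = 4096
BDS_PER_CONTAINER = 8

def locate(seq, initial_seq):
    """Map a TCP sequence number to (container index, descriptor index,
    byte offset inside that descriptor's buffer)."""
    byte_off = (seq - initial_seq) % (1 << 32)
    bd_index = byte_off // BUF_SIZE
    return (bd_index // BDS_PER_CONTAINER,  # which container along the chain
            bd_index % BDS_PER_CONTAINER,   # which descriptor inside it
            byte_off % BUF_SIZE)            # where inside that buffer

print(locate(9 * BUF_SIZE + 5, 0))  # → (1, 1, 5)
```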
With reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, a shared receive queue is created in advance, the shared receive queue including multiple buffer descriptor containers; then,
if it is determined from the sequence number of the packet that the receive queue cannot hold the payload data, or the usage ratio of the receive queue is judged to have reached a predefined first threshold, at least one buffer descriptor container in the shared receive queue is mounted onto the receive queue.
Through the foregoing description, by using the shared-receive-queue mechanism (associating the receive queues of multiple service-flow connections with the shared receive queue so that they share the storage resources of the receive-buffer resource pool), the prior-art problem that receive buffers in the receive queues of inactive service flows go unused is solved, effectively improving the utilization of the receive-buffer resource pool. Furthermore, when the currently available buffer descriptor containers in a receive queue cannot hold the payload data of a received packet, or the usage of buffer descriptor containers in the receive queue is judged to have reached the predefined first threshold, at least one buffer descriptor container in the shared receive queue can be mounted onto the receive queue, improving service-processing efficiency and reducing processing latency.
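A minimal sketch of the mounting decision (the 0.8 threshold and the name `maybe_mount` are our assumptions; the patent says only "predefined first threshold"):

```python
from collections import deque

FIRST_THRESHOLD = 0.8  # assumed value for the "predefined first threshold"

def maybe_mount(rx_containers, in_use, shared_pool, needed=1):
    """Mount buffer descriptor containers from the shared receive queue onto a
    per-connection receive queue once its usage ratio reaches the threshold."""
    usage = in_use / len(rx_containers) if rx_containers else 1.0
    if usage >= FIRST_THRESHOLD:
        for _ in range(needed):
            if not shared_pool:   # shared receive queue exhausted
                break
            rx_containers.append(shared_pool.popleft())
    return rx_containers

shared = deque(["C4", "C5", "C6"])   # containers held by the shared queue
rx = ["C0", "C1", "C2", "C3"]        # this connection's receive queue
maybe_mount(rx, in_use=4, shared_pool=shared)  # fully used, so one is mounted
print(rx)  # → ['C0', 'C1', 'C2', 'C3', 'C4']
```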
As multiple receive queues continually request buffer descriptor containers from the shared receive queue, when the usage ratio of buffer descriptor containers in the shared receive queue reaches a predefined threshold, or the shared receive queue cannot satisfy a receive queue's demand for buffer descriptor containers, the NIC driver of the computing device can use idle storage space in the receive-buffer resource pool to form new buffer descriptors, fill them into new buffer descriptor containers, and mount the containers assembled from these new buffer descriptors onto the shared receive queue, improving the resource-application efficiency of the receive queues and the processing capability of the computing device.
Those skilled in the art will understand that the idle storage space in the receive-buffer resource pool may be space in the pool that has never been used, or space released after use by a receive queue. When the usage ratio of buffer descriptor containers in the shared receive queue reaches the predefined threshold but there is no idle storage space in the receive-buffer resource pool, the new buffer descriptor container cannot be created, and the administrator can be notified by an alarm or an event to expand the storage space corresponding to the receive buffer.
Further, in combination with a flow-control algorithm, different service priorities can be set during TCP service-flow processing, and the best-matching shared receive queue can be selected for each TCP connection according to its priority, optimizing service performance. In addition, the shared receive queue corresponding to each TCP connection can be adjusted dynamically according to factors such as network traffic and packet loss.
Through the above description, multiple shared receive queues of different depths can be created in advance and, combined with the flow-control algorithm, different service priorities can be set during TCP service-flow processing; the best-matching shared receive queue is selected for each TCP connection according to its priority, optimizing service performance.
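One conceivable best-match policy (entirely our own sketch: the queue names, depths, and priority weighting are invented) is to pick the shallowest shared receive queue whose depth still covers a connection's priority-weighted demand:

```python
# Invented set of pre-created shared receive queues of different depths
# (depth = number of buffer descriptor containers each can supply).
SHARED_QUEUES = {"shallow": 64, "medium": 256, "deep": 1024}

def best_match(expected_burst, priority):
    """Select the shallowest shared receive queue that can still cover the
    connection's demand, doubling the demand for high-priority flows."""
    need = expected_burst * (2 if priority == "high" else 1)
    candidates = [(depth, name) for name, depth in SHARED_QUEUES.items()
                  if depth >= need]
    return min(candidates)[1] if candidates else "deep"

print(best_match(100, "low"))   # → medium
print(best_match(600, "high"))  # → deep (demand 1200 exceeds every depth)
```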
With reference to the first possible implementation of the first aspect, in a third possible implementation of the first aspect, the method includes:
if there is a first buffer descriptor in the receive queue, where first payload data has been stored in the storage space managed by the first buffer descriptor, second payload data has been stored in the storage spaces managed by all buffer descriptors preceding the first buffer descriptor in the receive queue, and the second payload data has been read by the computing device, then
the computing device reads the first payload data saved in the storage space managed by the first buffer descriptor.
Through the above description, after data in the receive buffer managed by the receive queue has been read by the computing device, the storage space can be released; the NIC driver can rejoin the released storage space to the shared receive queue for receive queues to store packet payload data and complete the ordering of out-of-order packets, improving resource utilization.
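The in-order read rule described above (a descriptor's payload is delivered only once everything before it has been filled and read) can be sketched as follows; the dict-based descriptor and the name `readable_prefix` are illustrative assumptions:

```python
def readable_prefix(descriptors):
    """Return the indices of descriptors whose payload becomes readable now.
    Each descriptor is {'filled': bool, 'read': bool}; reading stops at the
    first gap, i.e. the first descriptor not yet filled."""
    out = []
    for i, d in enumerate(descriptors):
        if not d["filled"]:
            break               # a hole in the sequence: later data must wait
        if not d["read"]:
            d["read"] = True    # mark delivered; its buffer can now be freed
            out.append(i)
    return out

bds = [{"filled": True, "read": True},    # already delivered earlier
       {"filled": True, "read": False},   # the "first buffer descriptor"
       {"filled": False, "read": False},  # gap: segment not yet arrived
       {"filled": True, "read": False}]   # out-of-order data, must wait
print(readable_prefix(bds))  # → [1]
```

Once delivered, the freed storage space can be returned to the shared receive queue, as the paragraph above describes.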
In conclusion receiving queue using buffer descriptor container as administrative unit, is completed to connect directly in receiving queue Receiving text sequence can solve out-of-order message sequence bring delay problem in the prior art, reduce Message processing time delay, mention The high processing capacity for calculating equipment.On the other hand, in conjunction with shared receiving queue mechanism, the TCP connection of a plurality of Business Stream is shared Buffer resource pond is received, when storage resource is insufficient in receiving queue or judges that the buffer descriptor container in receiving queue makes When reaching predefined first threshold with ratio, by least one buffer descriptor container carry in shared receiving queue in institute State in receiving queue, on the other hand, when network flow moment leap high or Network Packet Loss serious conditions under, can be connect to shared The available buffer descriptor container of queue application is received, traffic handing capacity is improved, reduces processing delay., with flow control algorithm In conjunction in conjunction with factors such as business processing priority, Network status, dynamic adjusts the associated shared receiving queue of receiving queue, is The shared receiving queue of each TCP connection selection best match, is optimal service feature.
With reference to the first aspect, in a fourth possible implementation of the first aspect, the receive queue includes at least one buffer descriptor, each buffer descriptor managing a corresponding storage space in the receive buffer;
determining, according to the sequence number of the packet, the offset at which the payload data is stored in the receive queue corresponding to the service-flow connection is then specifically:
identifying, according to the sequence number of the packet, the buffer descriptor in the receive queue that is to store the payload data;
and storing the payload data into the receive queue at the offset is specifically:
storing the payload data into the storage space managed by the identified buffer descriptor.
With reference to the fourth possible implementation of the first aspect, in a fifth possible implementation of the first aspect, a shared queue is created in advance, the shared queue including multiple buffer descriptors; then,
if it is determined from the sequence number of the packet that the receive queue cannot hold the payload data, or the usage ratio of the receive queue is judged to have reached a predefined second threshold, at least one buffer descriptor in the shared receive queue is mounted onto the receive queue.
Through the foregoing, when storage resources in the receive queue are insufficient, or the usage ratio of buffer descriptors in the receive queue is judged to have reached the predefined second threshold, at least one buffer descriptor in the shared receive queue can be mounted onto the buffer descriptor chain of the receive queue, improving service-processing efficiency and reducing processing latency.
In addition, as multiple receive queues continually request buffer descriptors from the shared receive queue, when the usage ratio of buffer descriptors in the shared receive queue reaches a predefined threshold, or the shared receive queue cannot satisfy a receive queue's demand for buffer descriptors, the NIC driver of the computing device can use idle storage space in the receive-buffer resource pool to form new buffer descriptors and mount them onto the shared receive queue, improving the resource-application efficiency of the receive queues and the processing capability of the computing device.
Optionally, the NIC driver of the receiving-end computing device may also create multiple shared receive queues of different depths. Within one shared receive queue the buffer descriptors correspond to storage spaces of the same size; the storage sizes corresponding to buffer descriptors in different shared receive queues may be the same or different.
Further, in a specific implementation, combined with a flow-control algorithm, the best-matching shared receive queue can be selected for each receive queue according to different service priorities, optimizing the service-processing performance of the computing device. In addition, the shared receive queue corresponding to each TCP connection is adjusted dynamically according to factors such as network traffic and packet loss.
With the method described in the above embodiments, a receive queue composed of buffer descriptors can likewise solve the packet-data processing-latency problem caused by the reassembly buffer in the prior art: no reassembly buffer need be configured, and instead of sorting packets in a reassembly buffer and then reading the payload data of the sorted packets into the receive queue in order, out-of-order packets are sorted directly in the receive buffer, improving the processing performance of the computing device. Furthermore, with the shared-receive-queue mechanism, multiple service-flow connections share the receive-buffer resource pool: when the usage ratio of buffer descriptors in the receive queue is judged to have reached a predefined threshold, at least one buffer descriptor in the shared receive queue can be mounted onto the receive queue; when network traffic spikes or packet loss is severe, available receive buffers can be requested in advance through the shared receive queue, improving resource utilization and service throughput and reducing processing latency. A flow-control algorithm can also be combined to set different service priorities during TCP service-flow processing and select the best-matching shared receive queue for each TCP connection according to its priority, optimizing service performance.
With reference to the fourth possible implementation of the first aspect, in a sixth possible implementation of the first aspect,
if there is a second buffer descriptor in the receive queue, where third payload data has been stored in the storage space managed by the second buffer descriptor, fourth payload data has been stored in the storage spaces managed by all buffer descriptors preceding the second buffer descriptor in the receive queue, and the fourth payload data has been read by the computing device, then
the computing device reads the third payload data saved in the storage space managed by the second buffer descriptor.
Through the above description, after data in the receive buffer managed by the receive queue has been read by the computing device, the storage space can be released; the NIC driver can rejoin the released storage space to the shared receive queue for receive queues to store packet payload data and complete the ordering of out-of-order packets, improving resource utilization.
In conclusion receiving queue is by buffer descriptor container or buffer descriptor set at directly utilization receives team The managed recombination for receiving buffer area and completing out-of-order message is arranged, equally can solve reassembly buffer area in the prior art causes to report Literary data processing delay problem improves the process performance for calculating equipment.On the other hand, a plurality of in conjunction with shared receiving queue mechanism Business Stream receives buffer resource pond by the way that shared receiving queue is shared, solves in the prior art because of sluggish Business Stream TCP connection be also required to distribution and the corresponding reception buffer area of reception window, actually these buffer areas are possible without immediately It is used, leads to the problem that resource utilization is low, improve the resource utilization for receiving buffer resource pond.And works as and judge institute The buffer descriptor container or buffer descriptor use ratio stated in receiving queue reach predefined thresholds, can also will be total to Enjoy in receiving queue that at least one buffer descriptor container or buffer descriptor carry are in the receiving queue, in network Flow moment leap high or Network Packet Loss serious conditions under, can apply for available receptions buffering in advance by shared receiving queue Area improves resource utilization and traffic handing capacity, reduces processing delay.It can also be in conjunction with flow control algorithm, in TCP traffic Different business priority is set during stream process, is that every TCP connection selects best match is shared to receive team according to priority Column, service feature is optimal.
In a second aspect, a computing device is provided. The computing device includes a processor, a memory, a TCP offload engine NIC, and a bus, where the processor, the memory, and the TCP offload engine NIC communicate with one another over the bus.
The memory contains instructions that are executed by the processor to complete the following actions:
establishing a service-flow connection with a peer computing device, and generating a receive queue for the service-flow connection.
The TCP offload engine NIC is configured to: receive a packet from the service-flow connection, and obtain the sequence number and payload data carried in the packet; determine, according to the sequence number of the packet, the offset at which the payload data is to be stored in the receive queue; and store the payload data into the receive queue at the offset.
With the above device, when a packet is received, the offset at which its payload data is stored in the receive queue can be determined directly from the packet's TCP sequence number, and the payload data can be stored directly into the corresponding storage space. Thus the packet-processing method provided by this embodiment needs no additional reassembly buffer, nor the prior-art procedure of first sorting the received packets in a reassembly buffer and then reading the payload data of the sorted packets into the receive queue in order; instead, as soon as a packet is received, its payload data is stored into the receive queue according to its sequence number. This significantly reduces packet-processing latency and improves the processing performance of the computing device.
With reference to the possible implementation of the second aspect, in a first possible implementation of the second aspect, the receive queue includes at least one buffer descriptor container; each buffer descriptor container is composed of at least one buffer descriptor and one link buffer descriptor; each buffer descriptor manages a corresponding storage space in the receive buffer; and the link buffer descriptor points to the next buffer descriptor container in the receive queue.
Determining, according to the sequence number of the packet, the offset at which the payload data is stored in the receive queue corresponding to the service-flow connection is then specifically:
identifying, according to the sequence number of the packet, the buffer descriptor, within a buffer descriptor container of the receive queue, that is to store the payload data;
and storing the payload data into the receive queue at the offset is specifically:
storing the payload data into the receive-buffer storage space corresponding to the identified buffer descriptor in the identified buffer descriptor container.
Through the foregoing process, the receive buffer in the receive queue is managed through buffer descriptor containers: when a packet is received, the buffer descriptor, within a buffer descriptor container, that is to store the packet's payload data is determined from the packet's TCP sequence number, and the payload data is stored into the corresponding storage space. Thus, in the computing device provided by this embodiment, out-of-order packet reassembly is completed directly in the receive queue according to TCP sequence numbers, with no separately allocated reassembly buffer and no prior-art procedure of first sorting the received packets in a reassembly buffer and then reading the payload data of the sorted packets into the receive queue in order, which reduces packet-processing latency and improves the processing performance of the computing device.
With reference to the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the memory further contains instructions executed by the processor to complete the following actions:
a shared receive queue is created in advance, the shared receive queue including multiple buffer descriptor containers; then,
if it is determined from the sequence number of the packet that the receive queue cannot hold the payload data, or the usage ratio of the receive queue is judged to have reached a predefined first threshold, at least one buffer descriptor container in the shared receive queue is mounted onto the receive queue.
Through the foregoing, by using the shared-receive-queue mechanism (associating the receive queues of multiple service-flow connections with the shared receive queue so that they share the storage resources of the receive-buffer resource pool), the prior-art problem that receive buffers in the receive queues of inactive service flows go unused is solved, effectively improving the utilization of the receive-buffer resource pool. Furthermore, when the currently available buffer descriptor containers in a receive queue cannot hold the payload data of a received packet, or the usage of buffer descriptor containers in the receive queue is judged to have reached the predefined first threshold, at least one buffer descriptor container in the shared receive queue can be mounted onto the receive queue, improving service-processing efficiency and reducing processing latency.
With reference to the first possible implementation of the second aspect, in a third possible implementation of the second aspect, the memory further contains instructions executed by the processor to complete the following actions:
if there is a first buffer descriptor in the receive queue, where first payload data has been stored in the storage space managed by the first buffer descriptor, second payload data has been stored in the storage spaces managed by all buffer descriptors preceding the first buffer descriptor in the receive queue, and the second payload data has been read by the computing device, then
the computing device reads the first payload data saved in the storage space managed by the first buffer descriptor.
Through the above description, after data in the receive buffer managed by the receive queue has been read by the computing device, the storage space can be released; the NIC driver can rejoin the released storage space to the shared receive queue for receive queues to store packet payload data and complete the ordering of out-of-order packets, improving resource utilization.
In summary, the buffer descriptor, within a buffer descriptor container, that is to store a packet's payload data is determined from the packet's TCP sequence number, and the payload data is stored into the corresponding storage space, so out-of-order packet reassembly can be completed directly in the receive queue according to TCP sequence numbers. Compared with the prior art, no separate reassembly buffer is needed, nor the procedure of first sorting the received packets in a reassembly buffer and then reading the payload data of the sorted packets into the receive queue in order, which reduces packet-processing latency and improves the processing performance of the computing device. Furthermore, through the shared-receive-queue mechanism, the TCP connections of multiple service flows share the receive-buffer resource pool: when storage resources in a receive queue are insufficient, or the usage ratio of buffer descriptor containers in the receive queue is judged to have reached a predefined threshold, at least one buffer descriptor container or buffer descriptor in the shared receive queue is mounted onto the receive queue, improving the use and application efficiency of receive-queue storage resources. A flow-control algorithm can also be combined to set different service priorities during TCP service-flow processing and select the best-matching shared receive queue for each TCP connection according to its priority, optimizing service performance.
In conjunction with the possibility implementation of second aspect, in the 4th kind of possible implementation of second aspect, the reception team Column include at least one buffer descriptor, at least one each described buffer descriptor is right in reception buffer area for managing The memory space answered;
Then described determined according to the serial number of the message connects described in storage in corresponding receiving queue in the Business Stream The deviation post of payload data specifically:
Confirmed in the receiving queue according to the serial number of the message and is retouched for storing the buffer area of the payload data State symbol;
Then the payload data is stored into the receiving queue based on the deviation post specifically:
In the memory space that the buffer descriptor that the payload data is stored in the confirmation is managed.
With reference to the fourth possible implementation of the second aspect, in a fifth possible implementation of the second aspect, the memory further stores instructions executed by the processor to complete the following actions:

if it is determined according to the sequence number of the message that the receive queue cannot store the payload data, or it is judged that the use ratio of the receive queue reaches a predefined second threshold, mounting at least one buffer descriptor in the shared receive queue onto the receive queue.

As described above, when storage resources in the receive queue are insufficient, or when the use ratio of buffer descriptors in the receive queue is judged to reach the predefined second threshold, at least one buffer descriptor in the shared receive queue can be mounted onto the buffer descriptor chain of the receive queue, thereby improving service processing efficiency and reducing processing delay.

In addition, as multiple receive queues continually apply to the shared receive queue for buffer descriptors, when the use ratio of buffer descriptors in the shared receive queue meets a predefined threshold, or when the number of available buffer descriptors in the shared receive queue is judged insufficient to satisfy a receive queue's demand for buffer descriptors, the NIC driver of the computing device can form new buffer descriptors from idle storage space in the receive buffer resource pool and mount them onto the shared receive queue, thereby improving the resource-application efficiency of receive queues and the processing capability of the computing device.
Optionally, the NIC driver of the receiving-end computing device may also create multiple shared receive queues of different depths. Within each shared receive queue, the buffer descriptors correspond to storage spaces of the same size; the storage sizes corresponding to buffer descriptors in different shared receive queues may be the same or different.

Further, in a specific implementation, a best-matching shared receive queue may also be selected for each receive queue according to different service priorities in combination with a flow-control algorithm, optimizing the service processing performance of the computing device. On the other hand, the shared receive queue corresponding to each TCP connection may be dynamically adjusted according to factors such as network traffic and packet loss rate.

With the method described in the above embodiments, the receive queue is composed of buffer descriptors, which likewise solves the message-data processing-delay problem caused by the reassembly buffer in the prior art and improves the processing performance of the computing device. On the other hand, when it is judged that the use ratio of buffer descriptor containers in the receive queue reaches a predefined first threshold, at least one buffer descriptor in the shared receive queue can be mounted onto the receive queue, so that the computing device can dynamically expand its receive buffers; under conditions of a momentary surge in network traffic or severe network packet loss, available receive buffers can be applied for in advance through the shared receive queue, improving service processing capability and reducing processing delay. In combination with a flow-control algorithm, different service priorities can also be set during TCP service-flow processing, and a best-matching shared receive queue can be selected for each TCP connection according to its priority, so that service performance is optimized. On another hand, the receive buffer resource pool is managed through the shared receive queue; by having multiple service flows share the receive buffer resource pool through the shared receive queue, this solves the prior-art problem that even the TCP connection of an inactive service flow must be allocated a receive buffer corresponding to its receive window while those buffers may not be used immediately, which leads to low resource utilization; the resource utilization of the receive buffer resource pool is thus improved.
With reference to the fourth possible implementation of the second aspect, in a sixth possible implementation of the second aspect, the memory further stores instructions executed by the processor to complete the following actions:

if a second buffer descriptor exists in the receive queue, where third payload data is stored in the storage space managed by the second buffer descriptor, fourth payload data has been stored in the storage spaces managed by all buffer descriptors preceding the second buffer descriptor in the receive queue, and the fourth payload data has been read by the computing device, then

the computing device reads the third payload data stored in the storage space managed by the second buffer descriptor.

As described above, after the data in the receive buffer managed by the receive queue is read by the computing device, the storage space can be released, and the NIC driver can return the released storage space to the shared receive queue, so that the receive queue can store the payload data of subsequent messages and complete the ordering of out-of-order messages, thereby improving resource utilization.

In summary, the receive queue uses the buffer descriptor as its management unit and completes the ordering of received messages using the receive buffer, which likewise solves the delay problem caused by out-of-order message sorting in the prior art, reduces message processing latency, and improves the processing capability of the computing device. On the other hand, by creating at least one buffer descriptor in the shared receive queue, multiple receive queues share the receive buffer resource pool, improving resource utilization; meanwhile, in combination with a flow-control algorithm and with factors such as service processing priority and network conditions, the shared receive queue associated with each receive queue is dynamically adjusted, and a best-matching shared receive queue is selected for each TCP connection, so that service performance is optimized.
A third aspect provides a computer-readable medium for storing a computer program, the computer program including instructions for executing the method in the first aspect or any possible implementation of the first aspect.

In summary, the receive queue uses the buffer descriptor or buffer descriptor container as its management unit and completes the ordering of out-of-order messages using the receive buffer, which solves the delay problem caused in the prior art by performing out-of-order message sorting with a reassembly buffer, reduces message processing time, and improves the processing capability of the computing device. On the other hand, by creating at least one buffer descriptor or buffer descriptor container in the shared receive queue, multiple receive queues share the receive buffer resource pool, improving resource utilization. On another hand, in combination with a flow-control algorithm and with factors such as service processing priority and network conditions, the shared receive queue associated with each receive queue is dynamically adjusted, and a best-matching shared receive queue is selected for each TCP connection, so that service performance is optimized.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention more clearly, the following briefly describes the accompanying drawings required for the embodiments of the present invention. Apparently, the drawings described below are merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative efforts.
Fig. 1 is a schematic diagram of a receiving-end computing device in the prior art;

Fig. 2 is a flowchart of a method for processing messages according to an embodiment of the present invention;

Fig. 2A is a schematic diagram of processing messages according to an embodiment of the present invention;

Fig. 3 is a schematic diagram of a receive queue expansion method according to an embodiment of the present invention;

Fig. 4A is a schematic diagram of another method of processing messages according to an embodiment of the present invention;

Fig. 4B is a schematic diagram of another receive queue expansion method according to an embodiment of the present invention;

Fig. 5 is a schematic diagram of a computing device according to an embodiment of the present invention.
Description of Embodiments

The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
The present invention is applicable to scenarios in which different computing devices transmit data based on the Transmission Control Protocol/Internet Protocol (TCP/IP). Fig. 1 is a schematic diagram of a receiving-end computing device. As shown, the computing device includes application software, an operating system, a CPU, memory, a network interface card (NIC), and a NIC driver, where the NIC may be a NIC that includes a TCP Offload Engine (TOE). The receiving-end computing device can receive, over a network, messages sent by a sending-end computing device. The computing device may be a server or another entity.

The receiving-end computing device establishes a service-flow connection, such as a TCP connection, with the sending-end computing device in advance. The TCP connection has context information used to record the parameters and state of the TCP connection, including context information of the receive queue (RQ) corresponding to the TCP connection; the receive queue is used to manage the storage space of the receive buffer corresponding to the receive queue.

A person skilled in the art will understand that, during TCP message transmission, the sending-end computing device cuts the data to be sent into several data segments according to the result negotiated with the receiving-end computing device, encapsulates the data of each data segment as the payload data of a TCP message, and sends the message to the receiving-end computing device. The length of each data segment is determined by the result negotiated between the receiving-end and sending-end computing devices, and the lengths may be the same or different; the receiving-end computing device may therefore receive messages containing payload data of the same size or of different sizes. In the detailed description of the embodiments below, the receiving-end computing device receiving several messages of equal length is used as an example.

Each TCP message includes two parts, a TCP header and payload data. The TCP header includes a sequence number, which identifies the byte stream of the data sent from the sending-end computing device to the receiving-end computing device; that is, the sequence number represents the byte count of the first data byte of each data segment sent by the sending-end computing device within the entire data flow to be sent. For example, assume the sending end segments the data flow to be sent in units of 1000 bytes, and each piece of data is carried in one TCP message and sent to the receiving-end computing device; then the sequence number of the first message is 0, the sequence number of the second message is 1000, the sequence number of the third message is 2000, and so on.
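For illustration only, the segmentation and numbering scheme described above can be sketched as follows (a minimal sketch; the function name `segment` and its signature are invented for this example and are not part of the patented method):

```python
def segment(data: bytes, seg_size: int = 1000, initial_seq: int = 0):
    """Split a byte stream into (sequence_number, payload) pairs.

    The sequence number of each segment is the byte offset of its first
    byte within the whole stream, as in the example above (0, 1000, 2000, ...).
    """
    return [(initial_seq + off, data[off:off + seg_size])
            for off in range(0, len(data), seg_size)]
```

With a 2500-byte stream, this yields three segments numbered 0, 1000, and 2000, the last one carrying only the remaining 500 bytes.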
After the TCP connection of the service flow is established, the receiving-end computing device receives the messages of each segment in turn. A person skilled in the art will understand that, due to network packet loss, delay, and other factors, the order in which the receiving-end computing device receives the messages may differ from the order in which the sending-end computing device sends them; that is, the "out-of-order" problem exists. Fig. 2 shows a message processing method provided by an embodiment of the present invention, which is used to solve the prior-art problem that the processing delay after a receiving-end computing device receives out-of-order messages is long, which ultimately affects the processing capability of the computing device. The method includes:

S201. Receive a message from the service-flow connection, and obtain the sequence number and payload data carried in the message.

Specifically, the NIC of the receiving-end computing device receives a TCP message from the service-flow connection, completes link-layer, network-layer, and transport-layer protocol processing, and obtains the TCP sequence number and payload data carried in the TCP message, where the payload data is the content data in the TCP message.

S202. Determine, according to the sequence number of the message, the offset position at which the payload data is stored in the receive queue corresponding to the service-flow connection.

S203. Store the payload data into the receive queue based on the offset position.

The receive queue may be composed of at least one Buffer Descriptor Container (BDC) chained together. Each buffer descriptor container is composed of at least one Buffer Descriptor (BD) and one Linking Buffer Descriptor (LBD). Each buffer descriptor is used to manage a corresponding part of the storage space in the receive buffer; the linking buffer descriptor is used to point to the next buffer descriptor container in the receive queue.
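The queue structure just described, with BDs grouped into BDCs and each BDC linked to the next through its LBD, can be sketched as plain data structures. All names here are hypothetical; this only illustrates the layout, not the claimed implementation:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BufferDescriptor:
    """BD: manages one fixed-size slice of the receive buffer."""
    offset: int                    # start of the slice in the receive buffer
    size: int                      # slice size, e.g. 4 KB
    data: Optional[bytes] = None   # payload stored here, if any

@dataclass
class BufferDescriptorContainer:
    """BDC: a fixed group of BDs; next_bdc plays the role of the LBD."""
    bds: List[BufferDescriptor]
    next_bdc: Optional["BufferDescriptorContainer"] = None

def make_queue(n_bdc: int, bds_per_bdc: int, bd_size: int):
    """Build a receive queue as a chain of BDCs (illustrative only)."""
    head = prev = None
    for c in range(n_bdc):
        bdc = BufferDescriptorContainer(
            bds=[BufferDescriptor(offset=(c * bds_per_bdc + i) * bd_size,
                                  size=bd_size)
                 for i in range(bds_per_bdc)])
        if prev is None:
            head = bdc
        else:
            prev.next_bdc = bdc   # chain via the linking descriptor
        prev = bdc
    return head
```

Building `make_queue(2, 3, 4096)` reproduces the layout of the example that follows: two containers of three 4 KB descriptors each.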
The offset position indicates the buffer descriptor, within a buffer descriptor container in the receive queue, that is used to store the payload data. The offset position for storing the payload data in the receive queue can be determined according to the TCP sequence number obtained in step S201; that is, it is determined which BD, in which BDC included in the receive queue, manages the storage space that will store the payload data.

Illustratively, Fig. 2A is a schematic diagram of the message processing method. As shown, receive queue 1 is composed of buffer descriptor container 1 and buffer descriptor container 2 chained together; each buffer descriptor container includes 3 buffer descriptors, and each buffer descriptor manages a 4 KB storage space in the receive buffer. Assume the payload data of each message is 4 KB; the TCP sequence numbers of the successive messages are then n, n+4K, n+8K, n+12K, n+16K, n+20K, and so on, where the message with TCP sequence number n is the message encapsulating, as payload data, the first of the data segments into which the data to be sent was cut. Times T1, T2, T3, T4, T5, and T6 denote moments at which the receiving-end computing device receives messages, where T1 through T6 are six moments in chronological order. At T1, the message with TCP sequence number n is received and its payload data obtained; according to the TCP sequence number, its offset position is determined to be buffer descriptor 11 in buffer descriptor container 1, so the payload data of the message is stored into the storage space of the receive buffer managed by buffer descriptor 11. At T2, the message with TCP sequence number n+4K is received; according to the TCP sequence number, its offset position is determined to be buffer descriptor 12 in buffer descriptor container 1, so the message is stored into the storage space of the receive buffer managed by buffer descriptor 12. At T3, the message with TCP sequence number n+12K is received; according to its TCP sequence number n+12K, its offset position is determined to be buffer descriptor 21 in buffer descriptor container 2, so the payload data of the message is stored into the storage space of the receive buffer managed by buffer descriptor 21. At T4, the message with TCP sequence number n+16K is received; according to the TCP sequence number n+16K, its offset position is determined to be buffer descriptor 22 in buffer descriptor container 2, so the payload data of the message is stored into the storage space of the receive buffer managed by buffer descriptor 22. Similarly, at T5 and T6 the messages with TCP sequence numbers n+8K and n+20K are received, and according to their TCP sequence numbers they are stored, respectively, into the storage space of the receive buffer managed by buffer descriptor 13 in buffer descriptor container 1 and the storage space of the receive buffer managed by buffer descriptor 23 in buffer descriptor container 2. As this example shows, after receiving each message, the receiving-end computing device stores its payload data, according to the message's TCP sequence number, directly into the corresponding storage space of the receive buffer managed by the receive queue, thereby completing the ordering of out-of-order messages.
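The offset-position computation in this example reduces to simple integer arithmetic. The sketch below assumes, as in the example, fixed 4 KB buffer descriptors and 3 descriptors per container; the function name `locate` is invented for illustration:

```python
def locate(seq: int, base_seq: int, bd_size: int = 4096, bds_per_bdc: int = 3):
    """Map a TCP sequence number to (BDC index, BD index), zero-based.

    The byte offset from the first expected sequence number selects one
    fixed-size BD slot; slots are grouped bds_per_bdc to a container.
    """
    slot = (seq - base_seq) // bd_size
    return slot // bds_per_bdc, slot % bds_per_bdc
```

For the T3 message (sequence number n+12K, taking n = 0), the computation yields container index 1 and descriptor index 0, i.e. buffer descriptor 21 in buffer descriptor container 2 when counting from one, matching the example.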
Through the process described above, the receive buffer in the receive queue is managed by buffer descriptor containers. When a message is received, the buffer descriptor, within the buffer descriptor container, that is to store the payload data of the message is determined according to the TCP sequence number of the message, and the payload data is stored into the corresponding storage space. In this way, in the message processing method provided by this embodiment of the present invention, out-of-order message reassembly can be completed directly in the receive queue according to TCP sequence numbers; there is no need, as in the prior art, to allocate a separate reassembly buffer, nor to first sort the received messages in a reassembly buffer and then read the sorted payload data into the receive queue in order. This reduces message processing delay and improves the processing performance of the computing device.

In the above embodiment, one corresponding receive queue is pre-generated for each service-flow connection, and the receive queue is consistent with the receive-window size negotiated between the receiving-end and sending-end computing devices. The storage space of the receive buffer managed by the receive queue comes from the storage resources managed by the receive buffer resource pool. In a specific implementation, even an inactive service-flow connection must be allocated a receive queue corresponding to its receive window, yet the receive buffers managed by these receive queues may not be used immediately, which easily wastes resources. Therefore, this embodiment of the present invention is further optimized by using a Shared Receive Queue (SRQ) mechanism: the receive queues corresponding to multiple service-flow connections are associated with a shared receive queue so as to share the storage resources of the receive buffer resource pool.
Optionally, for the method described in steps S201 to S203, and in particular before the operation of step S201, the method further includes:

pre-creating at least one Shared Receive Queue (SRQ) in the receive buffer resource pool, each shared receive queue including multiple buffer descriptor containers.

Specifically, the NIC driver of the computing device may pre-create one shared receive queue. The shared receive queue is composed of at least one buffer descriptor container chained together; each buffer descriptor container includes the same number of buffer descriptors, and within the shared receive queue each buffer descriptor manages a storage space of the same size in the receive buffer.

When the receiving-end computing device establishes the service-flow connection, the receive queue of the service-flow connection is associated with the shared receive queue, and when the receive queue is initialized, a predefined number of buffer descriptor containers in the shared receive queue may be mounted onto the receive queue.

Further, as messages from the sending-end computing device are continually received and stored in the receive queue, when it is judged that the use ratio of buffer descriptor containers in the receive queue reaches a predefined first threshold, at least one additional buffer descriptor container in the shared receive queue can be mounted onto the receive queue so as to store the payload data of subsequent messages.
Illustratively, as shown in Fig. 3, assume the predefined first threshold is 50%. When the storage spaces of the receive buffer managed by buffer descriptor 11, buffer descriptor 12, and buffer descriptor 13 in buffer descriptor container 1 of the receive queue are all filled with payload data, buffer descriptor container 11 in the shared receive queue 100 corresponding to the receive queue is mounted onto the buffer descriptor container chain of receive queue 1, and the linking buffer descriptor of buffer descriptor container 2 is pointed to buffer descriptor 111 in buffer descriptor container 11.

Optionally, if it is determined, according to the TCP sequence number of a message received by the NIC of the receiving-end computing device, that the currently available buffer descriptor containers in the receive queue cannot store the payload data of the message, at least one buffer descriptor container in the shared receive queue is mounted onto the receive queue.

Illustratively, when the TCP sequence number of a received message shows that no offset position corresponding to that sequence number exists in the receive queue, at least one available buffer descriptor container must be applied for from the shared receive queue and mounted onto the receive queue to store the payload data of the message. For example, suppose the receive queue includes 2 buffer descriptor containers, each buffer descriptor container includes 3 buffer descriptors, and each buffer descriptor manages a 4 KB storage space in the receive buffer; the receive queue then manages 24 KB of storage space in the receive buffer. Assume the message with TCP sequence number n is the first segment of the original data to be sent. When the receiving-end computing device receives the message with TCP sequence number n+40K, the receive queue cannot store its payload data, so 2 available buffer descriptor containers must be applied for from the shared receive queue and mounted onto the buffer descriptor container chain of receive queue 1 in order to store the payload data of the message with TCP sequence number n+40K.
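As in this example (4 KB descriptors, 3 per container), the expansion step can be sketched as follows. `ensure_capacity` and its arguments are invented names, and the queues are modelled as simple lists of container labels rather than real descriptors:

```python
def ensure_capacity(rq_containers, srq_free, seq, base_seq,
                    bd_size=4096, bds_per_bdc=3):
    """Mount BDCs from the shared receive queue onto a receive queue until
    the slot for sequence number `seq` exists (hypothetical sketch)."""
    # index of the container that would hold this sequence number, plus one
    needed_bdc = (seq - base_seq) // bd_size // bds_per_bdc + 1
    mounted = 0
    while len(rq_containers) < needed_bdc and srq_free:
        # carry one available BDC from the SRQ onto the RQ chain
        rq_containers.append(srq_free.pop(0))
        mounted += 1
    return mounted
```

For the example values (a 24 KB queue of 2 containers receiving sequence number n+40K with n = 0), the sketch mounts exactly the 2 additional containers the text describes.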
As described above, by using the shared-receive-queue mechanism, that is, by associating the receive queues corresponding to multiple service-flow connections with a shared receive queue so that they share the storage resources of the receive buffer resource pool, the prior-art problem that the receive buffers in the receive queues of inactive service flows go unused is solved, effectively improving the utilization of the receive buffer resource pool. On the other hand, when the currently available buffer descriptor containers in a receive queue cannot store the payload data of a received message, or when it is judged that the usage of buffer descriptor containers in the receive queue reaches the predefined first threshold, at least one buffer descriptor container in the shared receive queue can be mounted onto the receive queue, thereby improving service processing efficiency and reducing processing delay.

As multiple receive queues continually apply to the shared receive queue for buffer descriptor containers, when the use ratio of buffer descriptor containers in the shared receive queue reaches a predefined threshold, the NIC driver of the computing device can form new buffer descriptors from idle storage space in the receive buffer resource pool, fill them into new buffer descriptor containers, with multiple new buffer descriptors being grouped into each new buffer descriptor container, and mount the containers onto the shared receive queue, thereby improving the resource-application efficiency of receive queues and the processing capability of the computing device.
Illustratively, when the use ratio of buffer descriptor containers in the shared receive queue reaches 70%, new buffer descriptors can be constructed from idle storage space, multiple new buffer descriptors assembled into a new buffer descriptor container, and the new buffer descriptor container mounted onto the shared receive queue for receive queues to apply for and use, thereby improving service processing efficiency and reducing message processing latency.

Optionally, when the available buffer descriptor containers in the shared receive queue cannot satisfy the number of buffer descriptor containers required by a receive queue, the NIC driver of the computing device can form new buffer descriptors from idle storage space in the receive buffer resource pool, fill them into new buffer descriptor containers, group multiple new buffer descriptors into new buffer descriptor containers, and mount them onto the shared receive queue, thereby improving the resource-application efficiency of receive queues and the processing capability of the computing device.

Illustratively, when a receive queue needs to apply for 3 buffer descriptor containers but only 2 buffer descriptor containers are available in the shared receive queue, the NIC driver can form a new buffer descriptor container from idle storage space in the receive buffer resource pool and mount it onto the buffer descriptor container chain of the receive queue.
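The replenishment rule just described can be sketched as follows. This is a hypothetical illustration only: the resource pool and queue are modelled as simple lists, and the administrator alarm raised when the pool is empty is modelled as an exception:

```python
def replenish_srq(srq_free, pool_free_bds, demand, bds_per_bdc=3):
    """Form new BDCs from idle buffer-pool space until the shared receive
    queue holds at least `demand` available containers (invented sketch)."""
    while len(srq_free) < demand:
        if len(pool_free_bds) < bds_per_bdc:
            # no idle space left: the driver would raise an alarm so the
            # administrator can expand the receive buffer, as described above
            raise RuntimeError("receive buffer resource pool exhausted")
        # group bds_per_bdc idle slices into one new container
        srq_free.append([pool_free_bds.pop() for _ in range(bds_per_bdc)])
    return srq_free
```

For the example values (a receive queue demanding 3 containers while only 2 are available), one new container is formed from the idle space and mounted.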
A person skilled in the art will understand that the idle storage space in the receive buffer resource pool may be storage space in the receive buffer resource pool that has never been used, or storage space released after use by a receive queue. When the use ratio of buffer descriptor containers in the shared receive queue reaches the predefined threshold but there is no idle storage space in the receive buffer resource pool, the new buffer descriptor container cannot be created; in that case, the administrator can be notified, by alarm or event, to expand the storage space corresponding to the receive buffer.
Optionally, multiple shared receive queues of different depths may also be created at the same time.

Illustratively, the NIC driver may establish multiple shared receive queues simultaneously. Within the same shared receive queue, each buffer descriptor container includes the same number of buffer descriptors; in different shared receive queues, the number of buffer descriptors included in each buffer descriptor container may be the same or different.

It is worth noting that the storage sizes managed by the buffer descriptors within the same shared receive queue are identical, while the storage sizes managed by buffer descriptors in different shared receive queues may be the same or different; the present invention imposes no restriction on this.
Further, in combination with a flow-control algorithm, different service priorities can be set during TCP service-flow processing, and a best-matching shared receive queue can be selected for each TCP connection according to its priority, so that service performance is optimized. On the other hand, the shared receive queue corresponding to each TCP connection can be dynamically adjusted according to factors such as network traffic and packet loss rate.

Illustratively, after the receiving-end and sending-end computing devices establish a TCP connection, receive queue 1 corresponding to the TCP connection is associated with shared receive queue 100 according to the service priority of the TCP connection; in shared receive queue 100, each buffer descriptor container includes 3 buffer descriptors, and each buffer descriptor manages a 4 KB storage space in the receive buffer. When the network environment between the sending-end and receiving-end computing devices is good, for example when few of the messages received by the receiving-end computing device are out of order, the receiving-end computing device adjusts the shared receive queue corresponding to receive queue 1 to shared receive queue 200, in which each buffer descriptor container includes 2 buffer descriptors and each buffer descriptor manages a 2 KB storage space in the receive buffer, so as to achieve optimal service performance.
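The dynamic adjustment in this example can be sketched as a trivial policy function. The catalog entries mirror shared receive queues 100 and 200 above, while the decision rule, its threshold, and all names are invented for illustration and are not part of the claimed method:

```python
# (descriptors per container, descriptor size in bytes), as in the example
SRQ_CATALOG = {100: (3, 4096), 200: (2, 2048)}

def pick_srq(out_of_order_rate: float, threshold: float = 0.1) -> int:
    """Pick an SRQ id for a connection from its observed disorder level.

    Heavy reordering favors the deeper queue with larger descriptors
    (SRQ 100); a well-behaved link can use the lighter SRQ 200.
    """
    return 100 if out_of_order_rate > threshold else 200
```

A real driver would fold in service priority, traffic volume, and packet loss rate as the text describes; the single-metric rule here only shows where such a decision would plug in.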
Through the above description, when the use ratio of buffer descriptor containers in a shared receive queue reaches a predefined threshold, or when the shared receive queue cannot satisfy the number of buffer descriptor containers required by a receive queue, the NIC driver can use idle storage resources to create new buffer descriptor containers and mount them on the buffer descriptor container chain of the shared receive queue, thereby reducing the time a receive queue spends applying to the shared receive queue for storage resources and improving service processing efficiency. On the other hand, multiple shared receive queues of different depths can be created in advance and, in combination with a flow control algorithm, different service priorities can be set during TCP service flow processing so that a best-matching shared receive queue is selected for each TCP connection according to its priority, optimizing service performance.
In conclusion determining that the buffer descriptor for storing the payload data of the message holds according to the TCP serial number of message Buffer descriptor in device, and by payload data storage to corresponding memory space, can directly in receiving queue root Out-of-order Packet reassembling is completed according to TCP serial number, compared with prior art, does not need separately to divide reassembly buffer area, also without first The multiple messages received are ranked up in reassembly buffer area and then in order read the payload data of the message after sequence The process for getting receiving queue thus reduces the processing delay of message, and improves the process performance for calculating equipment.Another party Face, by sharing receiving queue mechanism, the TCP connection of a plurality of Business Stream is shared to receive buffer resource pond, solves existing skill Because the TCP connection of sluggish Business Stream is also required to distribution and the corresponding reception buffer area of reception window in art, actually this A little buffer areas are possible without to be used immediately, is led to the problem that resource utilization is low, is improved and receive buffer resource pond Resource utilization.And when storage resource is insufficient in receiving queue or judges that the buffer descriptor container in receiving queue uses When ratio reaches predefined first threshold, by least one buffer descriptor container carry in shared receiving queue described In receiving queue, on the other hand, when network flow moment leap high or Network Packet Loss serious conditions under, can be to shared reception The available buffer descriptor container of queue application improves traffic handing capacity, reduces processing delay.It can also be calculated with flow control Method combines, and different business priority is arranged during TCP traffic stream process, is that every TCP connection selects 
most according to priority It is good to match shared receiving queue, service feature is optimal.
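The core placement rule summarized above, deriving the target container, descriptor, and byte offset directly from the TCP sequence number, can be sketched as follows. The buffer size, container depth, and function name are assumptions for illustration only.

```python
BUF_SIZE = 4096            # bytes managed per buffer descriptor (assumed)
DESCS_PER_CONTAINER = 3    # descriptors per container (assumed)

def locate(seq, base_seq):
    """Return (container_index, descriptor_index, byte_offset) for a segment."""
    rel = seq - base_seq                   # offset into the byte stream
    desc = rel // BUF_SIZE                 # position on the descriptor chain
    return (desc // DESCS_PER_CONTAINER,   # which container on the chain
            desc % DESCS_PER_CONTAINER,    # slot within that container
            rel % BUF_SIZE)                # offset within the buffer

# A segment at sequence number base+16K lands in container 1, slot 1:
print(locate(16384, 0))  # -> (1, 1, 0)
```

Because the slot is a pure function of the sequence number, an out-of-order segment is written straight to its final position; no separate reassembly buffer or sorting pass is needed.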
In another possible embodiment of the present invention, after the NIC of the computing device completes the link layer, network layer, and transport layer protocol processing tasks, the computing device can read the payload data to complete the application layer processing. Receive buffer resources are released in units of buffer descriptors, and the method comprises:
If, on the chain of buffer descriptor containers managed by the receive queue, the storage space of the receive buffer managed by any buffer descriptor has been filled with payload data; and every buffer descriptor from the chain head of the container chain up to that buffer descriptor has had the storage space of its receive buffer filled with payload data; and the payload data in the storage space of the receive buffers managed by the buffer descriptors preceding that buffer descriptor has been read by the computing device; then the computing device reads the payload data stored in the storage space of the receive buffer managed by that buffer descriptor.
Illustratively, in the message processing flow diagram shown in Figure 2A: at time T1, the storage space managed by buffer descriptor 11 is filled by the payload data of the message with TCP sequence number n; buffer descriptor 11 is the chain-head buffer descriptor of the receive queue, so after the receiving-end computing device reads the payload data, the storage space of the receive buffer managed by buffer descriptor 11 is released. At time T2, the storage space of the receive buffer managed by buffer descriptor 12 is filled by the payload data of the message with TCP sequence number n+4K; on the buffer descriptor container chain of the receive queue, the payload data in the storage space of the receive buffer managed by buffer descriptor 11, which precedes buffer descriptor 12, has already been read, so the receiving-end computing device reads the payload data and releases the storage space of the receive buffer managed by buffer descriptor 12. At time T3, the storage space managed by buffer descriptor 21 is filled by the payload data of the message with TCP sequence number n+12K, but at this point, on the buffer descriptor container chain of the receive queue, the storage space of the receive buffer managed by buffer descriptor 13, which precedes buffer descriptor 21, has not yet been filled with payload data, so nothing is done. At time T4, the storage space managed by buffer descriptor 22 is filled by the payload data of the message with TCP sequence number n+16K, but at this point the storage space of the receive buffer managed by buffer descriptor 13, which precedes buffer descriptor 22 on the container chain, has still not been filled with payload data; and although the storage space of the receive buffer managed by buffer descriptor 21 has been filled with payload data, the receiving-end computing device has not yet read it, so again nothing is done. At time T5, the storage space managed by buffer descriptor 13 is filled by the payload data of the message with TCP sequence number n+8K; at this point the storage spaces managed by all buffer descriptors preceding buffer descriptor 13 have been filled with payload data and read by the receiving-end computing device, so the receiving-end computing device reads the payload data in the storage space managed by buffer descriptor 13 and releases the storage space of the receive buffer managed by buffer descriptor 13. At time T6, the storage space managed by buffer descriptor 23 is filled by the payload data of the message with TCP sequence number n+20K; at this point, on the buffer descriptor container chain managed by the receive queue, the storage spaces of the receive buffers managed by buffer descriptors 21 and 22, which precede buffer descriptor 23, have each been filled with payload data but not yet read by the receiving-end computing device, while the payload data in the storage spaces corresponding to buffer descriptors 11, 12, and 13 has been read; the receiving-end computing device therefore reads, in turn, the payload data stored in the storage spaces of the receive buffers managed by buffer descriptors 21, 22, and 23, and releases the storage spaces corresponding to buffer descriptors 21, 22, and 23. At this point the storage spaces of the receive buffers managed by all buffer descriptors in buffer descriptor container 1 have been released, and buffer descriptor container 1 becomes an invalid buffer descriptor container. The NIC driver refills the buffer descriptor container using idle storage space in the receive buffer resource pool, and after refilling mounts it again on the chain of the shared receive queue for receive queues to request and use.
It is worth noting that, in a specific implementation, the release of the storage space corresponding to the receive buffers managed by buffer descriptors may proceed on a predefined cycle: each buffer descriptor on the buffer descriptor container chain managed by the receive queue is inspected one by one from the chain head to the chain tail, and when the storage space of the receive buffer managed by any buffer descriptor has been filled with payload data, and the storage spaces of the receive buffers managed by all buffer descriptors preceding it in the receive queue have been filled with payload data and read by the receiving-end computing device, the storage space of the receive buffers managed by that buffer descriptor and by all buffer descriptors preceding it is released.
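The cyclic head-to-tail release scan can be sketched as a prefix walk over the descriptor chain. The dictionary fields and function name are assumptions; a real driver would operate on hardware descriptor rings rather than Python objects.

```python
def release_prefix(chain):
    """chain: list of descriptors (head first) with 'filled'/'read' flags.
    Release every leading descriptor that is both filled and already read;
    stop at the first descriptor that is not. Returns (released, remaining)."""
    n = 0
    for d in chain:
        if d["filled"] and d["read"]:
            n += 1
        else:
            break  # in-order release: stop at the first hole or unread buffer
    return chain[:n], chain[n:]

chain = [{"filled": True, "read": True},
         {"filled": True, "read": True},
         {"filled": True, "read": False},   # filled but not yet read: blocks release
         {"filled": False, "read": False}]
released, remaining = release_prefix(chain)
print(len(released), len(remaining))  # -> 2 2
```

Releasing only a contiguous prefix matches the rule above: a descriptor is never freed while an earlier descriptor on the chain is still unfilled or unread.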
It should also be noted that the computing device releases the storage space of the receive buffers managed by the receive queue in that, specifically, the NIC driver releases receive buffers in units of buffer descriptors, and the procedure for releasing control of a receive buffer differs between operating systems. For example, in the Windows operating system, the NIC driver hands control of the buffer over to the operating system protocol stack of the computing device and constructs new buffer descriptors and new buffer descriptor containers from idle storage space; whereas in the Linux operating system, after the NIC driver hands over control of a buffer, control is not returned to it, but the operating system is responsible for releasing the buffer after use, and a new buffer must be requested again and associated with the corresponding buffer descriptor.
Through the above description, after the data in the receive buffers managed by a receive queue has been read by the computing device, the storage space can be released, and the NIC driver can rejoin the released storage space to the shared receive queue so that receive queues can use it to store the payload data of messages and complete the ordering of out-of-order messages, which improves resource utilization.
In conclusion receiving queue is using buffer descriptor container as administrative unit, it is possible to reduce calculate equipment CPU and providing Consumption in source control is directly completed to receive message sequence in receiving buffer queue, can solve out-of-order in the prior art Message sequence bring delay problem, reduces Message processing time delay, improves the processing capacity for calculating equipment.On the other hand, By creating the shared receiving queue of multiple and different depth, in conjunction with flow control algorithm, in conjunction with business processing priority, Network status Etc. factors, dynamic adjust the associated shared receiving queue of receiving queue, for each TCP connection select best match shared reception Queue is optimal service feature.
In another possible embodiment of the present invention, in combination with the method described in steps S201 to S203, the receive queue may also contain at least one buffer descriptor directly, each buffer descriptor being used to manage the corresponding storage space in the receive buffer.
According to the TCP sequence number of a received message, the NIC identifies, in the receive queue, the buffer descriptor that is to store the payload data, and then stores the payload data into the storage space of the receive buffer corresponding to the identified buffer descriptor.
Illustratively, as shown in Figure 4A, receive queue 2 includes buffer descriptor 21, buffer descriptor 22, buffer descriptor 23, and buffer descriptor 24, and each buffer descriptor manages 4KB of storage space in the receive buffer. Suppose the payload data of each message is 4KB, so that n, n+4K, n+8K, and n+12K can be used in turn as the TCP sequence numbers of the messages, and suppose the message with TCP sequence number n is the TCP message encapsulating, as payload data, the first of the data segments into which the data to be sent has been divided. Times T1, T2, T3, and T4 denote four moments in chronological order. At time T1, the message with TCP sequence number n is received; the TCP sequence number and the payload data in the message are obtained, the offset position is identified from the TCP sequence number as buffer descriptor 21, and the payload data of the message is stored into the storage space of the receive buffer managed by buffer descriptor 21. At time T2, the message with TCP sequence number n+8K is received; the TCP sequence number and payload data in the message are obtained, the offset position is identified from the TCP sequence number as buffer descriptor 23, and the payload data is stored into the storage space of the receive buffer managed by buffer descriptor 23. At time T3, the message with TCP sequence number n+4K is received; the TCP sequence number and payload data in the message are obtained, the offset position is identified from the TCP sequence number as buffer descriptor 22, and the payload data is stored into the storage space of the receive buffer managed by buffer descriptor 22. At time T4, the message with TCP sequence number n+12K is received; the TCP sequence number and payload data in the message are obtained, the offset position is identified from the TCP sequence number as buffer descriptor 24, and the payload data of the message is stored into the storage space of the receive buffer managed by buffer descriptor 24. As this example shows, after the receiving-end computing device receives a message, the payload data in the message is stored directly, according to the TCP sequence number of the message, into the storage space of the receive buffers managed by the receive queue, and the ordering of out-of-order messages is thereby completed.
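Under the assumptions of the Figure 4A example (4KB payloads, four descriptors), the out-of-order arrivals can be simulated as follows; the variable names, payload labels, and the base sequence number are illustrative.

```python
BUF = 4096
base = 1000                 # arbitrary stand-in for initial sequence number n
slots = [None] * 4          # buffer descriptors 21..24

def store(seq, payload):
    # The slot index is a pure function of the TCP sequence number.
    slots[(seq - base) // BUF] = payload

# Arrival order from the example: n, n+8K, n+4K, n+12K
for seq, data in [(base, "seg0"), (base + 8192, "seg2"),
                  (base + 4096, "seg1"), (base + 12288, "seg3")]:
    store(seq, data)

print(slots)  # -> ['seg0', 'seg1', 'seg2', 'seg3'] -- already in order
```

Although two segments arrived out of order, no sorting step was ever performed; each write landed at its final position.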
Optionally, to address the situation in which the TCP connection of an inactive service flow applies for receive buffer resources that cannot be used immediately, at least one shared receive queue may also be created in advance, each shared receive queue containing multiple buffer descriptors.
When establishing a service flow connection, the receiving-end computing device then associates the receive queue of that service flow connection with the shared receive queue, and when the receive queue is initialized, a predefined number of buffer descriptors in the shared receive queue can be mounted in the receive queue.
Further, as the computing device continually receives messages from the transmitting end and stores these messages in the receive queue, when the use ratio of buffer descriptors in the receive queue reaches a predefined second threshold, at least one new buffer descriptor is requested from the shared receive queue and mounted in the receive queue.
Illustratively, as shown in Figure 4B, suppose the predefined second threshold is 50%. When the storage spaces of the receive buffers managed by buffer descriptor 21 and buffer descriptor 22 in receive queue 2 have both been filled with payload data, the use ratio of buffer descriptors in the receive queue reaches 50%; at least one buffer descriptor in shared receive queue 200 is then mounted in receive queue 2, for example buffer descriptor 201 can be chained to the tail of buffer descriptor 24.
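The second-threshold rule in this example might be sketched as follows; the 50% threshold follows the example, while the data structures and function name are assumptions.

```python
THRESHOLD = 0.5  # predefined second threshold from the example

def maybe_grow(queue, shared, n=1):
    """If the filled ratio reaches the threshold, move up to n descriptors
    from the shared receive queue to the tail of the receive queue."""
    filled = sum(1 for d in queue if d["filled"])
    if queue and filled / len(queue) >= THRESHOLD and shared:
        queue.extend(shared.pop(0) for _ in range(min(n, len(shared))))
    return queue

rq = [{"filled": True, "id": 21}, {"filled": True, "id": 22},
      {"filled": False, "id": 23}, {"filled": False, "id": 24}]
srq = [{"filled": False, "id": 201}, {"filled": False, "id": 202}]
maybe_grow(rq, srq)
print([d["id"] for d in rq])  # -> [21, 22, 23, 24, 201]
```

Growing at the tail preserves the sequence-number-to-slot mapping of the existing descriptors, which is why new descriptors are chained after descriptor 24 rather than inserted mid-chain.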
Optionally, if it is determined from the TCP sequence number of a message that none of the buffer descriptors currently available in the receive queue can hold the payload data, at least one new buffer descriptor is requested from the shared receive queue and mounted in the receive queue.
Illustratively, when the TCP sequence number of a received message identifies no offset position in the receive queue, at least one available buffer descriptor must be requested from the shared receive queue and mounted in the receive queue to store the payload data of the message. For example, if the receive queue contains 4 buffer descriptors and each buffer descriptor manages 4KB of storage space in the receive buffer, the storage space of the receive buffers currently managed by the receive queue is 16KB. Suppose the message with TCP sequence number n carries the first segment of the data to be sent; when the message with TCP sequence number n+20K is received, the receive queue cannot hold the payload data carried in that message, so 2 buffer descriptors are requested from the shared receive queue to store the payload data carried by the message with TCP sequence number n+20K.
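The arithmetic of this example, deciding how many extra descriptors the n+20K message requires, can be checked with a short sketch; the function name is an assumption.

```python
BUF = 4096  # bytes managed per buffer descriptor

def descriptors_needed(seq_offset, payload_len, current_descs):
    """Extra descriptors to request so that [seq_offset, seq_offset+payload_len)
    fits inside the receive queue's managed storage."""
    end_desc = (seq_offset + payload_len - 1) // BUF + 1  # descriptors required
    return max(0, end_desc - current_descs)

# 16KB queue (4 descriptors), segment at offset 20K carrying a 4KB payload:
print(descriptors_needed(20480, 4096, 4))  # -> 2
```

The segment's end falls at byte 24K, i.e. the 6th descriptor-sized slot, so a queue of 4 descriptors must request 2 more, matching the example.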
From the above, when the storage resources in a receive queue are insufficient, or when the use ratio of buffer descriptors in the receive queue is judged to reach the predefined second threshold, at least one buffer descriptor in the shared receive queue can also be mounted on the buffer descriptor chain of the receive queue, thereby improving service processing efficiency and reducing processing delay.
In addition, as multiple receive queues continually request buffer descriptors from the shared receive queue, when the use ratio of buffer descriptors in the shared receive queue meets a predefined threshold, the NIC driver of the computing device can form new buffer descriptors from idle storage space in the receive buffer resource pool and mount them in the shared receive queue, improving the resource request efficiency of the receive queues and the processing capability of the computing device.
Illustratively, when the use ratio of buffer descriptors in the shared receive queue reaches 70%, new buffer descriptors can be constructed from idle storage space and mounted in the shared receive queue for receive queues to request and use.
Optionally, when the number of available buffer descriptors in the shared receive queue cannot satisfy the buffer descriptor demand of a receive queue, the NIC driver of the computing device can form new buffer descriptors from idle storage space in the receive buffer resource pool and mount them in the shared receive queue, improving the resource request efficiency of the receive queues and the processing capability of the computing device.
Illustratively, when a receive queue needs to request 3 buffer descriptors but only 2 available buffer descriptors remain in the shared receive queue, the NIC driver can form new buffer descriptors from idle storage space in the receive buffer resource pool and mount them in the shared receive queue for storing payload data.
It will be understood by those skilled in the art that the idle storage space in the receive buffer resource pool can be storage space in the pool that has never been used, or storage space released after use by a receive queue. When the use ratio of buffer descriptors in the shared receive queue reaches the predefined threshold or the shared receive queue cannot satisfy the buffer descriptor demand of the receive queues, but there is no idle storage space in the receive buffer resource pool, the new buffer descriptors cannot be created; the administrator can then be notified, by means of an alarm or an event, to expand the storage space corresponding to the receive buffers.
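The replenish-or-alarm behavior described above can be sketched as follows; the pool model, the descriptor objects, and the alarm flag are illustrative assumptions rather than the patented implementation.

```python
BUF = 4096  # bytes backing each new buffer descriptor

def replenish(shared_queue, pool_free_bytes, needed):
    """Try to add `needed` descriptors from idle pool space.
    Returns (shared_queue, remaining_pool_bytes, alarm)."""
    can_make = min(needed, pool_free_bytes // BUF)
    shared_queue.extend({"filled": False} for _ in range(can_make))
    # If the pool cannot back the full request, signal the administrator.
    alarm = can_make < needed
    return shared_queue, pool_free_bytes - can_make * BUF, alarm

# Pool with 8KB free can only back 2 of the 3 requested descriptors:
q, free, alarm = replenish([], 8192, 3)
print(len(q), free, alarm)  # -> 2 0 True
```

In a real driver the alarm would be raised through the platform's event or alert mechanism so that the operator can expand the receive buffer pool.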
Optionally, the NIC driver of the receiving-end computing device can also create multiple shared receive queues of different depths; within each shared receive queue the buffer descriptors correspond to storage spaces of the same size, while across different shared receive queues the storage space sizes corresponding to the buffer descriptors may be the same or different.
Further, in a specific implementation, a flow control algorithm can also be combined so that, according to different service priorities, a best-matching shared receive queue is selected for each receive queue, optimizing the service processing performance of the computing device. On the other hand, the shared receive queue corresponding to each TCP connection is adjusted dynamically according to factors such as network traffic and packet loss rate.
Illustratively, after the receiving-end computing device establishes a TCP connection with the transmitting-end computing device, receive queue 2 corresponding to that TCP connection is associated with shared receive queue 100 according to the service priority of the connection, where each buffer descriptor in shared receive queue 100 manages 4KB of storage space in the receive buffer. When the network environment between the transmitting-end and receiving-end computing devices is good, for example when the messages received by the receiving-end computing device include few out-of-order messages, the receiving-end computing device adjusts the shared receive queue corresponding to receive queue 2 to shared receive queue 200, where each buffer descriptor in shared receive queue 200 manages 2KB of storage space in the receive buffer, thereby optimizing service performance.
By the method described in the above embodiments, the receive queue is composed of buffer descriptors, which likewise solves the message data processing delay caused by the reassembly buffer in the prior art and improves the processing performance of the computing device. On the other hand, when the use ratio of buffer descriptors in the receive queue is judged to reach the predefined threshold, at least one buffer descriptor in the shared receive queue can be mounted in the receive queue; and when network traffic surges momentarily or packet loss is severe, available receive buffers can be requested in advance from the shared receive queue, improving service processing capability and reducing processing delay. A flow control algorithm can also be combined so that different service priorities are set during TCP service flow processing and a best-matching shared receive queue is selected for each TCP connection according to its priority, optimizing service performance. Furthermore, by managing the receive buffer resource pool through shared receive queues, multiple service flows share the receive buffer resource pool. This solves the prior-art problem of low resource utilization caused by the fact that even the TCP connection of an inactive service flow must be allocated a receive buffer corresponding to its receive window, although such buffers may never be used immediately, and thus improves the resource utilization of the receive buffer resource pool.
In another possible embodiment of the present invention, after the NIC of the computing device completes the link layer, network layer, and transport layer protocol processing tasks, the computing device can read the payload data to complete the application layer processing. Receive buffer resources are released in units of buffer descriptors, and the method comprises:
If, on the chain of buffer descriptors managed by the receive queue, the storage space of the receive buffer managed by any buffer descriptor has been filled with payload data; and every buffer descriptor from the chain head of the descriptor chain up to that buffer descriptor has had the storage space of its receive buffer filled with payload data; and the payload data in the storage space of the receive buffers managed by the buffer descriptors preceding that buffer descriptor in the receive queue has been read by the computing device; then the computing device reads the payload data stored in the storage space of the receive buffer managed by that buffer descriptor.
Illustratively, as shown in Figure 4A: at time T1, the filling of the storage space of the receive buffer managed by buffer descriptor 21 is completed, and buffer descriptor 21 is at the chain head of the buffer descriptor chain of receive queue 2, so the receiving-end computing device reads the payload data stored in the storage space of the receive buffer managed by buffer descriptor 21 and releases that storage space. At time T2, buffer descriptor 23 is filled with payload data, but at this point, on the buffer descriptor chain of the receive queue, the storage space of the receive buffer managed by buffer descriptor 22, which precedes buffer descriptor 23, has not yet been filled with payload data, so nothing is done. At time T3, the storage space of the receive buffer managed by buffer descriptor 22 is filled with payload data, and on the buffer descriptor chain of the receive queue the storage space of the receive buffer managed by buffer descriptor 21, which precedes buffer descriptor 22, has been filled with payload data and read by the computing device, so the receiving-end computing device reads the payload data in the storage space corresponding to buffer descriptor 22 and releases the storage space of the receive buffer managed by buffer descriptor 22. At time T4, the storage space of the receive buffer managed by buffer descriptor 24 is filled with payload data; on the buffer descriptor chain of the receive queue, the payload data in the storage spaces of the receive buffers managed by buffer descriptors 21 and 22, which precede buffer descriptor 24, has been read by the receiving-end computing device, and the storage space of the receive buffer managed by buffer descriptor 23 has been filled with payload data; the computing device therefore reads, in turn, the payload data stored in the storage spaces of the receive buffers managed by buffer descriptors 23 and 24, and releases the storage spaces corresponding to buffer descriptors 23 and 24, so that the released storage space can be used for handling other message processing requests.
It is worth noting that, in a specific implementation, the release of the storage space corresponding to the receive buffers managed by buffer descriptors may proceed on a predefined cycle: each buffer descriptor on the buffer descriptor chain managed by the receive queue is inspected one by one from the chain head to the chain tail, and when the storage space of the receive buffer managed by any buffer descriptor has been filled with payload data, and the storage spaces of the receive buffers managed by all buffer descriptors preceding it in the receive queue have been filled with payload data and read by the receiving-end computing device, the storage space of the receive buffers managed by that buffer descriptor and by all buffer descriptors preceding it is released.
It should also be noted that the computing device releases the storage space of the receive buffers managed by the receive queue in that, specifically, the NIC driver releases receive buffers in units of buffer descriptors, and the procedure for releasing control of a receive buffer differs between operating systems. For example, in the Windows operating system, the NIC driver hands control of the buffer over to the operating system protocol stack of the computing device and constructs new buffer descriptors and new buffer descriptor containers from idle storage space; whereas in the Linux operating system, after the NIC driver hands over control of a buffer, control is not returned to it, but the operating system is responsible for releasing the buffer after use, and a new buffer must be requested again and associated with the corresponding buffer descriptor.
Through the above description, after the data in the receive buffers managed by a receive queue has been read by the computing device, the storage space can be released, and the NIC driver can rejoin the released storage space to the shared receive queue so that receive queues can use it to store the payload data of messages and complete the ordering of out-of-order messages, which improves resource utilization.
In conclusion receiving queue using buffer descriptor as administrative unit, is completed to receive message using buffer area is received Sequence equally can solve out-of-order message sequence bring delay problem in the prior art, reduce Message processing time delay, improve Calculate the processing capacity of equipment.On the other hand, multiple by creating at least one buffer descriptor in shared receiving queue Receiving queue is shared to receive buffer resource pond, improves resource utilization;Meanwhile in conjunction with flow control algorithm, at business The factors such as priority, Network status are managed, dynamic adjusts the associated shared receiving queue of receiving queue, selects for each TCP connection The shared receiving queue of best match, is optimal service feature.
It is worth noting that the method and processing procedure provided by the present invention can be applied to network transmission technologies such as TCP, the Internet Small Computer System Interface (iSCSI) offload engine (iSCSI Offload Engine, IOE), and the Internet Wide-Area RDMA Protocol (iWARP); combined with shared receive queues of different depths, they can effectively improve out-of-order message processing capability and optimize resource utilization.
It is worth noting that, for simplicity of description, the above method embodiments are expressed as a series of action combinations, but those skilled in the art should understand that the present invention is not limited by the order of the actions described; furthermore, those skilled in the art should also be aware that the embodiments described in this specification are all preferred embodiments, and the actions involved are not necessarily required by the present invention.
Other reasonable step combinations that those skilled in the art can conceive from the above content also fall within the protection scope of the present invention. Again, those skilled in the art should be aware that the embodiments described in this specification are preferred embodiments, and the actions involved are not necessarily required by the present invention.
The method for processing a message according to the embodiments of the present invention has been described in detail above with reference to Fig. 1 to Fig. 4. A computing device 500 according to an embodiment of the present invention is described below with reference to the schematic diagram of Fig. 5. As shown, the computing device 500 includes a processor 501, a memory 502, a TCP offload engine network interface card 503, and a bus 504. The processor 501, the memory 502, and the TCP offload engine network interface card 503 communicate with each other through the bus 504.
The processor 501 may be a general-purpose central processing unit (Central Processing Unit, CPU) or a microprocessor.
The memory 502 may be a read-only memory (Read Only Memory, ROM), a static storage device, a dynamic storage device, or a random access memory (Random Access Memory, RAM). The memory 502 may store an operating system and instructions, where the instructions are program code for implementing any optional technical solution provided in the foregoing method embodiments of the present invention and are executed by the processor 501.
Specifically, the instructions are executed by the processor 501 to complete the following actions:
establishing a service flow connection with a peer computing device, and generating a receiving queue for the service flow connection.
The TCP offload engine network interface card is configured to: receive a message from the service flow connection, and obtain the sequence number and payload data carried in the message; determine, according to the sequence number of the message, the offset position at which the payload data is to be stored in the receiving queue; and store the payload data into the receiving queue based on the offset position.
With the above apparatus, when a message is received, the offset position for storing the message's payload data in the receiving queue can be determined directly from the TCP sequence number of the message, and the payload data can be stored directly into the corresponding storage space. In this way, the message processing method provided by the embodiment of the present invention needs no additional reassembly buffer, nor the process of first sorting multiple received messages in a reassembly buffer and then reading the sorted payload data into the receiving queue in order. Instead, after a message is received, its payload data is stored directly into the receiving queue according to its sequence number, which significantly reduces message processing delay and improves the processing performance of the computing device.
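The direct-placement idea described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the names `ReceiveQueue` and `store` are my own, and real hardware would write into descriptor-managed buffers rather than a flat `bytearray`.

```python
# Hypothetical sketch: the payload of an out-of-order TCP segment is written
# directly into the receive queue's buffer at the offset derived from its
# sequence number, with no separate reassembly buffer or sorting pass.

class ReceiveQueue:
    def __init__(self, base_seq: int, size: int):
        self.base_seq = base_seq      # TCP sequence number mapped to offset 0
        self.buf = bytearray(size)    # receive buffer managed by the queue
        self.filled = [False] * size  # which bytes have already arrived

    def store(self, seq: int, payload: bytes) -> None:
        # The offset position follows directly from the segment's sequence
        # number, so segments can land in any arrival order.
        off = seq - self.base_seq
        if off < 0 or off + len(payload) > len(self.buf):
            raise ValueError("segment falls outside the receive queue window")
        self.buf[off:off + len(payload)] = payload
        for i in range(off, off + len(payload)):
            self.filled[i] = True

rq = ReceiveQueue(base_seq=1000, size=16)
rq.store(1007, b"world!")   # later segment arrives first
rq.store(1000, b"hello, ")  # earlier segment arrives second
```

Because each segment's landing offset is fixed by its sequence number, the buffer is already in order once the gaps fill in, which is the property the embodiment relies on.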
Optionally, the receiving queue includes at least one buffer descriptor container, where each buffer descriptor container consists of at least one buffer descriptor and one link buffer descriptor. Each buffer descriptor is used to manage a corresponding storage space in the receive buffer, and the link buffer descriptor is used to point to the next buffer descriptor container in the receiving queue.
Then determining, according to the sequence number of the message, the offset position for storing the payload data in the receiving queue corresponding to the service flow connection is specifically:
confirming, according to the sequence number of the message, the buffer descriptor in the buffer descriptor container of the receiving queue that is used to store the payload data;
and storing the payload data into the receiving queue based on the offset position is specifically:
storing the payload data into the storage space of the receive buffer corresponding to the buffer descriptor in the confirmed buffer descriptor container.
Through the foregoing process, the receive buffer is managed in the receiving queue by buffer descriptor containers. When a message is received, the buffer descriptor in the buffer descriptor container that is to store the message's payload data is determined according to the TCP sequence number of the message, and the payload data is stored into the corresponding storage space. In this way, in the computing device provided by the embodiment of the present invention, out-of-order message reassembly can be completed directly in the receiving queue according to TCP sequence numbers, without allocating a separate reassembly buffer as in the prior art, and without the process of first sorting multiple received messages in a reassembly buffer and then reading the sorted payload data into the receiving queue in order. This reduces message processing delay and improves the processing performance of the computing device.
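A minimal sketch of the container layout just described, under stated assumptions: the container capacity, buffer size, and the `locate` helper are invented for illustration; the patent does not prescribe these values.

```python
# Illustrative sketch: a receiving queue made of descriptor containers. Each
# container holds a fixed number of buffer descriptors plus one link
# descriptor pointing at the next container; the descriptor covering a given
# byte offset is found by walking the chain.

BUF_SIZE = 4             # bytes of receive buffer behind each descriptor
DESCS_PER_CONTAINER = 2  # buffer descriptors per container (assumed)

class Container:
    def __init__(self):
        self.descs = [bytearray(BUF_SIZE) for _ in range(DESCS_PER_CONTAINER)]
        self.link = None  # link buffer descriptor: next container in the queue

def locate(head, offset):
    """Return (container, descriptor index, offset within that buffer)."""
    span = DESCS_PER_CONTAINER * BUF_SIZE
    c = head
    while offset >= span:
        offset -= span
        c = c.link        # follow the link descriptor to the next container
    return c, offset // BUF_SIZE, offset % BUF_SIZE

head = Container()
head.link = Container()
c, d, o = locate(head, 10)  # byte 10 lives in the second container
```

Walking the link descriptors this way is what lets a sequence-number offset be mapped to a specific buffer descriptor without any intermediate reassembly structure.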
Optionally, the memory further includes instructions executed by the processor 501 to complete the following actions:
pre-creating a shared receiving queue, where the shared receiving queue includes multiple buffer descriptor containers; then
if it is confirmed according to the sequence number of the message that the receiving queue cannot save the payload data, or it is judged that the use ratio of the receiving queue reaches a predefined first threshold, mounting at least one buffer descriptor container in the shared receiving queue onto the receiving queue.
Through the above description, by using the shared receiving queue mechanism, the receiving queues corresponding to multiple service flow connections are associated through the shared receiving queue with the storage resources of a shared receive buffer resource pool. This solves the prior-art problem that the receive buffers in the receiving queues of inactive service flows are not used immediately, and effectively improves the utilization of the receive buffer resource pool. Furthermore, when the currently available buffer descriptor containers in the receiving queue cannot save the payload data of a received message, or it is judged that the usage of buffer descriptor containers in the receiving queue reaches the predefined first threshold, at least one buffer descriptor container in the shared receiving queue can be mounted onto the receiving queue, thereby improving service processing efficiency and reducing processing delay.
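The mounting behavior can be illustrated roughly as follows; the threshold value, list-based representation, and function name are assumptions for the sketch, not details from the patent.

```python
# Hedged sketch: when a receiving queue's descriptor usage crosses a
# predefined first threshold, mount a descriptor container borrowed from
# the shared receiving queue onto it.

FIRST_THRESHOLD = 0.8  # predefined first threshold (value assumed)

def maybe_grow(queue, capacity, shared_pool, used):
    """Mount one shared container onto the queue if usage is at threshold."""
    if capacity and used / capacity >= FIRST_THRESHOLD and shared_pool:
        queue.append(shared_pool.pop(0))
    return queue

shared = ["ctr_a", "ctr_b"]   # containers held by the shared receiving queue
q = ["ctr_0"]                 # one connection's receiving queue
maybe_grow(q, capacity=10, shared_pool=shared, used=8)  # 80% usage: grows
maybe_grow(q, capacity=10, shared_pool=shared, used=2)  # below threshold: no-op
```

The same pattern covers the other trigger in the text, where a segment's sequence number falls outside what the current containers can hold.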
Optionally, the memory further includes instructions executed by the processor 501 to complete the following actions:
if there is a first buffer descriptor in the receiving queue, where first payload data has been stored in the storage space managed by the first buffer descriptor, second payload data has been stored in the storage spaces managed by all buffer descriptors before the first buffer descriptor in the receiving queue, and the second payload data has been read by the computing device, then
the computing device reads the first payload data saved in the storage space managed by the first buffer descriptor.
Through the above description, after the data in the receive buffer managed by the receiving queue has been read by the computing device, the storage space can be released, and the network interface card driver can rejoin the released storage space to the shared receiving queue for receiving queues to store message payload data. This completes the sorting of out-of-order messages and improves resource utilization.
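The in-order read condition above can be sketched as a small helper; the boolean-array representation and the name `readable_prefix` are illustrative assumptions.

```python
# Minimal sketch: the host may read a descriptor's payload only once every
# descriptor ahead of it has been filled and already consumed, so data
# always leaves the queue in sequence order even when it arrived out of order.

def readable_prefix(filled, consumed):
    """Index one past the last descriptor the host may read next."""
    i = consumed
    while i < len(filled) and filled[i]:
        i += 1
    return i

filled = [True, True, False, True]   # descriptor 2 still missing
limit_before = readable_prefix(filled, consumed=0)  # host can read 0 and 1
filled[2] = True                     # the gap segment arrives
limit_after = readable_prefix(filled, consumed=2)   # the rest is now readable
```

Once a prefix has been read, its storage can be released back to the shared receiving queue, matching the release-and-rejoin behavior described in the text.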
In summary, the buffer descriptor in the buffer descriptor container that is to store the message's payload data is determined according to the TCP sequence number of the message, and the payload data is stored into the corresponding storage space, so out-of-order message reassembly can be completed directly in the receiving queue according to TCP sequence numbers. Compared with the prior art, no separate reassembly buffer is needed, nor the process of first sorting multiple received messages in a reassembly buffer and then reading the sorted payload data into the receiving queue in order; this reduces message processing delay and improves the processing performance of the computing device. Furthermore, through the shared receiving queue mechanism, the TCP connections of multiple service flows share a receive buffer resource pool. When the storage resources in the receiving queue are insufficient, or it is judged that the use ratio of buffer descriptor containers in the receiving queue reaches the predefined first threshold, at least one buffer descriptor container in the shared receiving queue is mounted onto the receiving queue. Thus, when network traffic surges momentarily or network packet loss is severe, available buffer descriptor containers can be requested from the shared receiving queue, improving service processing capability and reducing processing delay. In addition, combined with a flow control algorithm, different service priorities can be set during TCP service flow processing, and a best-matching shared receiving queue can be selected for each TCP connection according to priority, so that service performance is optimized.
Optionally, the receiving queue includes at least one buffer descriptor, where each buffer descriptor is used to manage a corresponding storage space in the receive buffer.
Then determining, according to the sequence number of the message, the offset position for storing the payload data in the receiving queue corresponding to the service flow connection is specifically:
confirming, in the receiving queue according to the sequence number of the message, the buffer descriptor used to store the payload data;
and storing the payload data into the receiving queue based on the offset position is specifically:
storing the payload data into the storage space managed by the confirmed buffer descriptor.
Optionally, the memory further includes instructions executed by the processor 501 to complete the following actions:
if it is confirmed according to the sequence number of the message that the receiving queue cannot save the payload data, or it is judged that the use ratio of the receiving queue reaches a predefined second threshold, mounting at least one buffer descriptor in the shared receiving queue onto the receiving queue.
Through the above description, when the storage resources in the receiving queue are insufficient, or it is judged that the use ratio of buffer descriptors in the receiving queue reaches the predefined second threshold, at least one buffer descriptor in the shared receiving queue can be mounted onto the buffer descriptor chain of the receiving queue, thereby improving service processing efficiency and reducing processing delay.
In addition, as multiple receiving queues continually request buffer descriptors from the shared receiving queue, when the use ratio of buffer descriptors in the shared receiving queue meets a predefined threshold, or it is judged that the number of available buffer descriptors in the shared receiving queue cannot satisfy the receiving queues' demand for buffer descriptors, the network interface card driver of the computing device can form new buffer descriptors from idle storage space in the receive buffer resource pool and mount them onto the shared receiving queue, thereby improving the resource application efficiency of the receiving queues and the processing capability of the computing device.
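A rough sketch of this replenishment step, with an invented low-water mark and dictionary-based descriptors; the patent does not specify these details.

```python
# Assumed sketch: when the shared receiving queue runs low, the NIC driver
# forms new buffer descriptors out of idle space in the receive buffer
# resource pool and mounts them onto the shared queue.

LOW_WATER = 2  # minimum descriptors the shared queue should hold (assumed)

def replenish(shared_queue, idle_pool):
    """Top up the shared queue with descriptors built from idle pool space."""
    while len(shared_queue) < LOW_WATER and idle_pool:
        space = idle_pool.pop()                       # claim an idle block
        shared_queue.append({"desc_id": len(shared_queue), "space": space})
    return shared_queue

sq = []                          # shared receiving queue, now empty
pool = ["blk0", "blk1", "blk2"]  # idle blocks in the receive buffer pool
replenish(sq, pool)
```

Replenishing from the pool rather than allocating fresh memory is what keeps the resource application path cheap for the receiving queues that draw on the shared queue.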
Optionally, the network interface card driver of the receiving-end computing device can also create multiple shared receiving queues of different depths. Within each shared receiving queue, the buffer descriptors correspond to storage spaces of the same size; the storage space sizes corresponding to buffer descriptors in different shared receiving queues may be the same or different.
Further, in a specific implementation, a best-matching shared receiving queue can also be selected for each receiving queue according to different service priorities in combination with a flow control algorithm, optimizing the service processing performance of the computing device. Moreover, the shared receiving queue corresponding to each TCP connection can be dynamically adjusted according to factors such as network traffic and packet loss rate.
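One possible selection policy consistent with the paragraph above, offered purely as an assumption: choose the shallowest shared receiving queue that still covers a connection's demand, falling back to the deepest when none does.

```python
# Illustrative sketch (policy assumed, not from the patent): pick, per TCP
# connection, the shared receiving queue whose depth best matches the flow's
# demand as estimated from priority and network conditions.

def best_match(depths, demand):
    """Smallest shared-queue depth that still covers the demand."""
    fitting = [d for d in depths if d >= demand]
    return min(fitting) if fitting else max(depths)

shared_depths = [64, 256, 1024]       # depths of the pre-created shared queues
choice_normal = best_match(shared_depths, demand=100)
choice_bursty = best_match(shared_depths, demand=2000)  # exceeds all depths
```

A real implementation would re-evaluate this choice as traffic and packet-loss measurements change, which is the dynamic adjustment the text describes.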
With the method described in the above embodiments, the receiving queue is composed of buffer descriptors, which likewise solves the message processing delay caused by the reassembly buffer in the prior art and improves the processing performance of the computing device. Furthermore, combined with the shared receiving queue mechanism, multiple service flows share the receive buffer resource pool through the shared receiving queue. This solves the prior-art problem of low resource utilization, where the TCP connection of an inactive service flow also required a receive buffer matching its receive window to be allocated even though the buffer might not be used immediately, and improves the utilization of the receive buffer resource pool. When it is judged that the use ratio of buffer descriptor containers in the receiving queue reaches the predefined first threshold, at least one buffer descriptor in the shared receiving queue can also be mounted onto the receiving queue, so the computing device can dynamically expand the receive buffer. When network traffic surges momentarily or network packet loss is severe, available receive buffers can be requested in advance through the shared receiving queue, improving service processing capability and reducing processing delay. Combined with a flow control algorithm, different service priorities can be set during TCP service flow processing, and a best-matching shared receiving queue can be selected for each TCP connection according to priority, so that service performance is optimized.
Optionally, the memory further includes instructions executed by the processor 501 to complete the following actions:
if there is a second buffer descriptor in the receiving queue, where third payload data has been stored in the storage space managed by the second buffer descriptor, fourth payload data has been stored in the storage spaces managed by all buffer descriptors before the second buffer descriptor in the receiving queue, and the fourth payload data has been read by the computing device, then
the computing device reads the third payload data saved in the storage space managed by the second buffer descriptor.
Through the above description, after the data in the receive buffer managed by the receiving queue has been read by the computing device, the storage space can be released, and the network interface card driver can rejoin the released storage space to the shared receiving queue for receiving queues to store message payload data. This completes the sorting of out-of-order messages and improves resource utilization.
The apparatus 500 according to the embodiment of the present invention may correspondingly execute the methods described in the embodiments of the present invention, and the above and other operations and/or functions of the units in the apparatus 500 are respectively intended to implement the corresponding procedures of the methods in Fig. 1 to Fig. 4B. For brevity, details are not described herein again.
In conclusion receiving queue using buffer descriptor as administrative unit, is completed to receive message using buffer area is received Sequence, equally can solve out-of-order message sequence bring delay problem in the prior art, reduce Message processing time delay, mention The high processing capacity for calculating equipment.On the other hand, more by creating at least one buffer descriptor in shared receiving queue A receiving queue is shared to receive buffer resource pond, improves resource utilization;Meanwhile in conjunction with flow control algorithm, in conjunction with business The factors such as processing priority, Network status, dynamic adjust the associated shared receiving queue of receiving queue, select for each TCP connection The shared receiving queue for selecting best match, is optimal service feature.
Those of ordinary skill in the art may appreciate that the units and algorithm steps described with reference to the examples disclosed in the embodiments herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered to go beyond the scope of the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described herein again.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. For example, the division into units is merely a logical function division; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces; the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may physically exist alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific embodiments, but the protection scope of the present invention is not limited thereto. Any variation or replacement that a person familiar with the art can readily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

1. A method for processing a message, wherein the method is applied in a computing device, and the computing device has established a service flow connection with a peer computing device in advance; the method includes:
receiving a message from the service flow connection, and obtaining a sequence number and payload data carried in the message;
determining, according to the sequence number of the message, an offset position for storing the payload data in a receiving queue corresponding to the service flow connection;
storing the payload data into the receiving queue based on the offset position.
2. The method according to claim 1, wherein the receiving queue includes at least one buffer descriptor container, each buffer descriptor container includes at least one buffer descriptor and one link buffer descriptor, each buffer descriptor is used to manage a corresponding storage space in a receive buffer, and the link buffer descriptor is used to point to the next buffer descriptor container in the receiving queue;
then determining, according to the sequence number of the message, the offset position for storing the payload data in the receiving queue corresponding to the service flow connection is specifically:
confirming, according to the sequence number of the message, the buffer descriptor in the buffer descriptor container of the receiving queue that is used to store the payload data;
and storing the payload data into the receiving queue based on the offset position is specifically:
storing the payload data into the storage space managed by the buffer descriptor in the confirmed buffer descriptor container.
3. The method according to claim 2, wherein a shared receiving queue is pre-created, and the shared receiving queue includes multiple buffer descriptor containers; then
if it is confirmed according to the sequence number of the message that the receiving queue cannot save the payload data, or it is judged that the use ratio of the receiving queue reaches a predefined first threshold, at least one buffer descriptor container in the shared receiving queue is mounted onto the receiving queue.
4. The method according to claim 2, wherein the method includes:
if there is a first buffer descriptor in the receiving queue, where first payload data has been stored in the storage space managed by the first buffer descriptor, second payload data has been stored in the storage spaces managed by all buffer descriptors before the first buffer descriptor in the receiving queue, and the second payload data has been read by the computing device, then
the computing device reads the first payload data saved in the storage space managed by the first buffer descriptor.
5. The method according to claim 1, wherein the receiving queue includes at least one buffer descriptor, and each buffer descriptor is used to manage a corresponding storage space in a receive buffer;
then determining, according to the sequence number of the message, the offset position for storing the payload data in the receiving queue corresponding to the service flow connection is specifically:
confirming, in the receiving queue according to the sequence number of the message, the buffer descriptor used to store the payload data;
and storing the payload data into the receiving queue based on the offset position is specifically:
storing the payload data into the storage space managed by the confirmed buffer descriptor.
6. The method according to claim 5, wherein a shared receiving queue is pre-created, and the shared receiving queue includes multiple buffer descriptors; then
if it is confirmed according to the sequence number of the message that the receiving queue cannot save the payload data, or it is judged that the use ratio of the receiving queue reaches a predefined second threshold, at least one buffer descriptor in the shared receiving queue is mounted onto the receiving queue.
7. The method according to claim 5, wherein the method includes:
if there is a second buffer descriptor in the receiving queue, where third payload data has been stored in the storage space managed by the second buffer descriptor, fourth payload data has been stored in the storage spaces managed by all buffer descriptors before the second buffer descriptor in the receiving queue, and the fourth payload data has been read by the computing device, then
the computing device reads the third payload data saved in the storage space managed by the second buffer descriptor.
8. A computing device, including a processor, a memory, a TCP offload engine network interface card, and a bus, where the processor, the memory, and the TCP offload engine network interface card communicate with each other through the bus;
the memory contains instructions, and the instructions are executed by the processor to complete the following actions:
establishing a service flow connection with a peer computing device, and generating a receiving queue for the service flow connection;
the TCP offload engine network interface card is configured to: receive a message from the service flow connection, and obtain a sequence number and payload data carried in the message; determine, according to the sequence number of the message, an offset position for storing the payload data in the receiving queue; and store the payload data into the receiving queue based on the offset position.
9. The computing device according to claim 8, wherein the receiving queue includes at least one buffer descriptor container, each buffer descriptor container consists of at least one buffer descriptor and one link buffer descriptor, each buffer descriptor is used to manage a corresponding storage space in a receive buffer, and the link buffer descriptor is used to point to the next buffer descriptor container in the receiving queue;
then determining, according to the sequence number of the message, the offset position for storing the payload data in the receiving queue corresponding to the service flow connection is specifically:
confirming, according to the sequence number of the message, the buffer descriptor in the buffer descriptor container of the receiving queue that is used to store the payload data;
and storing the payload data into the receiving queue based on the offset position is specifically:
storing the payload data into the storage space of the receive buffer corresponding to the buffer descriptor in the confirmed buffer descriptor container.
10. The computing device according to claim 9, wherein the memory further includes instructions executed by the processor to complete the following actions:
pre-creating a shared receiving queue, where the shared receiving queue includes multiple buffer descriptor containers; then
if it is confirmed according to the sequence number of the message that the receiving queue cannot save the payload data, or it is judged that the use ratio of the receiving queue reaches a predefined first threshold, mounting at least one buffer descriptor container in the shared receiving queue onto the receiving queue.
11. The computing device according to claim 9, wherein the memory further includes instructions executed by the processor to complete the following actions:
if there is a first buffer descriptor in the receiving queue, where first payload data has been stored in the storage space managed by the first buffer descriptor, second payload data has been stored in the storage spaces managed by all buffer descriptors before the first buffer descriptor in the receiving queue, and the second payload data has been read by the computing device, then
the computing device reads the first payload data saved in the storage space managed by the first buffer descriptor.
12. The computing device according to claim 8, wherein the receiving queue includes at least one buffer descriptor, and each buffer descriptor is used to manage a corresponding storage space in a receive buffer;
then determining, according to the sequence number of the message, the offset position for storing the payload data in the receiving queue corresponding to the service flow connection is specifically:
confirming, in the receiving queue according to the sequence number of the message, the buffer descriptor used to store the payload data;
and storing the payload data into the receiving queue based on the offset position is specifically:
storing the payload data into the storage space managed by the confirmed buffer descriptor.
13. The computing device according to claim 12, wherein the memory further includes instructions executed by the processor to complete the following actions:
if it is confirmed according to the sequence number of the message that the receiving queue cannot save the payload data, or it is judged that the use ratio of the receiving queue reaches a predefined second threshold, mounting at least one buffer descriptor in a shared receiving queue onto the receiving queue.
14. The computing device according to claim 12, wherein the memory further includes instructions executed by the processor to complete the following actions:
if there is a second buffer descriptor in the receiving queue, where third payload data has been stored in the storage space managed by the second buffer descriptor, fourth payload data has been stored in the storage spaces managed by all buffer descriptors before the second buffer descriptor in the receiving queue, and the fourth payload data has been read by the computing device, then
the computing device reads the third payload data saved in the storage space managed by the second buffer descriptor.
CN201610439598.3A 2016-06-17 2016-06-17 A kind of method and calculating equipment of processing message Active CN105871739B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610439598.3A CN105871739B (en) 2016-06-17 2016-06-17 A kind of method and calculating equipment of processing message

Publications (2)

Publication Number Publication Date
CN105871739A CN105871739A (en) 2016-08-17
CN105871739B true CN105871739B (en) 2018-12-07



Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101253745A (en) * 2005-07-18 2008-08-27 博通以色列研发公司 Method and system for transparent TCP offload
CN101662461A (en) * 2008-08-27 2010-03-03 华为技术有限公司 Method, device and system for recombining multilink protocol fragment data
CN101917472A (en) * 2010-08-12 2010-12-15 北京星网锐捷网络技术有限公司 Method, device and equipment for recombining multi-link messages
CN103546424A (en) * 2012-07-10 2014-01-29 华为技术有限公司 TCP (transmission control protocol) data transmission method and TCP unloading engine and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A zero-copy data transmission method for TCP/IP offload; Wang Xiaofeng, Shi Xiangquan, Su Jinshu; Computer Engineering & Science (《计算机工程与科学》); 2008-12-31; pp. 135-138 *
Design and implementation of a TCP/IP offload architecture; Chen Xuxi, Zeng Wenhai, Cheng Yangjun; 《计算机***应用》; 2009-12-31; pp. 125-128 *

Similar Documents

Publication Publication Date Title
CN105871739B (en) A kind of method and calculating equipment of processing message
CN104539440B (en) Traffic management with in-let dimple
US8144719B2 (en) Methods and system to manage data traffic
CN105577576B (en) Distribution switch framework
CN108536543A (en) With the receiving queue based on the data dispersion to stride
CN108259383A (en) The transmission method and the network equipment of a kind of data
CN109117270A (en) The method for improving network packet treatment effeciency
CN105101305A (en) Network-side buffer management
CN106850402A (en) The transmission method and device of message
CN109039936A (en) Transmission rate control, device, sending device and receiving device
US20180321981A1 (en) System and method for self organizing data center
CN112041826B (en) Fine-grained traffic shaping offload for network interface cards
CN106095572A (en) The Dispatching System of a kind of big data process and method
CN109714268A (en) A kind of flow control methods and relevant apparatus of virtual private cloud
CN108768873A (en) A kind of flow control methods and relevant device
CN108540380A (en) More subflow network transfer methods and device
US11042414B1 (en) Hardware accelerated compute kernels
WO2020142867A1 (en) Traffic shaping method and related device
CN109002363A (en) A kind of event-handling method, device, equipment and readable storage medium storing program for executing
CN109450701A (en) A kind of virtual switch switching method, device, host and storage medium
CN103441879A (en) Method and device for switching network channels
US20050063308A1 (en) Method of transmitter oriented link flow control
CN104780118A (en) Fluid control method and device based on tokens
CN103701721B (en) Message transmitting method and device
US11223569B2 (en) Device, method, and system that virtualize a network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant