CN105610814A - Method and system for reducing packet forwarding delay - Google Patents


Publication number
CN105610814A
CN105610814A
Authority
CN
China
Prior art keywords
data
level
cache device
buffer
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510993048.1A
Other languages
Chinese (zh)
Other versions
CN105610814B (en)
Inventor
夏杰
徐子轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Centec Communications Co Ltd
Original Assignee
Centec Networks Suzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Centec Networks Suzhou Co Ltd filed Critical Centec Networks Suzhou Co Ltd
Priority to CN201510993048.1A priority Critical patent/CN105610814B/en
Publication of CN105610814A publication Critical patent/CN105610814A/en
Application granted granted Critical
Publication of CN105610814B publication Critical patent/CN105610814B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22 Parsing or analysis of headers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements

Abstract

The invention provides a method and a system for reducing packet forwarding delay. The method comprises: receiving data from each data channel and storing them into a first-level buffer; generating a first-level scheduling request command and/or a first-level parse request command for the data in the first-level buffer, according to the buffer's current storage state and the attribute information carried by the data; if a first-level scheduling request command is received, transferring all data currently in the first-level buffer to a first-level scheduling buffer; if a first-level parse request command is received, parsing the data currently in the first-level buffer and storing the parse result into a first-level parse buffer; and matching the data stored in the first-level scheduling buffer against the parse results stored in the first-level parse buffer, deciding from each datum's attributes and the match result whether to perform second-level scheduling on the current data. The method and system reduce packet forwarding delay.

Description

Method and system for reducing packet forwarding delay
Technical field
The present invention relates to the field of network communications, and in particular to a method and system for reducing packet forwarding delay.
Background technology
Transmission delay is one of the key parameters of a network chip, and low delay is an important reference target in network chip design. In a high-density network chip, multiple channels converge into a data reception module; the data are scheduled into the chip's interior for processing, and the processing result is finally stored in a data storage module together with the data themselves.
In the traditional flow shown in Fig. 1, the data reception module connects to multiple external data channel groups, and the control and storage logic of the groups are mutually independent. Each data channel group contains multiple data channels of bit width W and uses a 1-read/1-write buffer memory of bit width n × W; splicing logic concatenates n successive W-wide words from a channel into a single word of width n × W, which can be expressed as: Data_{n×W} = {Data_1, Data_2, … Data_n}.
When the data in a channel reach the end of a packet, or a full n × W word has been assembled, a request to schedule those data is generated. When arbitration selects the request, the data are read out of the memory and sent to the data storage module. If the scheduled data lie at the start of a packet, they must additionally be sent to the packet-parsing module; once the parse result has been generated, it is written into the next-stage memory, together with the data buffered in memory, to await further processing. Because the packet-parsing delay is long, the data must first be buffered at the entrance of the data storage module.
In this architecture, the time from when a packet of length n × W starts to be received until it is written into the data storage module's memory is:
T_total = T_convergence + T_rx-module-internal + T_parse-module-internal + T_storage-module-internal.
Dissecting this architecture shows that the forwarding delay of data in the existing framework is long.
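The delay composition can be illustrated with a toy model (not from the patent; all cycle counts and function names are invented): when parsing sits serially on the critical path, its full latency adds to the total, whereas the scheme described below overlaps parsing with first-level scheduling so that, roughly, only the slower of the two stages contributes.

```python
# Illustrative only: serialized versus overlapped parsing on the critical path.
# All delay figures are made-up placeholder cycle counts.

def serial_delay(t_converge, t_rx, t_parse, t_store):
    """Traditional flow: every stage adds to the critical path."""
    return t_converge + t_rx + t_parse + t_store

def overlapped_delay(t_converge, t_rx, t_parse, t_store):
    """Sketch of the proposed flow: parsing runs in parallel with first-level
    scheduling, so only the slower of the two contributes."""
    return t_converge + max(t_rx, t_parse) + t_store
```

With placeholder values of 4, 8, 16, and 4 cycles, the serial flow costs 32 cycles while the overlapped flow costs 24, which is the kind of saving the invention targets.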
Summary of the invention
The object of the present invention is to provide a method and system for reducing packet forwarding delay.
To achieve the above object, an embodiment of the present invention provides a method for reducing packet forwarding delay, the method comprising:
S1: decomposing each of multiple packets into several data segments of identical bit width, the segments of each packet being transmitted serially over one of multiple data channels;
S2: receiving the data from each data channel and storing them into a first-level buffer, whose bit width is an integral multiple of each data channel's bit width;
S3: according to the current storage state of the first-level buffer and the attribute information carried by the data, generating a first-level scheduling request command and/or a first-level parse request command for the data in the first-level buffer;
the attribute information carried by the data indicates whether a datum is packet-header information or packet-tail information;
S4: if a first-level scheduling request command is received, transferring all data currently in the first-level buffer to a first-level scheduling buffer;
if a first-level parse request command is received, parsing the data currently in the first-level buffer and storing the parse result into a first-level parse buffer;
S5: matching the data stored in the first-level scheduling buffer against the parse results of the data stored in the first-level parse buffer, and deciding from each datum's attributes and the match result whether to perform second-level scheduling on the current data.
As a further improvement of an embodiment of the present invention, the method further comprises:
configuring a scheduling counter;
the scheduling counter records the number of to-be-scheduled data in the first-level buffer: each time a to-be-scheduled datum is added to the first-level buffer, the counter is incremented by 1, and after a first-level scheduling request command is produced, the counter is cleared to zero.
As a further improvement of an embodiment of the present invention, step S3 specifically comprises:
if, after the current datum is stored into the first-level buffer, the total number of unscheduled data in the first-level buffer equals the system-preset scheduling threshold, generating a first-level scheduling request command;
if, after the current datum is stored into the first-level buffer, the total number of unscheduled data in the first-level buffer is less than the system-preset scheduling threshold, judging whether the current datum is packet-tail information;
if so, generating a first-level scheduling request command;
if not, keeping the current storage state of the first-level buffer.
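The scheduling-request decision above can be condensed into a small predicate. This is a sketch under assumed names; the threshold value and function signature are illustrative, not from the patent.

```python
# First-level scheduling-request decision from step S3 (sketch, assumed names):
# flush when the unscheduled count reaches the preset scheduling threshold,
# or early when the newly stored word is packet-tail information.

SCHED_THRESHOLD = 3  # illustrative: L1-buffer bit width / channel input bit width

def should_request_schedule(unscheduled_count: int, is_tail: bool) -> bool:
    if unscheduled_count == SCHED_THRESHOLD:
        return True            # buffer full: request scheduling now
    if is_tail:
        return True            # packet tail: flush even though not full
    return False               # otherwise keep the buffer state unchanged
```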
As a further improvement of an embodiment of the present invention, step S3 specifically comprises:
M1: after the current datum is stored into the first-level buffer, judging whether the current datum is packet-header information;
if so, performing step M2;
if not, keeping the current storage state of the first-level buffer;
M2: judging whether the total number of unscheduled data stored in the current first-level buffer is less than the system-preset parse threshold;
if so, entering step M3;
if not, entering step M4;
M3: judging whether the current datum is the last datum of the packet;
if so, generating a first-level parse request command;
if not, keeping the current storage state of the first-level buffer;
M4: judging whether the total number of unscheduled data stored in the current first-level buffer equals the system-preset parse threshold;
if so, generating a first-level parse request command;
if not, keeping the current storage state of the first-level buffer.
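Steps M1 through M4 amount to one predicate over the word's attributes and the unscheduled count. A sketch with assumed names and an illustrative threshold; only header words are ever parsed:

```python
# First-level parse-request decision, steps M1-M4 (sketch, assumed names).

PARSE_THRESHOLD = 2  # illustrative preset; typically below the scheduling threshold

def should_request_parse(is_header: bool,
                         unscheduled_count: int,
                         is_last_of_packet: bool) -> bool:
    if not is_header:                            # M1: non-header never parses
        return False
    if unscheduled_count < PARSE_THRESHOLD:      # M2 true -> M3
        return is_last_of_packet
    return unscheduled_count == PARSE_THRESHOLD  # M2 false -> M4
```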
As a further improvement of an embodiment of the present invention, step S5 further comprises:
if a datum stored into the first-level scheduling buffer is packet-header information, searching the first-level parse buffer for a parse result that matches the current datum in the first-level scheduling buffer;
if one exists, performing second-level scheduling on the current datum together with its matching parse result;
if not, keeping the current storage states of the first-level scheduling buffer and the first-level parse buffer;
if a datum stored into the first-level scheduling buffer is not packet-header information, performing second-level scheduling on the data in the first-level scheduling buffer directly;
if the first-level parse buffer has stored the parse result of header information, searching the first-level scheduling buffer to judge whether it holds the header information matching that parse result;
if so, performing second-level scheduling on the current header information together with its matching parse result;
if not, keeping the current storage states of the first-level scheduling buffer and the first-level parse buffer.
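A minimal sketch of this pairing step follows. The dict-backed parse buffer and the per-channel tag are assumptions for illustration; the patent specifies only the matching behavior, not a data structure.

```python
# Pairing logic for second-level scheduling (sketch, assumed names):
# header data wait until their parse result arrives (matched by a tag);
# non-header data are dispatched immediately.

def try_dispatch(entry, parse_results):
    """entry: (tag, is_header, payload); parse_results: dict tag -> parse result.
    Returns (payload, parse result or None) when dispatch happens,
    or None to keep both buffers unchanged."""
    tag, is_header, payload = entry
    if not is_header:
        return (payload, None)                    # dispatch directly
    if tag in parse_results:
        return (payload, parse_results.pop(tag))  # dispatch together
    return None                                   # parse result not ready yet
```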
To achieve the above object, an embodiment of the present invention provides a system for reducing packet forwarding delay, the system comprising: a data transmission module, a data reception module, a data processing module, and a data storage module;
the data storage module comprises: a first-level buffer, a first-level scheduling buffer, and a first-level parse buffer;
the data transmission module is used for: decomposing each of multiple packets into several data segments of identical bit width, the segments of each packet being transmitted serially over one of multiple data channels;
the data reception module is used for: receiving the data from each data channel and storing them into the first-level buffer, whose bit width is an integral multiple of each data channel's bit width;
the data processing module, according to the current storage state of the first-level buffer and the attribute information carried by the data, generates a first-level scheduling request command and/or a first-level parse request command for the data in the first-level buffer;
the attribute information carried by the data indicates whether a datum is packet-header information or packet-tail information;
if a first-level scheduling request command is received, all data currently in the first-level buffer are transferred to the first-level scheduling buffer;
if a first-level parse request command is received, the data currently in the first-level buffer are parsed, and the parse result is stored into the first-level parse buffer;
the data stored in the first-level scheduling buffer are matched against the parse results of the data stored in the first-level parse buffer, and whether to perform second-level scheduling on the current data is decided from each datum's attributes and the match result.
As a further improvement of an embodiment of the present invention, the system further comprises: a scheduling counter;
the scheduling counter records the number of to-be-scheduled data in the first-level buffer: each time a to-be-scheduled datum is added to the first-level buffer, the counter is incremented by 1, and after a first-level scheduling request command is produced, the counter is cleared to zero.
As a further improvement of an embodiment of the present invention, the data processing module is further used for:
generating a first-level scheduling request command if, after the current datum is stored into the first-level buffer, the total number of unscheduled data in the first-level buffer equals the system-preset scheduling threshold;
judging whether the current datum is packet-tail information if, after the current datum is stored into the first-level buffer, the total number of unscheduled data in the first-level buffer is less than the system-preset scheduling threshold;
if so, generating a first-level scheduling request command;
if not, keeping the current storage state of the first-level buffer.
As a further improvement of an embodiment of the present invention, the data processing module is further used for:
after the current datum is stored into the first-level buffer, if the current datum is packet-header information, judging whether the total number of unscheduled data stored in the current first-level buffer is less than the system-preset parse threshold;
if so, continuing to judge whether the current datum is the last datum of the packet: if it is, generating a first-level parse request command, otherwise keeping the current storage state of the first-level buffer;
if not, continuing to judge whether the total number of unscheduled data stored in the current first-level buffer equals the system-preset parse threshold: if it does, generating a first-level parse request command, otherwise keeping the current storage state of the first-level buffer.
As a further improvement of an embodiment of the present invention, the data processing module is further used for:
if a datum stored into the first-level scheduling buffer is packet-header information, searching the first-level parse buffer for a parse result that matches the current datum in the first-level scheduling buffer;
if one exists, performing second-level scheduling on the current datum together with its matching parse result;
if not, keeping the current storage states of the first-level scheduling buffer and the first-level parse buffer;
if a datum stored into the first-level scheduling buffer is not packet-header information, performing second-level scheduling on the data in the first-level scheduling buffer directly;
if the first-level parse buffer has stored the parse result of header information, searching the first-level scheduling buffer to judge whether it holds the header information matching that parse result;
if so, performing second-level scheduling on the current header information together with its matching parse result;
if not, keeping the current storage states of the first-level scheduling buffer and the first-level parse buffer.
Compared with the prior art, the beneficial effect of the invention is as follows: the method and system for reducing packet forwarding delay provided by the invention reduce the time necessarily spent on convergence and packet parsing, thereby reducing packet forwarding delay.
Brief description of the drawings
Fig. 1 is a diagram of the traditional data storage/splicing architecture described in the background of the present invention;
Fig. 2 is a flowchart of the method for reducing packet forwarding delay in an embodiment of the present invention;
Fig. 3 is a module diagram of the system for reducing packet forwarding delay in an embodiment of the present invention.
Detailed description of the invention
The present invention is described below with reference to the embodiments shown in the drawings. These embodiments, however, do not limit the present invention; any structural, methodological, or functional changes made by those of ordinary skill in the art according to these embodiments are all included within the protection scope of the present invention.
As shown in Fig. 2, in one embodiment of the present invention, the method for reducing packet forwarding delay comprises:
S1: decomposing each of multiple packets into several data segments of identical bit width, the segments of each packet being transmitted serially over one of multiple data channels.
In a concrete example of the present invention, a switch has multiple transmission interfaces, each corresponding to one data channel; each packet is decomposed into several data segments of identical bit width, and the segments of each packet pass serially through one data channel.
In a concrete example of the present invention, all data channels have identical bit width, and the width of the segments a packet is decomposed into equals the width of its corresponding data channel.
Accordingly, every datum carries attribute information, determined by the datum's position within the packet, where the attribute information indicates whether a datum is packet-header information or packet-tail information. It will be understood that the number of data under each attribute in a packet is not specifically restricted; it can be determined from information such as the packet length, and is not detailed here.
Further, in an embodiment of the present invention, the method further comprises:
S2: receiving the data from each data channel and storing them into the first-level buffer, whose bit width is an integral multiple of each data channel's bit width.
In a concrete example of the present invention, the data are written into the first-level buffer in their order of arrival; this is not detailed here.
Further, in an embodiment of the present invention, the method further comprises:
S3: according to the current storage state of the first-level buffer and the attribute information carried by the data, generating a first-level scheduling request command and/or a first-level parse request command for the data in the first-level buffer;
the attribute information carried by the data indicates whether a datum is packet-header information or packet-tail information.
In a preferred embodiment of the present invention, the method further comprises: configuring a scheduling counter;
the scheduling counter records the number of to-be-scheduled data in the first-level buffer: each time a to-be-scheduled datum is added to the first-level buffer, the counter is incremented by 1, and after a first-level scheduling request command is produced, the counter is cleared to zero.
Of course, in other embodiments of the present invention, the number of to-be-scheduled data currently stored in the first-level buffer can be recorded in several other ways; these are not detailed here.
In a preferred embodiment of the invention, the judgment method in step S3 for generating a first-level scheduling request command specifically comprises:
if, after the current datum is stored into the first-level buffer, the total number of unscheduled data in the first-level buffer equals the system-preset scheduling threshold, generating a first-level scheduling request command;
if, after the current datum is stored into the first-level buffer, the total number of unscheduled data in the first-level buffer is less than the system-preset scheduling threshold, judging whether the current datum is packet-tail information;
if so, generating a first-level scheduling request command;
if not, keeping the current storage state of the first-level buffer.
Here the system-preset scheduling threshold is the ratio of the first-level buffer's total bit width to the data input bit width.
For example: the packet length is 10 bits, the entrance bit width of each data channel is 1 bit, and the buffer bit width is 3 bits, so the system-preset scheduling threshold = 3/1 = 3.
In this example, the packet is divided into 10 data words, each of bit width 1 bit; the first 3 words of the packet are header information, the middle 4 words are packet-middle information, and the remaining 3 words are tail information.
Further, after the 1st word is stored into the first-level buffer, the current value of the scheduling counter shows that the total number of unscheduled data is 1, and a further judgment shows the current word is not packet-tail information; therefore the current storage state of the first-level buffer is kept, i.e. no first-level scheduling request command is generated.
After the 3rd word of the packet is stored into the first-level buffer, the current value of the scheduling counter shows that the total number of unscheduled data is 3; at this point a first-level scheduling request command is generated, and the storage state of the first-level buffer changes according to that command, as described in detail below.
After the 10th word of the packet is stored into the first-level buffer, the current value of the scheduling counter shows that the total number of unscheduled data is 1, and a further judgment shows the current word is packet-tail information; at this point a first-level scheduling request command is generated, and the storage state of the first-level buffer changes according to that command, as described in detail below.
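This walkthrough can be replayed in code. A sketch: the patent traces only words 1, 3, and 10, but applying the same rule to every word also produces flushes at words 6, 8, and 9, since words 8 through 10 are each tail information; that is why word 10 arrives with a counter of only 1.

```python
# Simulates the 10-word example: scheduling threshold 3, words 1-3 header,
# 4-7 middle, 8-10 tail. A first-level scheduling request fires when the
# unscheduled count reaches the threshold or a tail word arrives, and the
# scheduling counter is cleared after each request.

SCHED_THRESHOLD = 3

def schedule_points(attrs):
    requests, count = [], 0
    for pos, attr in enumerate(attrs, start=1):
        count += 1
        if count == SCHED_THRESHOLD or attr == "tail":
            requests.append(pos)
            count = 0
    return requests

words = ["head"] * 3 + ["mid"] * 4 + ["tail"] * 3
```

`schedule_points(words)` yields requests at words 3, 6, 8, 9, and 10, consistent with the traced states above.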
In a concrete example of the present invention, the judgment method in step S3 for generating a first-level parse request command specifically comprises:
M1: after the current datum is stored into the first-level buffer, judging whether the current datum is packet-header information;
if so, performing step M2;
if not, keeping the current storage state of the first-level buffer;
M2: judging whether the total number of unscheduled data stored in the current first-level buffer is less than the system-preset parse threshold;
if so, entering step M3;
if not, entering step M4;
M3: judging whether the current datum is the last datum of the packet;
if so, generating a first-level parse request command;
if not, keeping the current storage state of the first-level buffer;
M4: judging whether the total number of unscheduled data stored in the current first-level buffer equals the system-preset parse threshold;
if so, generating a first-level parse request command;
if not, keeping the current storage state of the first-level buffer.
Here the system-preset parse threshold is a system-preset value that can be configured according to the length required for packet parsing; generally, the system-preset parse threshold is smaller than the system-preset scheduling threshold. This is not detailed here.
Continuing the example above, suppose the system-preset parse threshold is 2.
In this example, after the 1st word is stored into the first-level buffer, it is judged to be header information; the current value of the scheduling counter shows that the total number of unscheduled data is 1, which is less than the system-preset parse threshold. At the same time, this word is judged not to be the last word of the packet; therefore the current storage state of the first-level buffer is kept and no first-level parse command is generated. Combined with the example above, after the 1st word is stored into the first-level buffer, no first-level scheduling command is generated either.
After the 2nd word is stored into the first-level buffer, it is judged to be header information; the current value of the scheduling counter shows that the total number of unscheduled data is 2, which equals the system-preset parse threshold. At this point a first-level parse request command is generated and the storage state of the first-level buffer changes, as described in detail below. Combined with the example above, after the 2nd word is stored into the first-level buffer, no first-level scheduling command is generated; only the first-level parse command is generated.
After the 3rd word is stored into the first-level buffer, it is judged to be header information; the current value of the scheduling counter shows that the total number of unscheduled data is 3, which is greater than the system-preset parse threshold; therefore the current storage state of the first-level buffer is kept and no first-level parse command is generated. Combined with the example above, after the 3rd word is stored into the first-level buffer, only the first-level scheduling command is generated, not the first-level parse command.
It will be understood that the above example gives a concrete walkthrough for only part of the data in the packet; the storage states for the other data in the packet can be derived from the description above and are not detailed here.
Further, in an embodiment of the present invention, the method further comprises:
S4: if a first-level scheduling request command is received, transferring all data currently in the first-level buffer to the first-level scheduling buffer;
if a first-level parse request command is received, parsing the data currently in the first-level buffer and storing the parse result into the first-level parse buffer.
In step S4, after a first-level scheduling request command is received, the data are transferred to the first-level scheduling buffer; that is, the already-scheduled entries in the first-level buffer are emptied. Of course, if the scheduled data in the first-level buffer are header information, the emptying must wait until the parse result of those data has been stored into the first-level parse buffer. This saves storage space in the first-level buffer and benefits data transmission; it is not detailed here.
In an embodiment of the present invention, if the scheduling and the parsing of data had to be satisfied simultaneously, the first-level buffer would need to be configured as a 2R1W type, i.e. two reads and one write per cycle; considered from the chip's area and bandwidth-handling capacity, this mode is very wasteful.
In a preferred embodiment of the present invention, a ping-pong operation is applied to the same data, i.e. a preset rule for the data-processing procedure.
The preset rule is: after the current datum is stored into the first-level buffer, if both a first-level scheduling request command and a first-level parse request command are generated for the same datum at the same time, one of the two is processed with priority.
In the preferred embodiment of the present invention, if both commands are generated for the same datum at the same time, the first-level scheduling request command is processed first; this is not detailed here.
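The priority rule can be sketched as a one-cycle arbiter. Names are assumed; the patent states only that the scheduling request is served first when both commands target the same word, which lets the first-level buffer stay a plain 1R1W memory.

```python
# Arbitration for a 1R1W first-level buffer (sketch): when a word raises both
# a scheduling request and a parse request in the same cycle, serve the
# scheduling request now and defer the parse request to the next round.

def arbitrate(pending):
    """pending: subset of {"schedule", "parse"} raised this cycle.
    Returns (command served now, set of deferred commands)."""
    if "schedule" in pending and "parse" in pending:
        return "schedule", {"parse"}   # scheduling wins; parsing waits
    if pending:
        return next(iter(pending)), set()
    return None, set()
```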
Further, in an embodiment of the present invention, the method further comprises:
S5: comparing the data stored in the first-level scheduling buffer with the parse results stored in the first-level parse buffer, and deciding from each datum's attributes whether to perform second-level scheduling on the current data.
In a preferred embodiment of the present invention, step S5 specifically comprises:
if a datum stored into the first-level scheduling buffer is packet-header information, searching the first-level parse buffer for a parse result that matches the current datum in the first-level scheduling buffer;
if one exists, performing second-level scheduling on the current datum together with its matching parse result;
if not, keeping the current storage states of the first-level scheduling buffer and the first-level parse buffer;
if a datum stored into the first-level scheduling buffer is not packet-header information, performing second-level scheduling on the data in the first-level scheduling buffer directly;
if the first-level parse buffer has stored the parse result of header information, searching the first-level scheduling buffer to judge whether it holds the header information matching that parse result;
if so, performing second-level scheduling on the current header information together with its matching parse result;
if not, keeping the current storage states of the first-level scheduling buffer and the first-level parse buffer.
Step S5 as described above solves the problem of data arriving out of order at second-level scheduling. In step S5, during packet transmission each piece of header information is accompanied by channel-pairing logic: when header information is transferred into the first-level scheduling buffer, or its parse result is stored into the first-level parse buffer, the pairing logic carried by the header information is used to search whichever of the first-level scheduling buffer and the first-level parse buffer received its entry later; when the corresponding information is matched, second-level scheduling is performed.
It will be understood that because a parse result is produced only for header information, when the data received by the first-level scheduling buffer are not header information, there is no need to search the first-level parse buffer; the current data are second-level scheduled directly. This is not detailed here.
As shown in Fig. 3, in one embodiment of the present invention, the system for reducing packet forwarding delay comprises: a data transmission module 100, a data reception module 200, a data processing module 300, a data storage module 400, and a scheduling counter 500.
The data storage module 400 comprises: a first-level buffer 401, a first-level scheduling buffer 403, and a first-level parsing buffer 405.
In one embodiment of the present invention, the data transmission module 100 is configured to decompose each of multiple packets into several data segments of identical bit width, the segments of each packet being transmitted serially over one of multiple data channels.
In one concrete example of the present invention, a switch has multiple transmission interfaces, each transmission interface corresponds to one data channel, and each data channel group contains multiple data channels. Each packet is decomposed into several data segments of identical bit width, and the segments within each packet are transmitted serially over one of the multiple data channels.
In this concrete example of the present invention, each data channel has the same bit width, and the width of the data segments into which a packet is decomposed equals the width of its corresponding data channel.
Accordingly, each piece of data carries attribute information determined by the position of that data within the packet, wherein the attribute information indicates whether the data is header information or tail information. It should be understood that the number of data segments of each attribute within a packet is not specifically limited; it may be determined from information such as the packet length, and is not described in further detail here.
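The decomposition and attribute tagging described above might be sketched as follows in Python (a hypothetical sketch: the segment width, the zero-byte padding, and the attribute names are illustrative assumptions, not taken from the patent):

```python
# Hypothetical sketch of step S1: each packet is split into equal-width
# segments, and each segment carries an attribute marking its position in
# the packet (header or tail).

def decompose(packet: bytes, seg_width: int):
    """Split a packet into fixed-width segments tagged with position attributes."""
    segments = []
    for off in range(0, len(packet), seg_width):
        # pad the last segment up to the full segment width
        chunk = packet[off:off + seg_width].ljust(seg_width, b"\x00")
        attr = {
            "is_header": off == 0,                      # first segment carries header info
            "is_tail": off + seg_width >= len(packet),  # last segment is the packet tail
        }
        segments.append((chunk, attr))
    return segments
```

For example, a ten-byte packet split at a four-byte segment width yields three segments, the last one padded; only the first is marked as header and only the last as tail.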
Further, in one embodiment of the present invention, the data reception module 200 is configured to receive the data arriving from each data channel and store it into the first-level buffer 401, the bit width of the first-level buffer being an integral multiple of the bit width of each data channel.
In one concrete example of the present invention, the data is written into the first-level buffer 401 in the order in which it arrives; this is not described in further detail here.
Further, in one embodiment of the present invention, the data processing module 300 is configured to generate, according to the current storage state of the first-level buffer 401 and the attribute information carried by the data, a first-level scheduling request command and/or a first-level parsing request command for the data in the first-level buffer 401.
The attribute information carried by the data indicates whether the data is header information or tail information.
In one embodiment of the present invention, the scheduling counter 500 is configured to record the amount of data awaiting scheduling in the first-level buffer 401. Each time a piece of data to be scheduled is added to the first-level buffer 401, the data processing module 300 increments the value of the scheduling counter 500 by 1; after a first-level scheduling request command has been generated, the data processing module 300 clears the scheduling counter 500 to zero.
Of course, in other embodiments of the present invention, the amounts of data awaiting scheduling and awaiting parsing currently stored in the first-level buffer 401 may be recorded in various other ways, which are not described in further detail here.
In a preferred embodiment of the invention, the data processing module 300 is further configured to: if, after the current data is stored into the first-level buffer 401, the total amount of unscheduled data stored in the first-level buffer 401 equals the system-preset scheduling threshold, generate a first-level scheduling request command;
If, after the current data is stored into the first-level buffer 401, the total amount of unscheduled data stored in the first-level buffer 401 is less than the system-preset scheduling threshold, judge whether the current data is tail information;
If so, generate a first-level scheduling request command;
If not, keep the current storage state of the first-level buffer 401.
Here the scheduling threshold is a system-preset value, namely the ratio of the total bit width of the first-level buffer 401 to the data input bit width.
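The first-level scheduling decision above, together with the behaviour of scheduling counter 500, can be sketched as follows (a minimal sketch under assumptions: the threshold value of 4 and the list-based buffer are illustrative, not taken from the patent):

```python
SCHED_THRESHOLD = 4  # system-preset scheduling threshold (illustrative value)

class FirstLevelBuffer:
    """Sketch of the first-level buffer plus its scheduling counter (cf. counter 500)."""

    def __init__(self):
        self.data = []
        self.unscheduled = 0  # scheduling counter: amount of unscheduled data

    def store(self, segment, is_tail: bool) -> bool:
        """Store one segment; return True when a first-level scheduling
        request command should be generated."""
        self.data.append(segment)
        self.unscheduled += 1
        if self.unscheduled == SCHED_THRESHOLD or is_tail:
            self.unscheduled = 0  # counter is cleared once the request is issued
            return True
        return False  # otherwise the buffer keeps its current state
```

A tail segment triggers a scheduling request even before the threshold is reached, which is what keeps a short packet from waiting in the buffer.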
In one concrete example of the present invention, the data processing module 300 is further configured to:
After the current data is stored into the first-level buffer, if the current data is header information, judge whether the total amount of unscheduled data stored in the current first-level buffer is less than the system-preset parsing threshold;
If so, further judge whether the current data is the last data of the packet; if it is, generate a first-level parsing request command, otherwise keep the current storage state of the first-level buffer;
If not, further judge whether the total amount of unscheduled data stored in the current first-level buffer equals the system-preset parsing threshold; if it does, generate a first-level parsing request command, otherwise keep the current storage state of the first-level buffer.
Further, in one embodiment of the present invention, the data processing module 300 is further configured to: if a first-level scheduling request command is received, transfer all the data currently in the first-level buffer 401 to the first-level scheduling buffer 403;
If a first-level parsing request command is received, parse the data currently in the first-level buffer 401 and store the analysis result into the first-level parsing buffer 405.
After receiving a first-level scheduling request command, the data processing module 300 transfers the data to the first-level scheduling buffer 403, i.e. the scheduled portion of the first-level buffer 401 is emptied. Of course, if the scheduled data in the first-level buffer 401 is header information, the scheduled portion of the first-level buffer 401 is emptied only after the analysis result of that data has been stored into the first-level parsing buffer 405. This saves storage space in the first-level buffer 401 and facilitates data transmission; it is not described in further detail here.
In one embodiment of the present invention, if scheduling and parsing of the data had to be served simultaneously, the first-level buffer 401 would need to be configured as a 2R1W (two-read, one-write) memory; in that mode, considerable waste arises in terms of chip area and bit-width processing capability.
In a preferred embodiment of the present invention, a ping-pong operation is therefore applied to the same data, i.e. the data is processed according to a preset rule.
The preset rule is: after the current data is stored into the first-level buffer 401, if both a first-level scheduling request command and a first-level parsing request command are generated for the same data simultaneously, the data processing module 300 processes either the first-level scheduling request command or the first-level parsing request command first.
In the preferred embodiment of the present invention, if both a first-level scheduling request command and a first-level parsing request command are generated for the same data simultaneously, the data processing module 300 processes the first-level scheduling request command first; this is not described in further detail here.
Further, in one embodiment of the present invention, the data processing module 300 is further configured to compare the data stored in the first-level scheduling buffer 403 with the data stored in the first-level parsing buffer 405, and to determine according to the attribute of each piece of data whether the current data is sent to second-level scheduling.
In the preferred embodiment of the present invention, the data processing module 300 is specifically configured to:
If the data stored into the first-level scheduling buffer 403 is header information, search the first-level parsing buffer 405 to determine whether it holds an analysis result matching the current data in the first-level scheduling buffer 403;
If so, send the current data and its matching analysis result to second-level scheduling together;
If not, keep the current storage states of the first-level scheduling buffer 403 and the first-level parsing buffer 405;
If the data stored into the first-level scheduling buffer 403 is not header information, send the data in the first-level scheduling buffer 403 directly to second-level scheduling;
If the first-level parsing buffer 405 has stored the analysis result of header information, search the first-level scheduling buffer 403 to determine whether it holds header information matching that analysis result;
If so, send the current header information and its matching analysis result to second-level scheduling together;
If not, keep the current storage states of the first-level scheduling buffer 403 and the first-level parsing buffer 405.
As described above, in order to solve the problem of data arriving out of order at second-level scheduling, during packet transmission each piece of header information may additionally carry channel pairing logic. When header information is transferred to the first-level scheduling buffer 403, or when its analysis result is stored into the first-level parsing buffer 405, whichever of the two buffers is filled later uses the pairing logic carried by the header information to search the other; once the corresponding information is matched, second-level scheduling is performed.
It should be understood that, since analysis results are produced only for header information, when the data received by the first-level scheduling buffer 403 is not header information there is no need to search the first-level parsing buffer 405, and the current data is sent directly to second-level scheduling; this is not described in further detail here.
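The matching behaviour of the first-level scheduling buffer 403 and the first-level parsing buffer 405 described above can be sketched as follows (a hypothetical sketch: the dict-based buffers and the integer key standing in for the channel pairing logic are illustrative assumptions):

```python
# Hypothetical sketch of the second-level scheduling check: a header segment in
# the first-level scheduling buffer is released only together with its matching
# analysis result from the first-level parsing buffer; non-header segments pass
# straight through.

def second_level_schedule(sched_buf: dict, parse_buf: dict, key: int, is_header: bool):
    """Return (segment, analysis_result) when scheduling may proceed, else None."""
    if not is_header:
        return (sched_buf.pop(key), None)  # non-header data is scheduled directly
    if key in sched_buf and key in parse_buf:
        # header information and its analysis result are paired and scheduled together
        return (sched_buf.pop(key), parse_buf.pop(key))
    return None  # otherwise both buffers keep their current state
```

Whichever side arrives last completes the pair, so the same check can run both when a header is transferred in and when its analysis result is stored.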
Of course, in a further embodiment of the present invention, the data storage module 400 also comprises a second-level buffer 407, which is used to store the data after second-level scheduling together with its corresponding analysis result; this is not described in further detail here.
Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working process of the system described above may be found by reference to the corresponding process in the foregoing method embodiment and is not repeated here.
In summary, the method and system for reducing packet forwarding delay provided by the present invention reduce the time required to aggregate and parse a packet, and thus reduce the packet forwarding delay.
For convenience of description, the above apparatus has been described with its functions divided into various modules. Of course, when implementing the present application, the functions of the modules may be realized in one or more pieces of software and/or hardware.
From the above description of the embodiments, those skilled in the art will clearly understand that the present application can be realized by means of a general-purpose hardware platform. Based on this understanding, the part of the present application's technical solution that in essence contributes beyond the prior art may be embodied in the form of a switch product.
The apparatus embodiment described above is merely schematic. The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the present embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
The present application may be described in the general context of computer-executable instructions, such as program modules. Generally, program modules include routines, programs, objects, components, data structures and the like that perform particular tasks or implement particular abstract data types. The present application may also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including storage devices.
It should be understood that, although this description is organized by embodiment, each embodiment does not necessarily contain only one independent technical solution; this manner of narration is adopted merely for clarity. Those skilled in the art should treat the description as a whole; the technical solutions in the embodiments may also be appropriately combined to form other implementations that those skilled in the art can understand.
The detailed descriptions listed above are merely illustrations of feasible embodiments of the present invention. They are not intended to limit the scope of the present invention; any equivalent embodiment or modification that does not depart from the technical spirit of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A method for reducing packet forwarding delay, characterized in that the method comprises:
S1, decomposing each of multiple packets into several data segments of identical bit width, the segments of each packet being transmitted serially over one of multiple data channels;
S2, receiving the data arriving from each data channel and storing it into a first-level buffer, the bit width of the first-level buffer being an integral multiple of the bit width of each data channel;
S3, generating, according to the current storage state of the first-level buffer and the attribute information carried by the data, a first-level scheduling request command and/or a first-level parsing request command for the data in the first-level buffer;
the attribute information carried by the data indicating whether the data is header information or tail information;
S4, if a first-level scheduling request command is received, transferring all the data currently in the first-level buffer to a first-level scheduling buffer;
if a first-level parsing request command is received, parsing the data currently in the first-level buffer and storing the analysis result into a first-level parsing buffer;
S5, matching the data stored in the first-level scheduling buffer against the analysis results of the data stored in the first-level parsing buffer, and determining according to the attribute of each piece of data and the matching result whether the current data is sent to second-level scheduling.
2. The method for reducing packet forwarding delay according to claim 1, characterized in that the method further comprises: configuring a scheduling counter;
the scheduling counter being used to record the amount of data awaiting scheduling in the first-level buffer; each time a piece of data to be scheduled is added to the first-level buffer, the value of the scheduling counter is incremented by 1, and after a first-level scheduling request command is generated, the scheduling counter is cleared to zero.
3. The method for reducing packet forwarding delay according to claim 1, characterized in that step S3 specifically comprises:
if, after the current data is stored into the first-level buffer, the total amount of unscheduled data stored in the first-level buffer equals the system-preset scheduling threshold, generating a first-level scheduling request command;
if, after the current data is stored into the first-level buffer, the total amount of unscheduled data stored in the first-level buffer is less than the system-preset scheduling threshold, judging whether the current data is tail information;
if so, generating a first-level scheduling request command;
if not, keeping the current storage state of the first-level buffer.
4. The method for reducing packet forwarding delay according to claim 1, characterized in that step S3 specifically comprises:
M1, after the current data is stored into the first-level buffer, judging whether the current data is header information;
if so, performing step M2;
if not, keeping the current storage state of the first-level buffer;
M2, judging whether the total amount of unscheduled data stored in the current first-level buffer is less than the system-preset parsing threshold;
if so, proceeding to step M3;
if not, proceeding to step M4;
M3, judging whether the current data is the last data of the packet;
if so, generating a first-level parsing request command;
if not, keeping the current storage state of the first-level buffer;
M4, judging whether the total amount of unscheduled data stored in the current first-level buffer equals the system-preset parsing threshold;
if so, generating a first-level parsing request command;
if not, keeping the current storage state of the first-level buffer.
5. The method for reducing packet forwarding delay according to claim 1, characterized in that step S5 further comprises:
if the data stored into the first-level scheduling buffer is header information, searching the first-level parsing buffer to determine whether it holds an analysis result matching the current data in the first-level scheduling buffer;
if so, sending the current data and its matching analysis result to second-level scheduling together;
if not, keeping the current storage states of the first-level scheduling buffer and the first-level parsing buffer;
if the data stored into the first-level scheduling buffer is not header information, sending the data in the first-level scheduling buffer directly to second-level scheduling;
if the first-level parsing buffer has stored the analysis result of header information, searching the first-level scheduling buffer to determine whether it holds header information matching that analysis result;
if so, sending the current header information and its matching analysis result to second-level scheduling together;
if not, keeping the current storage states of the first-level scheduling buffer and the first-level parsing buffer.
6. A system for reducing packet forwarding delay, characterized in that the system comprises:
a data transmission module, a data reception module, a data processing module and a data storage module;
the data storage module comprising: a first-level buffer, a first-level scheduling buffer, and a first-level parsing buffer;
the data transmission module being used to decompose each of multiple packets into several data segments of identical bit width, the segments of each packet being transmitted serially over one of multiple data channels;
the data reception module being used to receive the data arriving from each data channel and store it into the first-level buffer, the bit width of the first-level buffer being an integral multiple of the bit width of each data channel;
the data processing module being used to generate, according to the current storage state of the first-level buffer and the attribute information carried by the data, a first-level scheduling request command and/or a first-level parsing request command for the data in the first-level buffer;
the attribute information carried by the data indicating whether the data is header information or tail information;
if a first-level scheduling request command is received, to transfer all the data currently in the first-level buffer to the first-level scheduling buffer;
if a first-level parsing request command is received, to parse the data currently in the first-level buffer and store the analysis result into the first-level parsing buffer;
and to match the data stored in the first-level scheduling buffer against the analysis results of the data stored in the first-level parsing buffer, and determine according to the attribute of each piece of data and the matching result whether the current data is sent to second-level scheduling.
7. The system for reducing packet forwarding delay according to claim 6, characterized in that the system further comprises: a scheduling counter;
the scheduling counter being used to record the amount of data awaiting scheduling in the first-level buffer; each time a piece of data to be scheduled is added to the first-level buffer, the value of the scheduling counter is incremented by 1, and after a first-level scheduling request command is generated, the scheduling counter is cleared to zero.
8. The system for reducing packet forwarding delay according to claim 6, characterized in that the data processing module is further used to:
if, after the current data is stored into the first-level buffer, the total amount of unscheduled data stored in the first-level buffer equals the system-preset scheduling threshold, generate a first-level scheduling request command;
if, after the current data is stored into the first-level buffer, the total amount of unscheduled data stored in the first-level buffer is less than the system-preset scheduling threshold, judge whether the current data is tail information;
if so, generate a first-level scheduling request command;
if not, keep the current storage state of the first-level buffer.
9. The system for reducing packet forwarding delay according to claim 6, characterized in that the data processing module is further used to:
after the current data is stored into the first-level buffer, if the current data is header information, judge whether the total amount of unscheduled data stored in the current first-level buffer is less than the system-preset parsing threshold;
if so, further judge whether the current data is the last data of the packet; if it is, generate a first-level parsing request command, otherwise keep the current storage state of the first-level buffer;
if not, further judge whether the total amount of unscheduled data stored in the current first-level buffer equals the system-preset parsing threshold; if it does, generate a first-level parsing request command, otherwise keep the current storage state of the first-level buffer.
10. The system for reducing packet forwarding delay according to claim 6, characterized in that the data processing module is further used to:
if the data stored into the first-level scheduling buffer is header information, search the first-level parsing buffer to determine whether it holds an analysis result matching the current data in the first-level scheduling buffer;
if so, send the current data and its matching analysis result to second-level scheduling together;
if not, keep the current storage states of the first-level scheduling buffer and the first-level parsing buffer;
if the data stored into the first-level scheduling buffer is not header information, send the data in the first-level scheduling buffer directly to second-level scheduling;
if the first-level parsing buffer has stored the analysis result of header information, search the first-level scheduling buffer to determine whether it holds header information matching that analysis result;
if so, send the current header information and its matching analysis result to second-level scheduling together;
if not, keep the current storage states of the first-level scheduling buffer and the first-level parsing buffer.
CN201510993048.1A 2015-12-25 2015-12-25 Reduce the method and system of message Forwarding Latency Active CN105610814B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510993048.1A CN105610814B (en) 2015-12-25 2015-12-25 Reduce the method and system of message Forwarding Latency


Publications (2)

Publication Number Publication Date
CN105610814A true CN105610814A (en) 2016-05-25
CN105610814B CN105610814B (en) 2018-09-21

Family

ID=55990352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510993048.1A Active CN105610814B (en) 2015-12-25 2015-12-25 Reduce the method and system of message Forwarding Latency

Country Status (1)

Country Link
CN (1) CN105610814B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101309208A (en) * 2008-06-21 2008-11-19 华中科技大学 Job scheduling system suitable for grid environment and based on reliable expense
CN102497419A (en) * 2011-12-09 2012-06-13 电子科技大学 ESB (Enterprise Service Bus) task scheduling method orienting to large-scale requests and scheduling system
CN103001893A (en) * 2012-12-12 2013-03-27 华为技术有限公司 Multipath pass-through method and switching chip
US8917740B2 (en) * 2007-05-01 2014-12-23 Net Navigation Systems, Llc Channel service manager
CN104618266A (en) * 2015-02-09 2015-05-13 浪潮集团有限公司 Method and device for transferring messages among a plurality of ports


Also Published As

Publication number Publication date
CN105610814B (en) 2018-09-21


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: 215000 unit 13 / 16, 4th floor, building B, No.5 Xinghan street, Suzhou Industrial Park, Jiangsu Province

Patentee after: Suzhou Shengke Communication Co.,Ltd.

Address before: Xinghan Street Industrial Park of Suzhou city in Jiangsu province 215021 B No. 5 Building 4 floor 13/16 unit

Patentee before: CENTEC NETWORKS (SU ZHOU) Co.,Ltd.

CP03 Change of name, title or address