CN111783378A - Data processing method and FPGA - Google Patents

Data processing method and FPGA

Info

Publication number
CN111783378A
CN111783378A
Authority
CN
China
Prior art keywords
data
data processing
module
fpga
modules
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010620075.5A
Other languages
Chinese (zh)
Other versions
CN111783378B (en)
Inventor
***
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Maipu Communication Technology Co Ltd
Original Assignee
Maipu Communication Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Maipu Communication Technology Co Ltd filed Critical Maipu Communication Technology Co Ltd
Priority to CN202010620075.5A priority Critical patent/CN111783378B/en
Publication of CN111783378A publication Critical patent/CN111783378A/en
Application granted granted Critical
Publication of CN111783378B publication Critical patent/CN111783378B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/30 Circuit design
    • G06F30/34 Circuit design for reconfigurable circuits, e.g. field programmable gate arrays [FPGA] or programmable logic devices [PLD]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application provides a data processing method and an FPGA. The method includes: the FPGA receives data from a corresponding channel through each of n data cache modules; the FPGA polls the n data cache modules through a polling module, determines whether the amount of data cached in each data cache module is greater than or equal to a preset threshold, and obtains a query result for each data cache module; and the FPGA controls, according to the query result, a data processing module among m data processing modules to execute the action corresponding to the query result on the data cache module to which the query result belongs, wherein any two data cache modules queried consecutively by the polling module have their corresponding actions executed by different data processing modules. By increasing the number of data processing modules in the FPGA, the time each data processing module has to process data is extended, so that a lower-performance FPGA can also carry out the data processing, which reduces the cost of processing the router's data.

Description

Data processing method and FPGA
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a data processing method and an FPGA.
Background
In the prior art, when an FPGA is used to receive data from a plurality of physical channels of a router, time division multiplexing is usually required, and a high-performance FPGA is usually selected to meet the data processing requirement.
A high-performance FPGA is expensive, which greatly increases the cost of processing the router's data.
Disclosure of Invention
An object of the embodiments of the present application is to provide a data processing method and an FPGA, so as to solve the prior-art problem of the high cost caused by needing a high-performance FPGA.
In a first aspect, an embodiment of the present application provides a data processing method for polling n channels of a router through a field programmable gate array (FPGA), where the FPGA is configured with n data cache modules, m data processing modules and a polling module, and the n data cache modules correspond one-to-one to the n channels of the router; the method comprises: the FPGA receives data from the corresponding channel through each of the n data cache modules; the FPGA polls the n data cache modules through the polling module, determines whether the amount of data cached in each data cache module is greater than or equal to a preset threshold, and obtains a query result for each data cache module; and the FPGA controls, according to the query result, a data processing module among the m data processing modules to execute an action corresponding to the query result on the data cache module to which the query result belongs, wherein any two data cache modules queried consecutively by the polling module have their corresponding actions executed by different data processing modules.
In the foregoing embodiment, the polling module sequentially obtains the amount of data cached in each data cache module and determines whether it is greater than or equal to a preset threshold, yielding a query result that is either greater than or equal to the preset threshold or less than the preset threshold. The FPGA then controls one of the m data processing modules to execute the corresponding action on the data cache module queried this time, and the actions for two data cache modules queried consecutively by the polling module are executed by different data processing modules. By increasing the number of data processing modules in the FPGA, the time each data processing module has to process data is extended, so that a lower-performance FPGA can also carry out the data processing, which reduces the cost of processing the router's data.
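For illustration only (this sketch is not part of the patent text), the dispatch rule described above can be modeled in a few lines of Python; the constant name, the choice of 8 bits as the threshold, and the toy data are assumptions made for the example:

    from collections import deque

    CACHE_THRESHOLD = 8   # bits; the "preset threshold" (8 bits assumed, as in the later examples)

    def poll_once(caches, num_modules):
        """One polling round: consecutive queries are dispatched to different modules."""
        for query_index, cache in enumerate(caches):
            module_id = query_index % num_modules + 1        # round-robin over the m modules
            if len(cache) >= CACHE_THRESHOLD:                # query result: >= threshold
                chunk = [cache.popleft() for _ in range(CACHE_THRESHOLD)]
                print(f"query {query_index + 1}: module {module_id} processes {len(chunk)} bits")
            else:                                            # query result: < threshold
                print(f"query {query_index + 1}: module {module_id} holds its current state")

    # toy setup: n = 4 channels with some buffered bits, m = 2 processing modules
    caches = [deque([1, 0] * 5) for _ in range(4)]
    poll_once(caches, num_modules=2)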
In one possible design, the FPGA polling the n data cache modules through the polling module, determining whether the data cached in each data cache module is greater than or equal to a preset threshold, and obtaining the query result of each data cache module includes: the FPGA polls the n data cache modules through the polling module at intervals of a first time length and determines whether the amount of data cached in each data cache module is greater than or equal to the preset threshold; and obtains, for each data cache module, a query result of either greater than or equal to the preset threshold or less than the preset threshold.
In the above embodiment, the polling module uses the first time length as its interval, querying the next of the n data cache modules once every first time length. Because several data processing modules are provided, when the polling module queries the next data cache module after the first time length has elapsed, one of the data processing modules may still be carrying out its data processing; that is, having several data processing modules decouples the processing time of a data processing module from the limit imposed by the polling module's query interval.
In one possible design, each of the m data processing modules has its own corresponding RAM cache space; the FPGA controlling, according to the query result, a data processing module among the m data processing modules to execute the action corresponding to the query result on the data cache module to which the query result belongs includes: if the query result is greater than or equal to the preset threshold, the FPGA controls an idle data processing module among the m data processing modules to acquire an amount of data equal to the preset threshold from the data cache module to which the query result belongs, and causes that data processing module to process the acquired data.
In the above embodiment, each data processing module having its own corresponding RAM cache space means that each data processing module reads data from, or writes data to, its RAM cache space independently, without affecting the others. Therefore, when the query result is greater than or equal to the preset threshold, any currently idle data processing module may be selected at random from the m data processing modules to serve the data cache module to which the query result belongs.
In one possible design, the m data processing modules share the same RAM cache space; each of the m data processing modules has an identification number, and the query order in which the polling module queries has a correspondence with the identification numbers; the FPGA controlling, according to the query result, a data processing module among the m data processing modules to execute the action corresponding to the query result on the data cache module to which the query result belongs includes: the FPGA, according to the query result, the query order and the correspondence with the identification numbers, causes the data processing module identified by the identification number to execute the action corresponding to the query result within a second time length, where the second time length is longer than the first time length.
In the above embodiment, the m data processing modules share the same RAM cache space; therefore, to prevent two or more of the m data processing modules from accessing the RAM cache space at the same time, the processing order and processing duration of the data processing modules must be constrained so that the periods during which the m data processing modules access the RAM cache space are staggered, avoiding errors.
In one possible design, the FPGA causing, according to the query result, the query order and the correspondence with the identification numbers, the data processing module identified by the identification number to execute the action corresponding to the query result within the second time length includes: if the query result is greater than or equal to the preset threshold, the FPGA, according to the query order corresponding to the query result, the correspondence between the query order and the identification numbers, and the data processing module identified by that identification number, causes the corresponding data processing module to acquire an amount of data equal to the preset threshold from the queried data cache module and to process the acquired data within the second time length.
In the foregoing embodiment, if the query result is greater than or equal to the preset threshold, the FPGA obtains the query order corresponding to this query result and, from the correspondence between the query order and the identification numbers of the data processing modules, identifies the data processing module assigned to that query order; that data processing module acquires an amount of data equal to the preset threshold from the data cache module and completes its processing within the second time length. By fixing the identification number of the data processing module and the time in which it processes data, accesses to the RAM cache space during data processing are staggered in time, so errors are avoided.
In one possible design, the second time length consists, in order, of a recovery sub-period, a processing sub-period and a saving sub-period; causing the data processing module to process the acquired data within the second time length includes: causing the data processing module, within the recovery sub-period, to recover from a random access memory (RAM) the legacy data from the previous processing and the state information corresponding to that legacy data; causing the data processing module, within the processing sub-period, to process the data whose amount equals the preset threshold together with the legacy data from the previous processing to obtain a processing result, where the processing result includes data to be output, current legacy data and state information corresponding to the current legacy data, the latter two being the data to be saved; and causing the data processing module, within the saving sub-period, to store the data to be saved into the RAM.
In the foregoing embodiment, the second time length includes three sub-periods. When the data processing module processes data within the second time length, it recovers from the RAM, within the recovery sub-period, the legacy data from the previous processing and its corresponding state information; within the processing sub-period it processes the data whose amount equals the preset threshold together with that legacy data to obtain a processing result comprising the data to be output, the current legacy data and the state information corresponding to the current legacy data; and within the saving sub-period it stores the current legacy data and its state information into the RAM. Therefore, when dividing the RAM access time, it suffices to ensure that the recovery sub-periods and saving sub-periods of the individual data processing modules never fall in the same period.
In one possible design, the FPGA causing, according to the query result, the query order and the correspondence with the identification numbers, the data processing module identified by the identification number to execute the action corresponding to the query result within the second time length includes: if the query result is less than the preset threshold, the FPGA, according to the query order corresponding to the query result, the correspondence between the query order and the identification numbers, and the data processing module identified by that identification number, causes the corresponding data processing module to hold its current state for the second time length.
In the above embodiment, so that accesses to the RAM cache space remain staggered in time and errors are avoided, even a data processing module that currently has no data to process is made to hold its current state for the second time length.
In a second aspect, an embodiment of the present application provides an FPGA configured to poll n channels of a router, where the FPGA is configured with n data cache modules, m data processing modules and a polling module, and the n data cache modules correspond one-to-one to the n channels of the router; the FPGA is configured to receive data from the corresponding channel through each of the n data cache modules; the FPGA is configured to poll the n data cache modules through the polling module, determine whether the amount of data cached in each data cache module is greater than or equal to a preset threshold, and obtain a query result for each data cache module; and the FPGA is configured to control, according to the query result, a data processing module among the m data processing modules to execute an action corresponding to the query result on the data cache module to which the query result belongs, wherein any two data cache modules queried consecutively by the polling module are served by different data processing modules.
In one possible design, the FPGA is specifically configured to poll the n data cache modules through the polling module at intervals of a first time length, determine whether the amount of data cached in each data cache module is greater than or equal to the preset threshold, and obtain, for each data cache module, a query result of either greater than or equal to the preset threshold or less than the preset threshold.
In one possible design, each of the m data processing modules has its own corresponding RAM cache space; if the query result is greater than or equal to the preset threshold, the FPGA is specifically configured to control an idle data processing module among the m data processing modules to acquire an amount of data equal to the preset threshold from the data cache module to which the query result belongs, and to cause that data processing module to process the acquired data.
In one possible design, the m data processing modules share the same RAM cache space; each of the m data processing modules has an identification number, and the query order in which the polling module queries has a correspondence with the identification numbers; the FPGA is specifically configured to cause, according to the query result, the query order and the correspondence with the identification numbers, the data processing module identified by the identification number to execute the action corresponding to the query result within a second time length, where the second time length is longer than the first time length.
In one possible design, if the query result is greater than or equal to the preset threshold, the FPGA is specifically configured to, according to the query order corresponding to the query result, the correspondence between the query order and the identification numbers, and the data processing module identified by that identification number, cause the corresponding data processing module to acquire an amount of data equal to the preset threshold from the queried data cache module and to process the acquired data within the second time length.
In one possible design, the second time length consists, in order, of a recovery sub-period, a processing sub-period and a saving sub-period; the FPGA is specifically configured to cause the data processing module, within the recovery sub-period, to recover from the RAM the legacy data from the previous processing and the state information corresponding to that legacy data; to cause the data processing module, within the processing sub-period, to process the data whose amount equals the preset threshold together with the legacy data from the previous processing to obtain a processing result comprising data to be output, current legacy data and state information corresponding to the current legacy data, the latter two being the data to be saved; and to cause the data processing module, within the saving sub-period, to store the data to be saved into the RAM.
In one possible design, if the query result is less than the preset threshold, the FPGA is specifically configured to, according to the query order corresponding to the query result, the correspondence between the query order and the identification numbers, and the data processing module identified by that identification number, cause the corresponding data processing module to hold its current state for the second time length.
In a third aspect, an embodiment of the present application provides an electronic device configured to perform the method of the first aspect or any optional implementation of the first aspect.
In a fourth aspect, the present application provides a readable storage medium having stored thereon an executable program which, when executed by a processor, performs the method of the first aspect or any of the optional implementations of the first aspect.
In a fifth aspect, the present application provides an executable program product which, when run on a computer, causes the computer to perform the method of the first aspect or any possible implementation manner of the first aspect.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required for the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
FIG. 1 is a schematic block diagram of an FPGA provided in an embodiment of the present application;
FIG. 2 is a flowchart of a data processing method provided in an embodiment of the present application;
FIG. 3 is a flowchart of the sub-steps of step S120 in FIG. 2;
FIG. 4 is a flowchart of the sub-steps of step S130 in FIG. 2;
FIG. 5 is a diagram illustrating an application of the data processing method.
Detailed Description
When an FPGA is used to receive data from a plurality of physical channels of a router, time division multiplexing is usually required, and a high-performance FPGA usually has to be selected for the time division multiplexing, for the following reasons.
Suppose the router adopts the 30-channel pulse code modulation (E1) telecom standard, where the E1 rate is 2.048 Mbit/s, and the router has 63 E1 physical channels. Then any E1 physical channel of the router needs 488 ns to receive or send 1 bit, and 488 × 8 = 3904 ns to receive or send 1 byte (8 bits).
The FPGA is configured with a polling module, a data processing module and 63 data cache modules. The data cache modules correspond one-to-one to the E1 physical channels.
A data cache module receives data from its corresponding E1 physical channel; the polling module polls the 63 data cache modules at a fixed time interval, i.e. it queries the amount of data stored in each data cache module in turn; and the data processing module performs the corresponding operation according to the amount of data stored in the data cache module.
To avoid losing data, the time for the polling module to poll all 63 data cache modules once must be less than or equal to the time for any E1 physical channel to receive or send 1 byte, i.e. less than or equal to 3904 ns.
If a lower-performance FPGA is selected, the polling module can complete the polling within 3904 ns, but the data processing module cannot complete its data processing in that time. For example, suppose an FPGA with a 125 MHz system clock (an 8 ns period) is used, such as a Gowin GW2A-LV55PG484C8/I7 FPGA. If this FPGA's polling module queries one of the 63 data cache modules every 48 ns (i.e. every 6 cycles), the time to poll all 63 data cache modules is 63 × 6 × 8 = 3024 ns, which is less than 3904 ns. However, when a query result indicates that the data processing module must process data, the data processing module of this FPGA cannot finish processing the data within 6 cycles, so the data processing function falls out of step with the polling function.
If a high-performance FPGA is adopted instead, suppose its system clock reaches 250 MHz, giving a 4 ns clock period. With the same 48 ns polling interval, each query now spans 12 cycles, and thanks to the higher performance the data processing module can finish processing the data within those 12 cycles when the query result requires it. This is why a high-performance FPGA usually has to be selected for time division multiplexing, but a high-performance FPGA is expensive.
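As a quick sanity check of the arithmetic above (illustrative only, not part of the original text), the following Python snippet recomputes the timing budget using the example figures of 63 channels, a 48 ns polling step and the two clock frequencies mentioned:

    E1_RATE_BPS = 2.048e6                # E1 line rate: 2.048 Mbit/s
    NS_PER_BIT = 1e9 / E1_RATE_BPS       # about 488 ns per bit
    NS_PER_BYTE = NS_PER_BIT * 8         # about 3904 ns per byte
    N_CHANNELS = 63

    def poll_round_ns(clock_mhz, cycles_per_query):
        """Time to query all 63 data cache modules once."""
        period_ns = 1000.0 / clock_mhz
        return N_CHANNELS * cycles_per_query * period_ns

    print(round(NS_PER_BYTE))             # ~3906 ns (3904 ns when the 488 ns bit time is rounded down)
    print(round(poll_round_ns(125, 6)))   # 3024 ns: polling fits, but 6 cycles are too few to process
    print(round(poll_round_ns(250, 12)))  # 3024 ns: same interval, yet 12 cycles per query to process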
According to the embodiments of the present application, increasing the number of data processing modules in the FPGA extends the time each data processing module has to process data, so that a lower-performance FPGA can also carry out the data processing and a higher-performance FPGA is not required.
Referring to FIG. 1, FIG. 1 shows the modules with which the FPGA 10 is configured: n data cache modules 102, a polling module 104, m data processing modules 106, and a RAM 108.
Each of the n data cache modules 102 is connected to the corresponding channel interface of the router 20 and is configured to receive data from the corresponding channel. Each of the n data cache modules 102 is also connected to the polling module 104 and is configured to be polled by the polling module 104.
The polling module 104 is further connected to the m data processing modules 106, and the FPGA may control one of the m data processing modules 106 to execute the action corresponding to the query result obtained when the polling module 104 polls a data cache module 102.
The m data processing modules 106 are all connected to the RAM 108. Optionally, the RAM 108 may be a single undivided cache space that the m data processing modules 106 share, or it may be divided into several cache spaces, with each of the m data processing modules 106 having its own corresponding cache space.
FIG. 2 shows a data processing method provided in an embodiment of the present application. The method may be executed by the FPGA shown in FIG. 1 and specifically includes the following steps S110 to S130:
in step S110, the FPGA receives data from a corresponding channel through each data cache module 102 of the n data cache modules 102.
Each of the n data cache modules 102 can receive data from a channel of the corresponding router, and each data cache module 102 receives data independently of the others. The channels may be E1 physical channels; suppose n is 63. Each E1 physical channel may be connected to a data cache module 102 as a single corresponding channel, or each E1 physical channel may correspond to 31 channels, each connected to a data cache module 102. The specific way a channel is connected to a data cache module 102 should not be construed as limiting the application.
Step S120, the FPGA polls the n data cache modules 102 through the polling module 104, determines whether the amount of data cached in each data cache module 102 is greater than or equal to a preset threshold, and obtains a query result for each data cache module 102.
Optionally, referring to FIG. 3, FIG. 3 shows an implementation of step S120, which may specifically include the following steps S121 to S122:
step S121, the FPGA polls the n data cache modules 102 through the polling module 104 with the first time length as a time interval, and determines whether the data cached in each data cache module 102 is greater than or equal to a preset threshold.
When the polling module 104 queries the n data cache modules 102 in turn, it may read the amount of data currently cached in a given data cache module 102, compare that amount with the preset threshold, and determine whether the amount equals or exceeds the threshold. The preset threshold may be 8 bits, i.e. one byte. Of course, the preset threshold may also take other values, such as 16 bits; the specific value of the preset threshold should not be construed as limiting the application.
It should be appreciated that the FPGA may take the full first time length to perform the above actions, or it may complete them in less than the first time length and then wait until the first time length has elapsed. That is, the actual time the polling module 104 needs to read the data amount and compare it with the preset threshold may be shorter than the first time length, which leaves sufficient time for the processing by the data processing modules 106.
Step S122, obtain, for each data cache module 102, a query result of either greater than or equal to the preset threshold or less than the preset threshold.
The polling module 104 obtains, in turn, the amount of data cached in each data cache module 102 and determines whether it is greater than or equal to the preset threshold, thereby obtaining a query result that is either greater than or equal to the preset threshold or less than the preset threshold.
Step S130, the FPGA controls, according to the query result, one of the m data processing modules 106 to execute the action corresponding to the query result on the data cache module 102 to which the query result belongs, wherein any two data cache modules 102 queried consecutively by the polling module 104 are served by different data processing modules 106.
The FPGA controls, according to the query result, one of the m data processing modules 106 to execute the corresponding action on the data cache module 102 queried this time, and the actions for two consecutive queries by the polling module 104 are executed by different data processing modules 106. By increasing the number of data processing modules 106 in the FPGA, the time each data processing module 106 has to process data is extended: after the first time length has elapsed and the polling module 104 queries the next data cache module 102, a data processing module 106 may still be carrying out its processing. In other words, having several data processing modules 106 decouples the processing time of a data processing module 106 from the polling module 104's query interval, so that a lower-performance FPGA can also carry out the data processing, which reduces the cost of processing the router's data.
In one embodiment, each of the m data processing modules 106 has its own corresponding RAM cache space; in this embodiment, step S130 may specifically include:
if the query result is greater than or equal to the preset threshold, the FPGA controls an idle data processing module 106 among the m data processing modules 106 to acquire an amount of data equal to the preset threshold from the data cache module 102 to which the query result belongs, and causes that data processing module 106 to process the acquired data.
Each data processing module 106 having its own corresponding RAM cache space means that each data processing module 106 reads the previous legacy data and its corresponding state information from, or writes new legacy data and corresponding state information to, its own RAM cache space independently, without affecting the others. Therefore, when the query result is greater than or equal to the preset threshold, any currently idle data processing module 106 may be selected at random from the m data processing modules 106 to serve the data cache module 102 to which the query result belongs, without having to stagger accesses to the RAM 108.
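Under the independent-RAM design just described, module selection is simple because no RAM access needs to be staggered. The sketch below (hypothetical names; not the patent's implementation) shows one way to pick an idle module at random, as the text suggests:

    import random

    class DataProcessingModule:
        """Toy model of a data processing module with its own private RAM cache space."""
        def __init__(self, ident):
            self.ident = ident
            self.busy = False
            self.private_ram = {"legacy_bits": [], "state": None}  # assumed layout

    def dispatch_to_idle(modules, data_chunk):
        """Independent-RAM case: any currently idle module may take the data."""
        idle = [m for m in modules if not m.busy]
        if not idle:
            return None                      # no idle module available for this query
        chosen = random.choice(idle)         # a randomly selected idle module
        chosen.busy = True                   # the chosen module now processes data_chunk
        return chosen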
In another specific embodiment, the m data processing modules 106 share the same RAM cache space, each of the m data processing modules 106 has its own identification number, and the query order in which the polling module 104 queries has a correspondence with the identification numbers.
In this embodiment, step S130 may specifically include:
the FPGA, according to the query result, the query order and the correspondence with the identification numbers, causes the data processing module 106 identified by the identification number to execute the action corresponding to the query result within a second time length, where the second time length is longer than the first time length.
The m data processing modules 106 share the same RAM cache space; to prevent two or more of them from accessing the RAM cache space at the same time, the processing order and processing duration of the data processing modules 106 must be constrained so that the periods during which the m data processing modules 106 access the RAM cache space are staggered, avoiding errors.
The query order is the order in which the polling module 104 queries the data cache modules 102. There are n data cache modules 102, so a complete polling round consists of n queries, each addressed to a different one of the n data cache modules 102. In a complete polling round, the polling module 104 therefore performs the 1st query, the 2nd query, the 3rd query, and so on up to the nth query; this sequence of 1st, 2nd, 3rd, ... is the query order.
The identification number identifies a data processing module 106. For convenience of description, suppose the identification numbers run from 1 to m, corresponding to the m data processing modules 106 respectively. The number of data processing modules 106 is typically smaller than the number of queries, so the n queries are assigned to the m identification numbers cyclically. The correspondence between the query order and the identification numbers may be as follows (a short code sketch of this mapping follows the examples below):
the 1st query corresponds to the data processing module 106 with identification number 1;
the 2nd query corresponds to the data processing module 106 with identification number 2;
……
the mth query corresponds to the data processing module 106 with identification number m;
the (m+1)th query corresponds to the data processing module 106 with identification number 1;
the (m+2)th query corresponds to the data processing module 106 with identification number 2;
……
the 2mth query corresponds to the data processing module 106 with identification number m;
the (2m+1)th query corresponds to the data processing module 106 with identification number 1;
the (2m+2)th query corresponds to the data processing module 106 with identification number 2;
……
the 3mth query corresponds to the data processing module 106 with identification number m;
……
For example, if m is 4, the query orders corresponding to the data processing module 106 with identification number 1 are i·m + 1, those corresponding to identification number 2 are i·m + 2, those corresponding to identification number 3 are i·m + 3, and those corresponding to identification number 4 are i·m + 4, where i is a natural number.
If m is 2, the query orders corresponding to the data processing module 106 with identification number 1 are i·m + 1, and those corresponding to identification number 2 are i·m + 2.
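The cyclic correspondence listed above is simply the query order taken modulo m. A minimal Python sketch (the function name and the 1-based numbering convention are assumptions made for illustration):

    def module_id_for_query(query_order, m):
        """Map the k-th query (1-based) to an identification number in 1..m.

        The (i*m + r)-th query is handled by the module with identification number r.
        """
        return ((query_order - 1) % m) + 1

    # m = 2: queries 1, 3, 5, ... go to module 1; queries 2, 4, 6, ... go to module 2
    assert [module_id_for_query(k, 2) for k in range(1, 7)] == [1, 2, 1, 2, 1, 2]
    # m = 4: query 5 (= 1*4 + 1) goes to module 1, query 6 to module 2
    assert module_id_for_query(5, 4) == 1 and module_id_for_query(6, 4) == 2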
Referring to FIG. 4, this step may specifically include the following steps S131 and S132:
Step S131, if the query result is greater than or equal to the preset threshold, the FPGA, according to the query order corresponding to this query result, the correspondence between the query order and the identification numbers, and the data processing module 106 identified by that identification number, causes the corresponding data processing module 106 to acquire an amount of data equal to the preset threshold from the queried data cache module 102 and to process the acquired data within the second time length.
That is, if the query result is greater than or equal to the preset threshold, the FPGA obtains the query order corresponding to this query result and, from the correspondence between the query order and the identification numbers, identifies the data processing module 106 assigned to it; that data processing module 106 acquires an amount of data equal to the preset threshold from the data cache module 102 and completes its processing within the second time length. By fixing the identification number of the data processing module 106 and the time in which it processes data, accesses to the RAM cache space are staggered in time, so errors are avoided.
Optionally, the second time length consists, in order, of a recovery sub-period, a processing sub-period and a saving sub-period.
Causing the data processing module 106 to process the acquired data within the second time length may specifically include:
causing the data processing module 106, within the recovery sub-period, to recover from the random access memory (RAM) 108 the legacy data from the previous processing and the state information corresponding to that legacy data; causing the data processing module 106, within the processing sub-period, to process the acquired data, whose amount equals the preset threshold, together with the legacy data from the previous processing to obtain a processing result, where the processing result includes data to be output, current legacy data and state information corresponding to the current legacy data, the latter two being the data to be saved; and causing the data processing module 106, within the saving sub-period, to store the data to be saved into the RAM 108.
The second time length thus includes three sub-periods. When the data processing module 106 processes data within the second time length, it recovers from the RAM 108, within the recovery sub-period, the legacy data from the previous processing and its corresponding state information; within the processing sub-period it processes the acquired data together with that legacy data to obtain a processing result comprising the data to be output, the current legacy data and the state information corresponding to the current legacy data; and within the saving sub-period it stores the current legacy data and its state information into the RAM 108. Therefore, when dividing the RAM 108 access time, it suffices to ensure that the recovery sub-periods and saving sub-periods of the individual data processing modules 106 never fall in the same period.
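Purely as an illustration of the recovery/processing/saving split (all names are hypothetical and the actual processing is abstracted into a callback), one second time length for a module sharing the RAM could be modeled as:

    class SharedRam:
        """Stand-in for RAM 108: one slot of legacy data and state info per module."""
        def __init__(self):
            self.slots = {}

        def load(self, module_id):
            return self.slots.get(module_id, ([], None))   # assumed empty initial state

        def store(self, module_id, legacy_bits, state):
            self.slots[module_id] = (legacy_bits, state)

    def run_second_time_length(module_id, ram, new_bits, process_fn):
        """One second time length: recovery, then processing, then saving sub-period."""
        legacy_bits, state = ram.load(module_id)        # recovery sub-period (RAM read)
        out, new_legacy, new_state = process_fn(legacy_bits + new_bits, state)  # processing sub-period (no RAM access)
        ram.store(module_id, new_legacy, new_state)     # saving sub-period (RAM write)
        return out

Only the load and store steps touch the shared RAM 108, which is why it suffices to keep the recovery and saving sub-periods of different modules in different clock cycles.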
Specifically, the legacy data comprises a legacy data amount and the leftover data values themselves, and the legacy data amount may or may not be 0. Take HDLC frames as an example. HDLC uses the fixed flag field 01111110 as the frame boundary, so to prevent a data field identical to the flag field from being mistaken for an end-of-transmission marker, bit stuffing is applied when data is transmitted: whenever five consecutive 1 bits appear after a 0 in the transmitted bit sequence, a 0 is inserted as the next bit. The receiving end performs the reverse operation: when it finds five consecutive 1 bits after a 0, it checks the next bit; if that bit is 0, the 0 is deleted; if it is 1 and the bit after it is 0, the sequence is taken to be the flag field 01111110. This guarantees that the data bits never contain a field identical to the flag field. Because the receiving end deletes the stuffed 0 bits, the amount of data remaining after deletion may be less than 8 bits, and data of less than 8 bits can only be processed once it has been completed to 8 bits; such data of less than 8 bits is the legacy data. After the legacy data from the previous processing and the newly received data, whose amount equals the preset threshold, are processed together, the processed 8-bit data to be output and the new legacy data are obtained.
The state information corresponding to the legacy data includes the number of bits of the legacy data, which is 0 to 7, and the state of the HDLC state machine in the data processing module after the previous processing (the HDLC state machine transitions among three states: searching for the opening flag, about to enter a frame, and processing the data within a frame). In the recovery sub-period, the HDLC state machine in the data processing module is restored to its state after the previous processing, so that the data processing module continues processing where it left off. Depending on which of the three HDLC states is active, different processing operations are performed on the data, including identifying the frame header and trailer, outputting the intra-frame data, and reporting error information such as incomplete bytes, over-short frames and over-long frames.
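To make the legacy-data bookkeeping concrete, here is a reduced illustration of the receive-side zero-bit removal and the carry-over of fewer than 8 bits (this is not the patent's HDLC state machine; flag and error handling are omitted and all names are assumptions):

    def destuff_bits(new_bits, leftover):
        """Remove stuffed 0s from a received HDLC bit stream.

        new_bits and leftover are lists of 0/1 values; returns (full_bytes, new_leftover),
        where new_leftover is the trailing run of fewer than 8 bits kept as legacy data.
        """
        out, ones = [], 0
        for b in leftover + new_bits:
            if ones == 5:              # five consecutive 1s seen: inspect the next bit
                ones = 0
                if b == 0:             # a stuffed 0 inserted by the sender: drop it
                    continue
                # b == 1 would belong to a flag or abort sequence, not handled here
            out.append(b)
            ones = ones + 1 if b == 1 else 0
        cut = len(out) - len(out) % 8
        return [out[i:i + 8] for i in range(0, cut, 8)], out[cut:]

    # 3 bits were left over from the previous processing; 8 new bits (the preset threshold) arrive
    full, legacy = destuff_bits([1, 1, 1, 0, 1, 0, 1, 1], leftover=[0, 1, 1])
    print(full, legacy)   # one complete byte and 2 leftover bits after one stuffed 0 is removed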
Step S132, if the query result is less than the preset threshold, the FPGA, according to the query order corresponding to this query result, the correspondence between the query order and the identification numbers, and the data processing module 106 identified by that identification number, causes the corresponding data processing module 106 to hold its current state for the second time length.
So that accesses to the RAM cache space remain staggered in time and errors are avoided, even a data processing module 106 that currently has no data to process is made to hold its current state for the second time length.
For ease of description, referring to FIG. 5, consider the case where several data processing modules 106 share the same RAM 108. Suppose m is 2 (i.e. the data processing modules 106 are data processing module 1 and data processing module 2), n is 63 (i.e. there are 63 data cache modules 102), the preset threshold is 8 bits, the first time length is 6 FPGA clock cycles, and the second time length is 12 FPGA clock cycles, with the recovery sub-period, processing sub-period and saving sub-period of the second time length occupying the first 2, the middle 8 and the last 2 of those 12 clock cycles, respectively.
The FPGA queries one of the 63 data cache modules 102 for the first time through the polling module 104; the polling module 104 reads the amount of data in that data cache module 102, determines whether it is less than 8 bits or greater than or equal to 8 bits, and obtains the corresponding query result.
According to the query result and the correspondence of the first query to data processing module 1, the FPGA controls data processing module 1 to execute the corresponding operation.
If the query result is that the data amount is greater than or equal to 8 bits, then:
data processing module 1 recovers the legacy data from the previous processing and its corresponding state information from the RAM 108 during the first 2 clock cycles (the recovery sub-period) of the 6 clock cycles of the first query;
it processes the data whose amount equals the preset threshold together with the legacy data from the previous processing during the period from the start of the 3rd clock cycle of the 6 clock cycles of the first query to the end of the 4th clock cycle of the 6 clock cycles of the second query (the processing sub-period), obtaining a processing result that includes the data to be output, the current legacy data and the state information corresponding to the current legacy data, the latter two being the data to be saved; and
it stores the data to be saved into the RAM 108 during the last 2 clock cycles of the 6 clock cycles of the second query (the saving sub-period).
If the query result is that the data amount is less than 8 bits, then:
data processing module 1 holds its current state for the 12 clock cycles made up of the first and second queries.
The FPGA then queries, for the second time, one of the 63 data cache modules 102 that has not yet been queried, through the polling module 104; the polling module 104 reads the amount of data in that data cache module 102, determines whether it is less than 8 bits or greater than or equal to 8 bits, and obtains the corresponding query result.
According to the query result and the correspondence of the second query to data processing module 2, the FPGA controls data processing module 2 to execute the corresponding operation.
If the query result is that the data amount is greater than or equal to 8 bits, then:
data processing module 2 recovers the legacy data from the previous processing and its corresponding state information from the RAM 108 during the first 2 clock cycles (the recovery sub-period) of the 6 clock cycles of the second query;
it processes the data whose amount equals the preset threshold together with the legacy data from the previous processing during the period from the start of the 3rd clock cycle of the 6 clock cycles of the second query to the end of the 4th clock cycle of the 6 clock cycles of the third query (the processing sub-period), obtaining a processing result that includes the data to be output, the current legacy data and the state information corresponding to the current legacy data, the latter two being the data to be saved; and
it stores the data to be saved into the RAM 108 during the last 2 clock cycles of the 6 clock cycles of the third query (the saving sub-period).
If the query result is that the data amount is less than 8 bits, then:
data processing module 2 holds its current state for the 12 clock cycles made up of the second and third queries.
As can be seen from FIG. 5, data processing module 1 and data processing module 2 access the RAM 108 at different times, which avoids the errors that would result from multiple data processing modules 106 accessing the RAM 108 at the same time.
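The staggering shown in FIG. 5 can also be checked numerically. The snippet below (an illustrative check only; it uses the example values m = 2, a 6-cycle polling interval and the 2/8/2-cycle split) lists the global clock cycles in which the module serving each query reads or writes the shared RAM and verifies that no two of these windows overlap:

    POLL_CYCLES = 6                      # first time length: 6 clock cycles per query
    RECOVER, PROCESS, SAVE = 2, 8, 2     # sub-periods of the 12-cycle second time length

    def ram_access_cycles(query_index):
        """Global cycle numbers in which the module handling this query touches the RAM."""
        start = query_index * POLL_CYCLES
        recover = set(range(start, start + RECOVER))
        save = set(range(start + RECOVER + PROCESS, start + RECOVER + PROCESS + SAVE))
        return recover | save

    windows = [ram_access_cycles(k) for k in range(6)]   # first six queries
    for a in range(len(windows)):
        for b in range(a + 1, len(windows)):
            assert not (windows[a] & windows[b]), (a, b)
    print("no two queries access the RAM in the same clock cycle")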
Referring again to FIG. 1, FIG. 1 shows an FPGA provided in an embodiment of the present application, configured to poll n channels of a router, the FPGA being configured with n data cache modules 102, m data processing modules 106 and a polling module 104, where the n data cache modules 102 correspond one-to-one to the n channels of the router;
the FPGA is configured to receive data from the corresponding channel through each of the n data cache modules 102;
the FPGA is configured to poll the n data cache modules 102 through the polling module 104, determine whether the amount of data cached in each data cache module 102 is greater than or equal to a preset threshold, and obtain a query result for each data cache module 102; and
the FPGA is configured to control, according to the query result, one of the m data processing modules 106 to execute the action corresponding to the query result on the data cache module 102 to which the query result belongs, wherein any two data cache modules 102 queried consecutively by the polling module 104 are served by different data processing modules 106.
The method performed by the FPGA through each of these modules has been described in detail above and is not repeated here.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A data processing method, characterized in that the method is used for polling n channels of a router through a field programmable gate array (FPGA), the FPGA is provided with n data cache modules, m data processing modules and a polling module, the n data cache modules correspond one-to-one to the n channels of the router, and m and n are positive integers greater than or equal to 2; the method comprises:
the FPGA receives data from the corresponding channel through each of the n data cache modules;
the FPGA polls the n data cache modules through the polling module, determines whether the amount of data cached in each data cache module is greater than or equal to a preset threshold, and obtains a query result for each data cache module; and
the FPGA controls, according to the query result, a data processing module among the m data processing modules to execute an action corresponding to the query result on the data cache module to which the query result belongs, wherein any two data cache modules queried consecutively by the polling module have their corresponding actions executed by different data processing modules.
2. The method according to claim 1, wherein the FPGA polling the n data cache modules through the polling module, determining whether the data cached in each data cache module is greater than or equal to the preset threshold, and obtaining the query result of each data cache module comprises:
the FPGA polls the n data cache modules through the polling module at intervals of a first time length and determines whether the amount of data cached in each data cache module is greater than or equal to the preset threshold; and
obtains, for each data cache module, a query result of either greater than or equal to the preset threshold or less than the preset threshold.
3. The method according to claim 2, wherein each of the m data processing modules has its own corresponding RAM cache space;
the FPGA controlling, according to the query result, a data processing module among the m data processing modules to execute the action corresponding to the query result on the data cache module to which the query result belongs comprises:
if the query result is greater than or equal to the preset threshold, the FPGA controls an idle data processing module among the m data processing modules to acquire an amount of data equal to the preset threshold from the data cache module to which the query result belongs, and causes that data processing module to process the acquired data.
4. The method according to claim 2, wherein the m data processing modules share the same RAM cache space; each of the m data processing modules has an identification number, and the query order in which the polling module queries has a correspondence with the identification numbers;
the FPGA controlling, according to the query result, a data processing module among the m data processing modules to execute the action corresponding to the query result on the data cache module to which the query result belongs comprises:
the FPGA, according to the query result, the query order and the correspondence with the identification numbers, causes the data processing module identified by the identification number to execute the action corresponding to the query result within a second time length, wherein the second time length is longer than the first time length.
5. The method according to claim 4, wherein the FPGA causing, according to the query result, the query order and the correspondence with the identification numbers, the data processing module identified by the identification number to execute the action corresponding to the query result within the second time length comprises:
if the query result is greater than or equal to the preset threshold, the FPGA, according to the query order corresponding to the query result, the correspondence between the query order and the identification numbers, and the data processing module identified by that identification number, causes the corresponding data processing module to acquire an amount of data equal to the preset threshold from the queried data cache module and to process the acquired data within the second time length.
6. The method of claim 5, wherein the second time length comprises, in order, a recovery sub-period, a processing sub-period, and a storage sub-period;
the causing the data processing module to process the acquired data within the second time length comprises:
causing the data processing module to restore, within the recovery sub-period, the legacy data left over from the previous processing and the state information corresponding to that legacy data from a random access memory (RAM);
causing the data processing module to process, within the processing sub-period, the acquired data whose amount equals the preset threshold together with the legacy data from the previous processing, to obtain a processing result, wherein the processing result comprises data to be output, current legacy data, and state information corresponding to the current legacy data, and the current legacy data together with its corresponding state information constitutes the data to be stored;
and causing the data processing module to store the data to be stored into the RAM within the storage sub-period.
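The three sub-periods of claim 6 can be pictured with the following minimal Python sketch, which assumes the shared RAM is keyed by channel and uses a placeholder "processing" step that keeps the last item of each batch back as the current legacy data; the dictionary layout, the key scheme and the placeholder processing are assumptions, not the patented logic.

shared_ram = {}   # assumed layout: channel_id -> (legacy_data, state_info)

def handle_batch(channel_id, batch):
    # Recovery sub-period: restore the previously left-over data and its state.
    legacy, state = shared_ram.get(channel_id, ([], {}))

    # Processing sub-period: process the legacy data together with the new
    # threshold-sized batch; the "processing" here is a placeholder that keeps
    # the last item back as the current legacy data.
    stream = legacy + batch
    to_output, new_legacy = stream[:-1], stream[-1:]
    new_state = {"items_seen": state.get("items_seen", 0) + len(batch)}

    # Storage sub-period: write the current legacy data and its state to RAM.
    shared_ram[channel_id] = (new_legacy, new_state)
    return to_output

# Two successive batches on the same channel: the second one picks up the
# legacy item left behind by the first.
print(handle_batch(0, list(range(8))))
print(handle_batch(0, list(range(8))))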
7. The method according to claim 4, wherein the executing, by the FPGA through the data processing module identified by the identification number, the action corresponding to the query result within the second time length according to the query result, the query order, and the correspondence between the query order and the identification numbers comprises:
if the query result is smaller than the preset threshold, causing, by the FPGA, the corresponding data processing module, determined from the query order corresponding to the query result, the correspondence between the query order and the identification numbers, and the identification number, to remain in its current state for the second time length.
8. An FPGA for polling n channels of a router, the FPGA being provided with n data cache modules, m data processing modules, and a polling module, wherein the n data cache modules correspond one-to-one to the n channels of the router;
the FPGA is configured to receive data from the corresponding channel through each of the n data cache modules;
the FPGA is configured to poll the n data cache modules through the polling module, determine whether the data cached in each data cache module is greater than or equal to a preset threshold, and obtain a query result for each data cache module;
the FPGA is configured to control, according to the query result, a data processing module among the m data processing modules to execute an action corresponding to the query result on the data cache module corresponding to the query result, wherein the actions corresponding to the query results of two data cache modules queried in succession by the polling module are executed by different data processing modules.
9. The FPGA of claim 8, wherein
the FPGA is specifically configured to poll the n data cache modules through the polling module at intervals of a first time length, and to determine whether the data cached in each data cache module is greater than or equal to the preset threshold;
and to obtain the query result of each data cache module, wherein the query result indicates either that the cached data is greater than or equal to the preset threshold or that it is smaller than the preset threshold.
10. The FPGA of claim 9, wherein each of the m data processing modules has its own corresponding RAM cache space;
if the query result is greater than or equal to the preset threshold, the FPGA is specifically configured to control an idle data processing module among the m data processing modules to acquire, from the data cache module corresponding to the query result, an amount of data equal to the preset threshold, and to cause that data processing module to process the acquired data.
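To tie the apparatus claims 8 to 10 together, the following end-to-end behavioral sketch in Python assumes that the polling module issues one query every first time length and assigns consecutive queries to processing modules in round-robin order, so that each module has up to m polling intervals (the second time length) to finish a batch. All concrete numbers, names and the round-robin choice are illustrative assumptions; an actual implementation would be FPGA logic rather than software.

from collections import deque

# Assumed parameters for illustration only.
N_CHANNELS, M_MODULES, THRESHOLD = 4, 2, 8
FIRST_LEN = 1                        # interval between two queries (first time length)
SECOND_LEN = M_MODULES * FIRST_LEN   # per-module time budget (second time length)

caches = [deque() for _ in range(N_CHANNELS)]   # one data cache module per channel

def run(num_queries, incoming, process):
    """incoming(t, ch) yields the items arriving on channel ch at time t;
    process(batch) stands in for the work of one data processing module."""
    outputs = []
    for q in range(num_queries):
        t = q * FIRST_LEN            # the q-th query happens at time t
        ch = q % N_CHANNELS          # the polling module walks the caches in order
        module_id = q % M_MODULES    # consecutive queries go to different modules
        caches[ch].extend(incoming(t, ch))
        if len(caches[ch]) >= THRESHOLD:
            batch = [caches[ch].popleft() for _ in range(THRESHOLD)]
            # module `module_id` has until t + SECOND_LEN to deliver this result
            outputs.append((t, module_id, process(batch)))
    return outputs

# Example: three items arrive on every channel at each query instant.
results = run(16, lambda t, ch: [t] * 3, lambda batch: sum(batch))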
CN202010620075.5A 2020-06-30 2020-06-30 Data processing method and FPGA Active CN111783378B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010620075.5A CN111783378B (en) 2020-06-30 2020-06-30 Data processing method and FPGA

Publications (2)

Publication Number Publication Date
CN111783378A true CN111783378A (en) 2020-10-16
CN111783378B CN111783378B (en) 2022-05-17

Family

ID=72761434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010620075.5A Active CN111783378B (en) 2020-06-30 2020-06-30 Data processing method and FPGA

Country Status (1)

Country Link
CN (1) CN111783378B (en)


Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101764797A (en) * 2009-12-17 2010-06-30 哈尔滨侨航通信设备有限公司 Time division multi-channel LAPD processor and designing method thereof
CN102137086A (en) * 2010-09-10 2011-07-27 华为技术有限公司 Method, device and system for processing data transmission
US8711181B1 (en) * 2011-11-16 2014-04-29 Google Inc. Pre-fetching map data using variable map tile radius
CN102739555A (en) * 2012-07-24 2012-10-17 迈普通信技术股份有限公司 Data transmission method and data interface card
CN103198856A (en) * 2013-03-22 2013-07-10 烽火通信科技股份有限公司 DDR (Double Data Rate) controller and request scheduling method
CN104754303A (en) * 2015-03-24 2015-07-01 中国科学院长春光学精密机械与物理研究所 Multi-channel data transmission system with high bandwidth and high interference resistance and transmission method
CN107710175A (en) * 2015-04-20 2018-02-16 奈特力斯股份有限公司 Memory module and operating system and method
CN107562743A (en) * 2016-06-30 2018-01-09 中兴通讯股份有限公司 Data storage method and device, and data search request processing method and apparatus
CN108153689A (en) * 2016-12-06 2018-06-12 比亚迪股份有限公司 The method and apparatus of poll arbitration
CN108563808A (en) * 2018-01-05 2018-09-21 中国科学技术大学 Design method of an FPGA-based heterogeneous reconfigurable graph computation accelerator system
WO2020057593A1 (en) * 2018-09-20 2020-03-26 中兴通讯股份有限公司 Convolution processing method, apparatus, and storage medium of convolutional neural network
CN110460410A (en) * 2019-08-22 2019-11-15 成都卫讯科技有限公司 Data transmission method, device, equipment and storage medium based on network management channel
CN110955639A (en) * 2019-10-31 2020-04-03 苏州浪潮智能科技有限公司 Data processing method and device
CN111309353A (en) * 2020-01-20 2020-06-19 山东超越数控电子股份有限公司 Method and device for updating FPGA (field programmable Gate array) firmware of operation board based on server control board

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
刘振兵 (Liu Zhenbing) et al.: "A General Network Protocol Experiment Platform Based on the PCI Bus", 《电子技术应用》 (Application of Electronic Technique) *
夏金军 (Xia Jinjun) et al.: "A Design of a High-Speed Data Cache Based on FPGA", 《微计算机信息》 (Microcomputer Information) *
杨小冬 (Yang Xiaodong): "Design of an E1 Trunk Unit Based on Time-Division Multiplexing", 《专题技术与工程应用》 (Special Topics in Technology and Engineering Applications) *
罗奎 (Luo Kui) et al.: "Online Debugging Technology for a Novel Programmable Logic Controller Based on Field Programmable Gate Array", 《计算机应用》 (Journal of Computer Applications) *

Also Published As

Publication number Publication date
CN111783378B (en) 2022-05-17

Similar Documents

Publication Publication Date Title
US20180123714A1 (en) Method, Device, and System for Sending and Receiving Code Block Data Stream
US11546088B2 (en) Check code processing method, electronic device and storage medium
KR101745456B1 (en) Ecu for transmitting large data in hil test environment, system including the same and method thereof
CN108512785B (en) Data transmission protocol method
JPS5810236A (en) Interface circuit
CN108509652B (en) Data processing system and method
CN103534968A (en) Encoding and decoding method and device for Ethernet physical layer
CN104598194A (en) Initializing method and circuit of head and tail pointer chain table storage
CN113238856B (en) RDMA-based memory management method and device
CN115567589A (en) Compression transmission method, device, equipment and storage medium of JSON data
CN111783378B (en) Data processing method and FPGA
CN107094085B (en) Signaling transmission method and device
CN113852533A (en) Multi-channel data communication system and method and electronic equipment
CN105281943A (en) Webpage-based remote equipment management method and device
US11212045B2 (en) Synchronization method and apparatus
IL270195B2 (en) Wireless communication method, terminal device, and network device
CN111367916A (en) Data storage method and device
CN112953547A (en) Data processing method, device and system
CN113709010B (en) Modbus communication protocol system without frame length limitation
CN110769049B (en) Power distribution terminal and SOE data uploading method thereof
TWI695591B (en) Data stepped compression transmission method and device and electronic equipment for realizing the method
CN108600066B (en) Single bus communication method
CN111126004A (en) Method, device and equipment for generating product sequence code and computer readable storage medium
CN116561041B (en) Single bus communication system and method
CN114928377B (en) Output transmission method, device and equipment for reducing transparent transmission bandwidth of USB data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant