CN117667765A - Data processing method and device based on high-performance memory and calculation integrated architecture


Info

Publication number
CN117667765A
Authority
CN
China
Prior art keywords
data
target data
controller
cache
target
Prior art date
Legal status
Pending
Application number
CN202311454358.7A
Other languages
Chinese (zh)
Inventor
龚超
项世珍
石江
郑芳只
董盛鹏
郑晨忆
郑杰
Current Assignee
CETC 52 Research Institute
Original Assignee
CETC 52 Research Institute
Priority date
Filing date
Publication date
Application filed by CETC 52 Research Institute filed Critical CETC 52 Research Institute
Priority to CN202311454358.7A
Publication of CN117667765A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application discloses a data processing method and device based on a high-performance memory and calculation integrated architecture. The architecture comprises a first controller, a second controller, a first cache, a second cache, and a parsing engine configured for the second cache, and the method comprises the following steps: when the first controller recognizes that network data exists in the user data, filtering the network data based on a preset filtering rule to obtain first target data, and sending the first target data to the first cache; receiving, by the second cache, the first target data sent by the first cache to obtain second target data, and sending the second target data to the second controller through the parsing engine when the second target data is detected to meet a preset condition; and parsing, by the second controller, the second target data and storing the processed second target data to the target terminal. Because the first controller identifies and filters the network data in the front-end user data in real time, the pressure on the second controller when parsing and processing the data is reduced, and the overall performance of the system architecture is improved.

Description

Data processing method and device based on high-performance memory and calculation integrated architecture
Technical Field
The application belongs to the technical field of network data processing, and particularly relates to a data processing method and device based on a high-performance memory and calculation integrated architecture.
Background
With the continuous expansion of integration demands in naval vessel networks, the need for information exchange and sharing among devices has become increasingly prominent, the volume of network information has grown rapidly, and high-performance storage and data analysis capabilities have become increasingly important. Because existing naval vessel network systems implement network data storage and analysis with a CPU-based approach, their storage and analysis performance is limited by the performance of the CPU, making it difficult to meet application scenarios that require high-speed recording with real-time storage and analysis.
Disclosure of Invention
In order to solve the problem that data storage and analysis performance is limited by the performance of the CPU, the present application provides a data processing method and apparatus based on a high-performance integrated architecture, in which a first controller identifies network data in front-end user data and filters it in real time, thereby reducing the pressure on a second controller when parsing and processing the data and improving the overall performance of the system architecture. The technical scheme is as follows:
In a first aspect, the present application provides a data processing method based on a high-performance integrated architecture, where the method is applied to a high-performance integrated architecture, the high-performance integrated architecture includes a first controller, a second controller, a first cache, a second cache, and a parsing engine configured for the second cache, and the method includes:
when the first controller recognizes that network data exists in the user data, filtering the network data based on a preset filtering rule to obtain first target data, and sending the first target data to the first cache;
receiving, by the second cache, the first target data sent by the first cache to obtain second target data, and sending the second target data to the second controller through the parsing engine when the second target data is detected to meet a preset condition;
and parsing, by the second controller, the second target data and storing the processed second target data to the target terminal.
In an optional aspect of the first aspect, filtering the network data based on a preset filtering rule to obtain first target data, including:
filtering the network data based on a preset network address in a preset filtering rule to obtain first filtering data containing the preset network address;
determining a first port range according to all port numbers in the network data, and calculating a second port range according to a preset formula in the preset filtering rule and the first port range;
and filtering the first filtering data based on the second port range to obtain second filtering data, and taking the second filtering data as first target data.
In a further alternative of the first aspect, after the first target data sent by the first cache is received by the second cache to obtain the second target data, and before the second target data is detected to meet the preset condition, the method further includes:
when the second cache detects that the data volume of the second target data reaches a preset data volume threshold, determining that the second target data meets a preset condition;
when the second cache detects that the time corresponding to the second target data reaches a preset time threshold, determining that the second target data meets a preset condition;
when the second cache detects that the time corresponding to the second target data does not reach the preset time threshold, judging whether the data volume of the second target data reaches the preset data volume threshold;
when the second cache detects that the data volume of the second target data does not reach a preset data volume threshold value, acquiring third target data; the third target data comprises second target data;
when the second cache detects that the data volume of the third target data reaches a preset data volume threshold, determining that the third target data meets a preset condition;
the third target data is sent to the second controller by the parsing engine.
In a further alternative of the first aspect, the parsing of the second target data by the second controller includes:
after the memory configured for the second controller receives the second target data, sending an interrupt signal to the second controller so as to place the second controller in an awake state;
and placing a task corresponding to the analysis processing of the second target data at the top of the task queue by the second controller, and executing the task corresponding to the analysis processing of the second target data based on the task queue so as to analyze the second target data.
In yet another alternative of the first aspect, the high-performance integrated architecture further includes a storage engine and a storage array configured for the first cache, and after the network data is filtered based on the preset filtering rule to obtain the first target data and the first target data is sent to the first cache, the method further includes:
when the first cache detects that the first target data meets the preset condition, the first target data is written into the storage array through the storage engine.
In yet another alternative of the first aspect, writing, by the storage engine, the first target data to the storage array includes:
when the storage engine detects that the first target data exists in the first cache, a storage instruction is sent to the second controller;
obtaining a target address by the second controller according to at least two allocable space addresses of the storage array, the storage space corresponding to each allocable space address and the data quantity of the first target data, and sending the target address to the storage engine;
the first target data is written to the storage array by the storage engine according to the target address.
In yet another alternative of the first aspect, the high performance integrated architecture further includes a network card, the method further comprising:
when the first controller recognizes that the control command data exists in the user data, the control command data is transmitted to the network card;
and the second controller analyzes the control command data received by the network card.
In a second aspect, an embodiment of the present application provides a data processing apparatus based on a high-performance integrated architecture, where the apparatus is applied to the high-performance integrated architecture, and the high-performance integrated architecture includes a first controller, a second controller, a first cache, a second cache, and a parsing engine configured for the second cache, where the apparatus includes:
the data filtering module is used for filtering the network data based on a preset filtering rule to obtain first target data when the first controller recognizes that network data exists in the user data, and sending the first target data to the first cache;
the data sending module is used for receiving, by the second cache, the first target data sent by the first cache to obtain second target data, and sending the second target data to the second controller through the parsing engine when the second target data is detected to meet a preset condition;
and the data parsing module is used for parsing, by the second controller, the second target data and storing the processed second target data to the target terminal.
In a third aspect, embodiments of the present application provide a data processing apparatus based on a high performance integrated architecture, including a processor and a memory;
the processor is connected with the memory;
a memory for storing executable program code;
the processor executes a program corresponding to the executable program code by reading the executable program code stored in the memory, for implementing the data processing method based on the high-performance integrated architecture provided in the first aspect of the embodiments of the present application or any implementation manner of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer storage medium storing a computer program, where the computer program includes program instructions that, when executed by a processor, may implement the data processing method based on the high-performance integrated architecture provided in the first aspect or any implementation manner of the first aspect of the embodiments of the present application.
The beneficial effects are as follows:
When data processing is performed based on the high-performance integrated architecture, first, when the first controller recognizes that network data exists in the user data, the network data is filtered based on a preset filtering rule to obtain first target data, and the first target data is sent to the first cache; the second cache receives the first target data sent by the first cache to obtain second target data, and when the second target data is detected to meet the preset condition, the second target data is sent to the second controller through the parsing engine; and the second controller parses the second target data and stores the processed second target data to the target terminal. The first controller identifies the network data in the front-end user data and filters it in real time, thereby reducing the pressure on the second controller when parsing and processing the data and improving the overall performance of the system architecture.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the embodiments are briefly described below. The drawings in the following description are obviously only some embodiments of the present application, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a functional block diagram of a data processing method based on a CPU acquisition, analysis and storage architecture according to an embodiment of the present application;
FIG. 2 is a flow chart of a data processing method based on a high-performance integrated architecture according to an embodiment of the present application;
FIG. 3 is a functional block diagram of a data processing method based on a high-performance integrated architecture according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a data processing apparatus based on a high-performance integrated architecture according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a data processing apparatus based on a high-performance integrated architecture according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In the following description, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The following description provides various embodiments of the present application, and various embodiments may be substituted or combined, so the present application is also intended to encompass all possible combinations of the same and/or different embodiments described. Thus, if one embodiment includes features A, B, and C and another embodiment includes features B and D, the present application should also be considered to include embodiments containing every other possible combination of A, B, C, and D, even though such combinations may not be explicitly recited in the following.
The following description provides examples and does not limit the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements described without departing from the scope of the application. Various examples may omit, replace, or add various procedures or components as appropriate. For example, the described methods may be performed in a different order than described, and various steps may be added, omitted, or combined. Furthermore, features described with respect to some examples may be combined into other examples.
Referring to fig. 1, fig. 1 is a functional block diagram of a data processing method based on a CPU acquisition, analysis and storage architecture according to an embodiment of the present application.
Specifically, in the process in which the CPU directly processes the user data, the network card in the front-end acquisition module identifies the network data in the user data and adds the network data to the network card receiving queue. When the data volume of the network data in the receiving queue reaches a preset data volume threshold, the network data in the receiving queue is sent to the system DDR main memory in the record control module through the PCIe Switch interface in the data exchange module; the application software then dispatches CPU resources to filter and analyze the network data in the system DDR main memory; the processed network data is then stored into the back-end storage array through the PCIe Switch interface.
It should be noted that, because the network data first arrives at the network card and is added to the network card receiving queue, and is only sent to the system DDR main memory through the data exchange module once the data volume in the receiving queue reaches the preset data volume threshold, the time stamp generated when the network data is collected contains errors. Moreover, throughout this scheme CPU resources are used both for data analysis and for guaranteeing real-time data storage, so the capability of the CPU becomes the performance bottleneck of the entire data link; and because of its special application environment, the recording acquisition system adopts an embedded industrial-control CPU, whose processing capability is difficult to match to application scenarios requiring high-speed recording with real-time storage and analysis.
In order to solve the above-mentioned technical problems, in the data processing method based on the high-performance integrated architecture provided by the embodiment of the present application, the first controller identifies the network data in the front-end user data, so that the network data is filtered in real time, thereby reducing the pressure of the second controller in analyzing and processing the data, and improving the overall performance of the system architecture.
Referring next to fig. 2, fig. 2 is a flow chart of a data processing method based on a high-performance integrated architecture according to an embodiment of the present application.
As shown in fig. 2, the data processing method based on the high-performance integrated architecture at least includes the following steps:
step 202, when the first controller identifies that network data exists in the user data, filtering the network data based on a preset filtering rule to obtain first target data, and sending the first target data to a first cache.
The data processing method based on the high-performance integrated architecture in the embodiment of the present application may be applied to, but is not limited to, a high-performance integrated architecture, where the high-performance integrated architecture includes, but is not limited to, a first controller, a second controller, a first cache, a second cache, and a parsing engine configured for the second cache. The first controller is, for example, but not limited to, a field-programmable gate array (FPGA); the second controller is, for example, but not limited to, a central processing unit (CPU); the parsing engine is, for example, but not limited to, a direct memory access (DMA) parsing engine; and the storage engine is, for example, but not limited to, a DMA storage engine. The first controller identifies the network data in the front-end user data and filters it in real time, thereby reducing the pressure on the second controller when parsing and processing the data and improving the overall performance of the system architecture.
Specifically, before the first controller starts to collect the front-end user data, the application software may configure a filtering rule in a register of the first controller, so that when the first controller recognizes that the front-end user data includes network data, it filters the network data according to the configured filtering rule, takes the data conforming to the filtering rule as the first target data, and sends the first target data to the first cache. The filtering rule template can be configured in five-tuple form, and each group of filtering rules includes: source IP, destination IP, source port, destination port, and protocol type. Both the IP and port settings support ranges (min-max), and the protocol types include, but are not limited to, TCP, UDP, ICMP, IGMP, and the like. For example, but not limited to, if the source IP in the configured filtering rule is 111.60.85.248, the data whose source IP is 111.60.85.248 in the network data is taken as the first target data. In addition, the filtering rule template can also, but is not limited to, filter on the network data content; for example, but not limited to, if the configured filtering rule presets the 100th byte of a 1024-byte packet to 0xAA, the data whose 100th byte is 0xAA in the network data is taken as the first target data.
It should be noted that the application software mentioned above is a program that filters data and analyzes data according to actual requirements in the high-performance integrated architecture, and the front-end user data mentioned above may be network signal data sent by other terminal devices in the network.
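To make the five-tuple form concrete, the following is a minimal C sketch of how such a rule and its min-max range matching might be represented in the application software before being written into the first controller's register; the struct layout, field names, and the rule_match helper are illustrative assumptions, not the patent's actual register format.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative five-tuple filter rule with min-max ranges, as described
 * above; the layout is an assumption, not the actual register format. */
typedef struct {
    uint32_t src_ip_min, src_ip_max;       /* source IP range            */
    uint32_t dst_ip_min, dst_ip_max;       /* destination IP range       */
    uint16_t src_port_min, src_port_max;   /* source port range          */
    uint16_t dst_port_min, dst_port_max;   /* destination port range     */
    uint8_t  protocol;                     /* e.g. TCP, UDP, ICMP, IGMP  */
} filter_rule_t;

/* Returns true when a packet's five-tuple falls inside the rule. */
static bool rule_match(const filter_rule_t *r,
                       uint32_t src_ip, uint32_t dst_ip,
                       uint16_t src_port, uint16_t dst_port,
                       uint8_t protocol)
{
    return protocol == r->protocol &&
           src_ip   >= r->src_ip_min   && src_ip   <= r->src_ip_max   &&
           dst_ip   >= r->dst_ip_min   && dst_ip   <= r->dst_ip_max   &&
           src_port >= r->src_port_min && src_port <= r->src_port_max &&
           dst_port >= r->dst_port_min && dst_port <= r->dst_port_max;
}
```

A rule such as "source IP 111.60.85.248 only" would then be expressed as a degenerate range whose min and max are both that address.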
As an optional implementation manner of the present application, filtering the network data based on a preset filtering rule to obtain first target data includes:
filtering the network data based on a preset network address in a preset filtering rule to obtain first filtering data containing the preset network address;
determining a first port range according to all port numbers in the network data, and calculating a second port range according to a preset formula in a preset filtering rule and the first port range;
and filtering the first filtering data based on the second port range to obtain second filtering data, and taking the second filtering data as first target data.
Specifically, in the process of filtering network data, filtering the network data in front-end user data based on a source IP address range or a destination IP address range preset in a preset filtering rule, and taking the network data in the preset source IP address range or the destination IP address range as first filtering data.
Then, the maximum value and the minimum value of all source ports or destination ports in the network data are taken as the first port range, and the first port range is calculated according to the preset formula in the preset filtering rule to obtain the second port range; the first filtered data is then filtered according to the second port range, and the first filtered data falling within the second port range is taken as the first target data. The preset formula is, for example, but not limited to, y = a×x + b, where y is the second port range, x is the first port range, and a and b are adjustable parameters.
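Since the first port range is an interval, applying the formula y = a×x + b endpoint-wise yields the second port range. The following C sketch illustrates this under that assumption; the port_range_t type and all names are hypothetical.

```c
#include <stdint.h>

/* Hypothetical endpoint-wise application of the preset formula
 * y = a*x + b to the first port range; types and names are assumptions
 * made for illustration. */
typedef struct { uint32_t min, max; } port_range_t;

static port_range_t second_port_range(port_range_t first,
                                      uint32_t a, uint32_t b)
{
    port_range_t second;
    second.min = a * first.min + b;   /* lower bound scaled and shifted */
    second.max = a * first.max + b;   /* upper bound scaled and shifted */
    return second;
}
```

A packet in the first filtered data then belongs to the second filtered data when its port falls within [second.min, second.max].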
Step 204, the second cache receives the first target data sent by the first cache to obtain second target data, and sends the second target data to the second controller through the parsing engine when detecting that the second target data meets the preset condition.
Specifically, after the first controller filters the network data according to the configured filtering rule to obtain the first target data and sends it to the first cache, the second cache may obtain the second target data sent by the first cache, and the data volume accumulated in the second cache, or the time taken to obtain the second target data, is calculated in real time. When the data volume reaches the preset data volume threshold of the parsing engine or the elapsed time exceeds the preset time threshold, the second target data is determined to meet the preset condition: the parsing engine initiates an interrupt notification to the application software, the application software initiates a start notification to the parsing engine after receiving the interrupt notification, and the parsing engine, once started, sends the second target data in the second cache to the memory configured for the second controller.
As still another alternative of the embodiment of the present application, after the first target data sent by the first cache is received by the second cache to obtain the second target data, and before the second target data is detected to meet the preset condition, the method further includes:
when the second cache detects that the data volume of the second target data reaches a preset data volume threshold, determining that the second target data meets a preset condition;
when the second cache detects that the time corresponding to the second target data reaches a preset time threshold, determining that the second target data meets a preset condition;
when the second cache detects that the time corresponding to the second target data does not reach the preset time threshold, judging whether the data volume of the second target data reaches the preset data volume threshold;
when the second cache detects that the data volume of the second target data does not reach a preset data volume threshold value, acquiring third target data; the third target data comprises second target data;
when the second cache detects that the data volume of the third target data reaches a preset data volume threshold, determining that the third target data meets a preset condition;
the third target data is sent to the second controller by the parsing engine.
Specifically, after the first controller sends the first target data to the first cache, the second cache may obtain the second target data sent by the first cache, and the data volume of the second target data is calculated in real time; when the first controller detects that the data volume of the second target data obtained by the second cache reaches the preset data volume threshold, it determines that the second target data meets the preset condition and starts the parsing engine to send the second target data obtained by the second cache to the memory configured for the second controller.
When the first controller detects that the time taken by the second cache to acquire the second target data reaches the preset time threshold, it determines that the second target data meets the preset condition and starts the parsing engine to send the second target data to the memory configured for the second controller.
When the first controller detects that the time taken by the second cache to acquire the second target data has not reached the preset time threshold, it judges whether the data volume of the second target data reaches the preset data volume threshold.
Then, when the first controller detects that the data volume of the second target data acquired by the second cache does not reach the preset data volume threshold, it determines that the second target data does not meet the preset condition, so the second cache continues to acquire network data filtered by the first controller according to the preset filtering rule, and the filtered network data now in the second cache is taken as the third target data; the third target data includes the second target data.
Then, when the first controller detects that the data volume of the third target data reaches the preset data volume threshold, it determines that the third target data meets the preset condition and starts the parsing engine to send the third target data to the memory configured for the second controller; likewise, when the first controller detects that the time taken by the second cache to acquire the third target data reaches the preset time threshold, it determines that the third target data meets the preset condition and starts the parsing engine to send the third target data to the memory configured for the second controller.
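The trigger logic above is a classic "flush when full or when time is up" batching check. A minimal C sketch of that decision is shown below; the batch_state_t type and all field names are assumptions made for illustration, not the patent's actual implementation.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative "flush when full or timed out" check for the second
 * cache, matching the preset-condition logic above; names are assumed. */
typedef struct {
    uint64_t bytes_buffered;     /* data volume accumulated in the cache */
    uint64_t elapsed_ms;         /* time since the first byte arrived    */
    uint64_t volume_threshold;   /* preset data volume threshold         */
    uint64_t time_threshold_ms;  /* preset time threshold                */
} batch_state_t;

/* Returns true when the buffered target data meets the preset condition
 * and should be handed to the parsing engine (DMA) for transfer. */
static bool meets_preset_condition(const batch_state_t *s)
{
    if (s->bytes_buffered >= s->volume_threshold)
        return true;                 /* data volume threshold reached    */
    if (s->elapsed_ms >= s->time_threshold_ms)
        return true;                 /* time threshold reached           */
    return false;                    /* keep accumulating (third target  */
                                     /* data) and re-check later         */
}
```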
Step 206, parsing the second target data by the second controller, and storing the processed second target data to the target terminal.
Specifically, after the parsing engine is started and sends the second target data from the second cache to the memory configured for the second controller, the application software may further call the second controller to parse the second target data and store the parsed second target data to the target terminal, where the target terminal is, for example, but not limited to, a database, a storage array, or an external device.
As still another alternative of the embodiment of the present application, the parsing, by the second controller, of the second target data includes:
after the memory configured for the second controller receives the second target data, sending an interrupt signal to the second controller so as to place the second controller in an awake state;
and placing a task corresponding to the analysis processing of the second target data at the top of the task queue by the second controller, and executing the task corresponding to the analysis processing of the second target data based on the task queue so as to analyze the second target data.
Specifically, in the process of parsing the second target data, the application software initiates an interrupt request to the second controller, so that the second controller is in an awake state.
Then, after receiving the interrupt request, the second controller places the task corresponding to the parsing of the second target data at the head of the task queue as the highest-priority task, and executes the tasks in the task queue in descending order of priority; that is, the second target data is parsed first as the current highest-priority task.
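As one way to picture this interrupt-driven, head-of-queue scheduling, the following C sketch promotes the parse task to the front of a bare singly-linked task queue; the task_t type, the handler name, and the queue itself are illustrative assumptions rather than the patent's actual scheduler.

```c
#include <stddef.h>

/* Minimal sketch of the scheduling described above: on the interrupt,
 * the parse task is pushed to the head of the task queue so that it is
 * executed first. Types and names are assumptions for illustration. */
typedef struct task {
    void (*run)(void *arg);   /* task body, e.g. parse second target data */
    void *arg;
    struct task *next;
} task_t;

static task_t *task_queue_head = NULL;

/* Interrupt handler: promote the parse task to highest priority. */
static void on_data_ready_irq(task_t *parse_task)
{
    parse_task->next = task_queue_head;   /* queue head = highest priority */
    task_queue_head  = parse_task;
}

/* Scheduler loop: execute tasks in queue order (head runs first). */
static void run_tasks(void)
{
    while (task_queue_head) {
        task_t *t = task_queue_head;
        task_queue_head = t->next;
        t->run(t->arg);
    }
}
```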
As still another alternative of the embodiment of the present application, the high-performance integrated architecture further includes a storage engine and a storage array configured for the first cache, and after the network data is filtered based on the preset filtering rule to obtain the first target data and the first target data is sent to the first cache, the method further includes:
when the first cache detects that the first target data meets the preset condition, the first target data is written into the storage array through the storage engine.
Specifically, when the first controller identifies that network data exists in the front-end user data, it filters the network data based on the preset filtering rule to obtain the first target data and sends the first target data to the first cache; the first controller calculates in real time the data volume of the first target data and the time taken to obtain it. When the data volume reaches the preset data volume threshold of the storage engine or the elapsed time exceeds the preset time threshold, the storage engine initiates an interrupt notification to the application software, the application software initiates a start notification to the storage engine after receiving the interrupt notification, and the storage engine, once started, sends the first target data in the first cache to the storage array.
As yet another alternative of an embodiment of the present application, writing, by a storage engine, first target data to a storage array includes:
when the storage engine detects that the first target data exists in the first cache, a storage instruction is sent to the second controller;
obtaining a target address by the second controller according to at least two allocable space addresses of the storage array, the storage space corresponding to each allocable space address and the data quantity of the first target data, and sending the target address to the storage engine;
the first target data is written to the storage array by the storage engine according to the target address.
Specifically, in the process of writing the first target data into the storage array by the storage engine, when the first target data is detected to exist in the first cache, a storage instruction is sent to the second controller; after receiving the storage instruction, the second controller divides at least two allocable spaces out of the remaining storage space of the storage array according to the data volume of the first target data for storing the first target data, sends the allocable space address corresponding to each allocable space to the storage engine as the target address, and starts the storage engine to write the first target data in the first cache to the corresponding target address in the storage array.
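One way to realize this target-address selection is a first-fit walk over the storage array's allocable spaces until the accumulated capacity covers the first target data; the C sketch below assumes that policy, and all types and names are illustrative rather than the patent's actual allocator.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch of target-address selection: the second controller
 * walks the storage array's allocable spaces and picks addresses whose
 * combined capacity covers the first target data. Types are assumed. */
typedef struct {
    uint64_t base_addr;   /* allocable space address in the storage array */
    uint64_t capacity;    /* free bytes available at that address         */
} alloc_space_t;

/* Fills targets[] with enough spaces to hold data_len bytes; returns the
 * number of target addresses chosen, or 0 when capacity is insufficient. */
static size_t choose_target_addresses(const alloc_space_t *spaces,
                                      size_t n_spaces, uint64_t data_len,
                                      alloc_space_t *targets,
                                      size_t max_targets)
{
    uint64_t remaining = data_len;
    size_t   used = 0;
    for (size_t i = 0; i < n_spaces && remaining > 0 && used < max_targets; i++) {
        if (spaces[i].capacity == 0)
            continue;                         /* skip exhausted spaces    */
        targets[used++] = spaces[i];
        remaining = (spaces[i].capacity >= remaining)
                        ? 0 : remaining - spaces[i].capacity;
    }
    return remaining == 0 ? used : 0;         /* 0 signals lack of space  */
}
```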
As yet another alternative of the embodiment of the present application, the high-performance integrated architecture further includes a network card, and the method further includes:
when the first controller recognizes that control command data exists in the user data, the control command data is transparently transmitted to the network card;
and the second controller analyzes the control command data received by the network card.
Specifically, when the first controller recognizes that control command data exists in the front-end user data, the control command data is transparently transmitted to the network card receiving queue; the application software then sends the control command data from the network card receiving queue to the memory configured for the second controller and sends an interrupt request to the second controller.
When the second controller receives the interrupt request, the task corresponding to the parsing of the control command data is placed in the task queue as the highest-priority task, and the tasks in the task queue are executed in descending order of priority; that is, the control command data is parsed first as the current highest-priority task.
It should be noted that transparent transmission is a data transmission mode in which data is passed directly from the source to the target without any processing or parsing of the data content.
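Putting the two front-end paths side by side, the dispatch performed by the first controller can be sketched in C as follows; the enum, function names, and stub bodies are hypothetical stand-ins for the filter path and the network card receive queue described above.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch of the first controller's front-end dispatch:
 * network data goes down the filter path, control command data is
 * transparently passed to the network card receive queue. */
typedef enum { DATA_NETWORK, DATA_CONTROL_CMD } data_kind_t;

static void filter_and_cache(const uint8_t *buf, size_t len)
{
    (void)buf; (void)len;   /* stand-in: rule filtering + first cache   */
}

static void forward_to_nic_queue(const uint8_t *buf, size_t len)
{
    (void)buf; (void)len;   /* stand-in: network card receive queue     */
}

static void dispatch_user_data(data_kind_t kind,
                               const uint8_t *buf, size_t len)
{
    if (kind == DATA_NETWORK)
        filter_and_cache(buf, len);       /* real-time rule-based filter */
    else
        forward_to_nic_queue(buf, len);   /* transparent transmission:   */
                                          /* content is never parsed     */
}
```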
Referring to fig. 3, fig. 3 is a functional block diagram of a data processing method based on a high-performance integrated architecture according to an embodiment of the present application. The first controller may be an FPGA, the first cache may be a DDR1 cache, the storage engine may be DMA engine 1, the second controller may be a CPU, the memory configured for the CPU may be the system DDR main memory, the second cache may be a DDR2 cache, and the parsing engine may be DMA engine 2.
Specifically, after the FPGA in the front-end acquisition module recognizes that network data exists in the user data, it performs filtering according to the preset filtering rules to obtain target data; the target data is then stored into the DDR1 cache and copied from the DDR1 cache into the DDR2 cache, and the data volume in the DDR2 cache, or the time taken to acquire the target data, is calculated in real time. When the data volume reaches the preset data volume threshold of DMA engine 2 or the elapsed time exceeds the preset time threshold, the target data is determined to meet the preset condition, and DMA engine 2 is started to send the target data in the DDR2 cache to the system DDR main memory of the record control module through the PCIe Switch interface in the data exchange module; the application software then dispatches CPU resources to parse the target data in the system DDR main memory; the processed network data may then be stored in the back-end storage array, stored in a database, or sent to the user's external device via the PCIe Switch interface.
In addition, after the target data is stored in the DDR1 cache, the data volume, or the time taken for the DDR1 cache to acquire the target data, is calculated in real time. When the data volume reaches the preset data volume threshold of DMA engine 1 or the elapsed time exceeds the preset time threshold, the target data is determined to meet the preset condition, and DMA engine 1 is started to store the high-speed network data in the DDR1 cache into the back-end storage array through the PCIe Switch interface in the data exchange module.
After the FPGA in the front-end acquisition module recognizes that control command data exists in the user data, the control command data is transparently transmitted to the network card receiving queue in the record control module; the application software then sends the control command data from the network card receiving queue to the system DDR main memory and dispatches CPU resources to parse the control command data.
A data processing apparatus based on a high-performance integrated architecture according to an embodiment of the present application will now be described in detail with reference to fig. 4. It should be noted that the data processing apparatus shown in fig. 4 is used to execute the method of the embodiment shown in fig. 2 of the present application; for convenience of explanation, only the portion relevant to the embodiment of the present application is shown, and for undisclosed technical details, please refer to the embodiment shown in fig. 2.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a data processing apparatus based on a high-performance integrated architecture according to an embodiment of the present application.
As shown in fig. 4, the data processing apparatus based on the high-performance integrated architecture includes at least a data filtering module 401, a data sending module 402, and a data parsing module 403. The apparatus is applied to the high-performance integrated architecture, and the high-performance integrated architecture includes a first controller, a second controller, a first cache, a second cache, and a parsing engine configured for the second cache, where:
the data filtering module 401 is used for filtering the network data based on a preset filtering rule to obtain first target data when the first controller recognizes that network data exists in the user data, and sending the first target data to the first cache;
the data sending module 402 is configured to receive, by the second cache, the first target data sent by the first cache to obtain second target data, and send, through the parsing engine, the second target data to the second controller when detecting that the second target data meets a preset condition;
and the data parsing module 403 is used for parsing, by the second controller, the second target data and storing the processed second target data to the target terminal.
In an alternative of the second aspect, the data filtering module specifically includes:
the first filtering unit is used for filtering the network data based on the preset network address in the preset filtering rule so as to obtain first filtering data containing the preset network address;
the computing unit is used for determining a first port range according to all port numbers in the network data, and computing a second port range according to a preset formula in a preset filtering rule and the first port range;
and the second filtering unit is used for filtering the first filtering data based on the second port range to obtain second filtering data, and taking the second filtering data as first target data.
In yet another alternative of the second aspect, the data sending module further includes:
the first judging unit is used for determining that the second target data meets the preset condition when the second cache detects that the data volume of the second target data reaches the preset data volume threshold;
the second judging unit is used for determining that the second target data meets the preset condition when the second buffer detects that the time corresponding to the second target data reaches the preset time threshold;
a third judging unit, configured to judge whether the data amount of the second target data reaches a preset data amount threshold when the second buffer detects that the time corresponding to the second target data does not reach the preset time threshold;
when the second cache detects that the data volume of the second target data does not reach the preset data volume threshold, acquiring third target data; the third target data comprises the second target data;
when the second cache detects that the data volume of the third target data reaches a preset data volume threshold, determining that the third target data meets a preset condition;
the third target data is sent to the second controller by the parsing engine.
In yet another alternative of the second aspect, the data parsing module further includes:
the interrupt unit is used for sending an interrupt signal to the second controller after the memory configured for the second controller receives the second target data, so as to place the second controller in an awake state;
and the first analysis unit is used for placing a task corresponding to the analysis processing of the second target data at the top of the task queue by the second controller, and executing the task corresponding to the analysis processing of the second target data based on the task queue so as to analyze the second target data.
In yet another alternative of the second aspect, the high-performance integrated architecture further includes a storage engine and a storage array configured for the first cache, and the apparatus further includes a high-speed data storage module, specifically including:
a storage unit, used for writing the first target data into the storage array through the storage engine when the first cache detects that the first target data meets the preset condition.
In a further alternative of the second aspect, the storage unit is specifically configured to:
when the storage engine detects that the first target data exists in the first cache, a storage instruction is sent to the second controller;
obtaining a target address by the second controller according to at least two allocable space addresses of the storage array, the storage space corresponding to each allocable space address and the data quantity of the first target data, and sending the target address to the storage engine;
the first target data is written to the storage array by the storage engine according to the target address.
In a further alternative of the second aspect, the high-performance integrated architecture further includes a network card, and the apparatus further includes a control data sending module, specifically including:
the second sending unit is used for transmitting the control command data to the network card when the first controller recognizes that the control command data exists in the user data;
and the second analysis unit is used for analyzing and processing the control command data received by the network card by the second controller.
It will be apparent to those skilled in the art that the embodiments of the present application may be implemented in software and/or hardware. "Unit" and "module" in this specification refer to software and/or hardware capable of performing a specific function, either alone or in combination with other components, such as field-programmable gate arrays (FPGAs), integrated circuits (ICs), and the like.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a data processing apparatus based on a high-performance integrated architecture according to an embodiment of the present application. As shown in fig. 5, the high-performance integrated architecture-based data processing apparatus 500 may include: at least one processor 501, at least one network interface 504, a user interface 503, a memory 505, and at least one communication bus 502.
Wherein the communication bus 502 may be used to enable connectivity communication of the various components described above.
The user interface 503 may include keys, and the optional user interface may also include a standard wired interface, a wireless interface, among others.
The network interface 504 may include, but is not limited to, a bluetooth module, an NFC module, a Wi-Fi module, and the like.
The processor 501 may include one or more processing cores. The processor 501 uses various interfaces and lines to connect the various parts of the overall electronic device 500, and performs the various functions of the apparatus 500 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 505 and invoking data stored in the memory 505. Optionally, the processor 501 may be implemented in at least one hardware form of DSP, FPGA, or PLA. The processor 501 may integrate one or a combination of a CPU, a GPU, a modem, and the like. The CPU mainly handles the operating system, the user interface, the application programs, and so on; the GPU is used for rendering and drawing the content to be displayed by the display screen; and the modem is used to handle wireless communication. It will be appreciated that the modem may also not be integrated into the processor 501 and may instead be implemented by a single chip.
The memory 505 may include RAM or ROM. Optionally, the memory 505 includes a non-transitory computer-readable medium. The memory 505 may be used to store instructions, programs, code sets, or instruction sets. The memory 505 may include a stored-program area and a stored-data area, where the stored-program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the above-described method embodiments, and the like; the stored-data area may store the data referred to in the above method embodiments. The memory 505 may optionally also be at least one storage device located remotely from the processor 501. As shown in fig. 5, the memory 505, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a data processing application based on the high-performance integrated architecture.
In particular, the processor 501 may be configured to invoke the data processing application based on the high-performance integrated architecture stored in the memory 505, and specifically perform the following operations:
when the first controller recognizes that network data exists in the user data, filtering the network data based on a preset filtering rule to obtain first target data, and sending the first target data to the first cache;
receiving, by the second cache, the first target data sent by the first cache to obtain second target data, and sending the second target data to the second controller through the parsing engine when the second target data is detected to meet a preset condition;
and parsing, by the second controller, the second target data and storing the processed second target data to the target terminal.
In some possible embodiments, filtering the network data based on a preset filtering rule to obtain first target data includes:
filtering the network data based on a preset network address in a preset filtering rule to obtain first filtering data containing the preset network address;
determining a first port range according to all port numbers in the network data, and calculating a second port range according to a preset formula in a preset filtering rule and the first port range;
and filtering the first filtering data based on the second port range to obtain second filtering data, and taking the second filtering data as first target data.
In some possible embodiments, after the first target data sent by the first cache is received by the second cache to obtain the second target data, and before the second target data is detected to meet the preset condition, the method further includes:
when the second cache detects that the data volume of the second target data reaches a preset data volume threshold, determining that the second target data meets a preset condition;
when the second cache detects that the time corresponding to the second target data reaches a preset time threshold, determining that the second target data meets a preset condition;
when the second cache detects that the time corresponding to the second target data does not reach the preset time threshold, judging whether the data volume of the second target data reaches the preset data volume threshold;
when the second cache detects that the data volume of the second target data does not reach a preset data volume threshold value, acquiring third target data; the third target data comprises second target data;
when the second cache detects that the data volume of the third target data reaches a preset data volume threshold, determining that the third target data meets a preset condition;
the third target data is sent to the second controller by the parsing engine.
In some possible embodiments, the parsing, by the second controller, of the second target data includes:
after the memory configured for the second controller receives the second target data, sending an interrupt signal to the second controller so as to place the second controller in an awake state;
and placing a task corresponding to the analysis processing of the second target data at the top of the task queue by the second controller, and executing the task corresponding to the analysis processing of the second target data based on the task queue so as to analyze the second target data.
In some possible embodiments, the high-performance integrated architecture further includes a storage engine and a storage array configured for the first cache, and after the network data is filtered based on the preset filtering rule to obtain the first target data and the first target data is sent to the first cache, the method further includes:
when the first cache detects that the first target data meets the preset condition, the first target data is written into the storage array through the storage engine.
In some possible embodiments, writing, by the storage engine, the first target data to the storage array includes:
when the storage engine detects that the first target data exists in the first cache, a storage instruction is sent to the second controller;
obtaining a target address by the second controller according to at least two allocable space addresses of the storage array, the storage space corresponding to each allocable space address and the data quantity of the first target data, and sending the target address to the storage engine;
the first target data is written to the storage array by the storage engine according to the target address.
In some possible embodiments, the high-performance integrated architecture further comprises a network card, and the method further comprises:
when the first controller recognizes that the control command data exists in the user data, the control command data is transmitted to the network card;
and the second controller analyzes the control command data received by the network card.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the above method. The computer-readable storage medium may include, among other things, any type of disk, including floppy disks, optical disks, DVDs, CD-ROMs, micro-drives, and magneto-optical disks, as well as ROM, RAM, EPROM, EEPROM, DRAM, VRAM, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any other type of media or device suitable for storing instructions and/or data.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, such as combining multiple units or components or integrating them into another system, or omitting or not performing some features. Alternatively, the coupling or direct coupling or communication connection shown or discussed between components may be indirect coupling or communication connection through some service interfaces, devices, or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present application. And the aforementioned memory includes: a U-disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the various methods of the above embodiments may be performed by hardware associated with a program that is stored in a computer readable memory, which may include: flash disk, read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), magnetic or optical disk, and the like.
The foregoing is merely an exemplary embodiment of the present disclosure and is not intended to limit its scope; equivalent changes and modifications made according to the teachings of this disclosure fall within its scope. Other embodiments of the disclosure will be readily apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the claims.

Claims (10)

1. A data processing method based on a high-performance integrated architecture, wherein the method is applied to a high-performance integrated architecture comprising a first controller, a second controller, a first cache, a second cache, and a parsing engine configured for the second cache, the method comprising:
when the first controller recognizes that network data exists in user data, filtering the network data based on a preset filtering rule to obtain first target data, and sending the first target data to the first cache;
receiving, by the second cache, the first target data sent by the first cache to obtain second target data, and sending the second target data to the second controller through the parsing engine when it is detected that the second target data satisfies a preset condition;
and parsing, by the second controller, the second target data, and storing the processed second target data to a target terminal.
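As an informal illustration of the data path in claim 1, the following minimal Python sketch models the three stages: the first controller filters recognized network data into the first cache, the second cache forwards the accumulated batch through the parsing engine once a preset condition holds, and the second controller parses and stores the result. All names, the batch-size condition, and the record shape are hypothetical; the claim specifies none of them.

```python
from collections import deque

PRESET_BATCH = 4  # hypothetical preset condition: flush once 4 records are buffered

def first_controller(user_data, rule, first_cache):
    """Recognize network data in the user data, filter it, and push it to the first cache."""
    for item in user_data:
        if item.get("type") == "network" and rule(item):   # preset filtering rule
            first_cache.append(item)                       # first target data

def second_cache_stage(first_cache, parsing_engine):
    """Accumulate second target data and forward it once the preset condition holds."""
    batch = list(first_cache)                              # second target data
    if len(batch) >= PRESET_BATCH:                         # preset condition met
        parsing_engine(batch)

def second_controller(batch, target_terminal):
    """Parse the second target data and store the processed result."""
    target_terminal.extend({"parsed": item["payload"]} for item in batch)

first_cache, terminal = deque(), []
first_controller([{"type": "network", "payload": p} for p in "abcd"],
                 lambda d: d["payload"] != "", first_cache)
second_cache_stage(first_cache, lambda b: second_controller(b, terminal))
print(terminal)  # four parsed records reach the target terminal
```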
2. The method of claim 1, wherein the filtering the network data based on the preset filtering rule to obtain the first target data comprises:
filtering the network data based on a preset network address in the preset filtering rule to obtain first filtered data containing the preset network address;
determining a first port range according to all port numbers in the network data, and calculating a second port range according to a preset formula in the preset filtering rule and the first port range;
and filtering the first filtered data based on the second port range to obtain second filtered data, and taking the second filtered data as the first target data.
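Claim 2 describes a two-stage filter: match a preset network address, then narrow by a second port range computed from the span of all observed ports. A sketch under stated assumptions follows; the low + 1 / high - 1 formula is an invented placeholder, since the claim leaves the preset formula unspecified.

```python
def filter_network_data(packets, preset_addr):
    """Two-stage filter in the spirit of claim 2; returns the first target data."""
    # Stage 1: keep packets that contain the preset network address.
    first_filtered = [p for p in packets if p["addr"] == preset_addr]

    # Stage 2: the first port range spans all port numbers in the network data.
    ports = [p["port"] for p in packets]
    low, high = min(ports), max(ports)

    # Hypothetical preset formula: shrink the range by a fixed margin
    # to obtain the second port range.
    low2, high2 = low + 1, high - 1

    return [p for p in first_filtered if low2 <= p["port"] <= high2]

packets = [{"addr": "10.0.0.1", "port": p} for p in (20, 25, 30, 99)]
print(filter_network_data(packets, "10.0.0.1"))  # ports 25 and 30 survive
```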
3. The method of claim 1, wherein, after the first target data sent by the first cache is received by the second cache to obtain the second target data, and before it is detected that the second target data satisfies the preset condition, the method further comprises:
when the second cache detects that the data volume of the second target data reaches a preset data volume threshold, determining that the second target data satisfies the preset condition;
when the second cache detects that the time corresponding to the second target data reaches a preset time threshold, determining that the second target data satisfies the preset condition;
when the second cache detects that the time corresponding to the second target data does not reach the preset time threshold, judging whether the data volume of the second target data reaches the preset data volume threshold;
when the second cache detects that the data volume of the second target data does not reach the preset data volume threshold, obtaining third target data, wherein the third target data includes the second target data;
when the second cache detects that the data volume of the third target data reaches the preset data volume threshold, determining that the third target data satisfies the preset condition;
and sending the third target data to the second controller through the parsing engine.
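The flush logic of claim 3 reduces to a two-threshold predicate evaluated by the second cache. A minimal sketch, assuming byte counts and wall-clock ages (both threshold values are hypothetical):

```python
import time

SIZE_THRESHOLD = 1024   # preset data volume threshold in bytes (hypothetical value)
TIME_THRESHOLD = 0.5    # preset time threshold in seconds (hypothetical value)

def satisfies_preset_condition(buffered_bytes, first_arrival):
    """Claim 3's flush test, run by the second cache on its buffered target data."""
    if buffered_bytes >= SIZE_THRESHOLD:                      # volume threshold reached
        return True
    if time.monotonic() - first_arrival >= TIME_THRESHOLD:    # time threshold reached
        return True
    return False                                              # keep buffering
```

When the predicate returns False, the second cache keeps appending incoming records, so the buffer becomes the third target data (which contains the second target data), and the same test is rerun until it passes.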
4. The method of claim 1, wherein the parsing, by the second controller, of the second target data comprises:
after the memory configured for the second controller receives the second target data, sending an interrupt signal to the second controller so that the second controller enters an awake state;
and placing, by the second controller, the task corresponding to the parsing of the second target data at the top of a task queue, and executing, based on the task queue, the task corresponding to the parsing of the second target data so as to parse the second target data.
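Claim 4's wake-and-prioritize behavior can be mimicked with an ordinary double-ended queue: the interrupt handler puts the parsing task at the head so it runs before any queued work. This is only a software analogy with invented names, not the patented controller logic itself.

```python
from collections import deque

class SecondController:
    """Claim 4 in miniature: wake on interrupt, run the parse task before anything else."""

    def __init__(self):
        self.task_queue = deque()
        self.awake = False

    def interrupt(self, second_target_data):
        """Raised after the controller's memory receives the second target data."""
        self.awake = True
        # The parsing task is placed at the top of the task queue.
        self.task_queue.appendleft(
            lambda: print("parsed", len(second_target_data), "records"))

    def run(self):
        while self.awake and self.task_queue:
            self.task_queue.popleft()()   # execute tasks from the head of the queue

ctrl = SecondController()
ctrl.task_queue.append(lambda: print("background task"))
ctrl.interrupt([1, 2, 3])
ctrl.run()   # prints the parse result first, then the background task
```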
5. The method of claim 1, wherein the high-performance integrated architecture further comprises a storage engine and a storage array configured for the first cache, and after the filtering the network data based on the preset filtering rule to obtain the first target data and the sending the first target data to the first cache, the method further comprises:
when the first cache detects that the first target data satisfies the preset condition, writing the first target data into the storage array through the storage engine.
6. The method of claim 5, wherein the writing, by the storage engine, the first target data into the storage array comprises:
when the storage engine detects that the first target data exists in the first cache, sending a storage instruction to the second controller;
obtaining, by the second controller, a target address according to at least two allocatable space addresses of the storage array, the storage space corresponding to each allocatable space address, and the data volume of the first target data, and sending the target address to the storage engine;
and writing, by the storage engine, the first target data into the storage array according to the target address.
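The address selection in claim 6 is a placement decision over at least two allocatable regions. The sketch below uses a best-fit rule, which is an assumption: the claim only requires that the target address be derived from the allocatable addresses, their storage space, and the data volume.

```python
def pick_target_address(allocatable, data_volume):
    """Choose a target address for the first target data.

    allocatable maps each allocatable space address of the storage array to its
    free storage space in bytes. Best-fit is an assumed policy, not mandated
    by the claim.
    """
    fitting = {addr: space for addr, space in allocatable.items()
               if space >= data_volume}
    if not fitting:
        raise RuntimeError("no allocatable region is large enough")
    return min(fitting, key=fitting.get)   # smallest region that still fits

print(hex(pick_target_address({0x1000: 4096, 0x2000: 512, 0x3000: 2048}, 1500)))
# -> 0x3000, the tightest fit for 1500 bytes
```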
7. The method of claim 1, wherein the high-performance integrated architecture further comprises a network card, and the method further comprises:
when the first controller recognizes that control command data exists in the user data, transmitting the control command data to the network card;
and parsing, by the second controller, the control command data received by the network card.
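For completeness, claim 7's dispatch in the first controller amounts to routing by data type. A toy sketch with hypothetical type tags:

```python
def dispatch(user_data, network_card, filter_path):
    """Claim 7's split in the first controller: control commands bypass filtering."""
    for item in user_data:
        if item["type"] == "control":
            network_card.append(item)    # later parsed by the second controller
        elif item["type"] == "network":
            filter_path(item)            # claim 1's filtering path

nic, filtered = [], []
dispatch([{"type": "control", "cmd": "flush"},
          {"type": "network", "payload": "x"}], nic, filtered.append)
print(len(nic), len(filtered))   # -> 1 1
```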
8. A data processing apparatus based on a high-performance integrated architecture, wherein the apparatus is applied to a high-performance integrated architecture comprising a first controller, a second controller, a first cache, a second cache, and a parsing engine configured for the second cache, the apparatus comprising:
a data filtering module, configured to, when the first controller recognizes that network data exists in user data, filter the network data based on a preset filtering rule to obtain first target data, and send the first target data to the first cache;
a data sending module, configured to receive, by the second cache, the first target data sent by the first cache to obtain second target data, and send the second target data to the second controller through the parsing engine when it is detected that the second target data satisfies a preset condition;
and a data parsing module, configured to parse, by the second controller, the second target data, and store the processed second target data to a target terminal.
9. A data processing device based on a high-performance integrated architecture, comprising a processor and a memory;
wherein the processor is connected to the memory;
the memory is configured to store executable program code;
and the processor, by reading the executable program code stored in the memory, runs a program corresponding to the executable program code, for performing the steps of the method according to any one of claims 1-7.
10. A computer-readable storage medium having a computer program stored thereon, wherein, when the computer program runs on a computer or a processor, the computer or the processor is caused to perform the steps of the method according to any one of claims 1-7.
CN202311454358.7A 2023-11-03 2023-11-03 Data processing method and device based on high-performance memory and calculation integrated architecture Pending CN117667765A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311454358.7A CN117667765A (en) 2023-11-03 2023-11-03 Data processing method and device based on high-performance memory and calculation integrated architecture


Publications (1)

Publication Number Publication Date
CN117667765A 2024-03-08

Family

ID=90083478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311454358.7A Pending CN117667765A (en) 2023-11-03 2023-11-03 Data processing method and device based on high-performance memory and calculation integrated architecture

Country Status (1)

Country Link
CN (1) CN117667765A (en)

Similar Documents

Publication Publication Date Title
EP3204862A1 (en) Emulated endpoint configuration
CN111049762A (en) Data acquisition method and device, storage medium and switch
CN105335309B (en) A kind of data transmission method and computer
CN108563518A (en) Slave communication means, device, terminal device and storage medium
CN114356223B (en) Memory access method and device, chip and electronic equipment
CN114039875B (en) Data acquisition method, device and system based on eBPF technology
US20130110960A1 (en) Method and system for accessing storage device
US9612934B2 (en) Network processor with distributed trace buffers
CN112650558B (en) Data processing method and device, readable medium and electronic equipment
CN112100090A (en) Data access request processing method, device, medium and memory mapping controller
CN113760473A (en) Priority processing method, processor, processing chip, circuit board and electronic equipment
CN107025146B (en) A kind of document generating method, device and system
CN112765084A (en) Computer device, virtualization acceleration device, data transmission method, and storage medium
CN113590512A (en) Self-starting DMA device capable of directly connecting peripheral equipment and application
WO2022032990A1 (en) Command information transmission method, system, and apparatus, and readable storage medium
US20180173639A1 (en) Memory access method, apparatus, and system
CN117667765A (en) Data processing method and device based on high-performance memory and calculation integrated architecture
JP2012089948A (en) Data transmission device and data transmission method
CN109905486B (en) Application program identification display method and device
CN116610262A (en) Method, device, equipment and medium for reducing SSD sequential reading delay
CN207424866U (en) A kind of data communication system between kernel based on heterogeneous multi-nucleus processor
CN109800202B (en) PCIE (peripheral component interface express) -based data transmission system, method and device
CN109120665B (en) High-speed data packet acquisition method and device
CN117373501B (en) Statistical service execution rate improving method and related device
CN106057226B (en) The access control method of dual-port storage system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination