CN117041168A - QoS queue scheduling realization method and device, storage medium and processor - Google Patents


Info

Publication number
CN117041168A
CN117041168A (application CN202311298478.2A)
Authority
CN
China
Prior art keywords
data
flow
model
service
lstm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311298478.2A
Other languages
Chinese (zh)
Inventor
李晓轩
朱晨
张燕
杨强
程月宝
肖瑾
梁金伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Nanfei Microelectronics Co ltd
Original Assignee
Changzhou Nanfei Microelectronics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Nanfei Microelectronics Co ltd filed Critical Changzhou Nanfei Microelectronics Co ltd
Priority to CN202311298478.2A
Publication of CN117041168A
Legal status: Pending


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2441 Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/0442 Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6275 Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/6295 Queue scheduling characterised by scheduling criteria using multiple queues, one for each individual QoS, connection, flow or priority
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/30 Peripheral units, e.g. input or output ports
    • H04L 49/3009 Header conversion, routing tables or routing tags

Abstract

The invention provides a QoS queue scheduling realization method, a QoS queue scheduling realization device, a storage medium and a processor, and belongs to the technical field of data communications. The QoS queue scheduling implementation method comprises the following steps: acquiring a data stream; classifying the plurality of service flows it contains by adopting a preset data flow classification model, the preset data flow classification model being constructed based on a CNN and an LSTM; determining the processing priority of each class of service flow according to preset scheduling requirements; setting corresponding ACL rules in the switch in turn according to the processing priority of each category of service flow; and carrying out QoS queue scheduling on the data flow based on the ACL rules corresponding to each category of service flow. Through this scheduling, an early switch can provide different qualities of service to different types of applications, realizing scheduling for different types of service flows, better meeting the requirements of various service flows, and improving the user experience.

Description

QoS queue scheduling realization method and device, storage medium and processor
Technical Field
The present invention relates to the field of data communications technologies, and in particular, to a QoS queue scheduling implementation method, a QoS queue scheduling implementation device, a machine-readable storage medium, and a processor.
Background
In a network, some applications or data traffic may be more important and sensitive than others. For example, confidential data flows inside an enterprise require higher security. By providing differentiated Quality of Service (QoS), preferential handling of such sensitive data can be achieved, thereby improving the security and confidentiality of the network.
Providing scheduling for different types of traffic flows has therefore become an important issue for network switches. Such scheduling may be based on factors such as the type and priority of the traffic flow. Through this scheduling, the network switch can provide different quality of service for different types of applications, thereby improving the user experience. The DiffServ model is a class-based QoS technique whose purpose is to classify and prioritize different data flows in order to guarantee the quality of service of the network. The differentiated services model can be divided into two parts: classification and queue scheduling.
However, the QoS services of early switches suffer from some limitations compared with modern switches. For example, they typically do not support complex QoS policies such as DiffServ and IntServ. The granularity of traffic differentiation in early switches was relatively coarse: they could not provide finer-grained QoS services or differentiate and manage different types of data flows in more detail.
Therefore, existing QoS scheduling methods cannot be implemented on early, lower-performance conventional switches, so that such early switches are unable to schedule different types of traffic flows.
Disclosure of Invention
The embodiments of the present application aim to provide a QoS queue scheduling realization method, a QoS queue scheduling realization device, a machine-readable storage medium and a processor. The QoS queue scheduling realization method realizes scheduling for different types of service flows and better meets the requirements of various service flows, thereby improving the user experience.
In order to achieve the above object, a first aspect of the present application provides a QoS queue scheduling implementation method, including:
acquiring a data stream, wherein the data stream comprises a plurality of service streams;
classifying the plurality of service flows by adopting a preset data flow classification model to obtain a plurality of classes of service flows;
determining the processing priority of each class of service flow according to preset scheduling requirements;
setting corresponding ACL rules in the switch in turn according to the processing priority of the service flows of each category;
based on the ACL rules corresponding to the service flows of each category, QoS queue scheduling is carried out on the data flows;
The preset data flow classification model is constructed based on CNN and LSTM.
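The classify, prioritize and configure steps above can be sketched in code as follows. This is a minimal illustration only: the classifier stub, the class names, the priority table and the ACL rule format are hypothetical placeholders standing in for the CNN-LSTM model and the switch configuration described in this application.

```python
def classify(flow):
    # Placeholder for the preset CNN-LSTM data flow classification model.
    return flow["type"]

def build_acl_rules(flows, priority_of):
    """Group flows by class, order the classes by processing priority,
    and emit one ACL rule per class in descending priority order."""
    classes = {classify(f) for f in flows}
    ordered = sorted(classes, key=lambda c: priority_of[c], reverse=True)
    # Each rule matches one class; lower queue index = served first.
    return [{"match": c, "queue": rank} for rank, c in enumerate(ordered)]

# Illustrative flows and scheduling requirements (hypothetical values).
flows = [{"type": "video"}, {"type": "audio"}, {"type": "bulk"}]
priority_of = {"audio": 3, "video": 2, "bulk": 1}
rules = build_acl_rules(flows, priority_of)
```

The highest-priority class ends up first in the rule list, mirroring the "set ACL rules in turn according to processing priority" step.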
In the embodiment of the present application, the process of constructing the preset data stream classification model includes:
acquiring a service flow sample;
dividing the service flow sample into a training sample set and a verification sample set;
training by adopting a CNN-LSTM model based on the training sample set to obtain an initial model;
and testing the initial model by adopting the verification sample set to obtain a data stream classification model.
In the embodiment of the application, the service flow sample comprises a plurality of historical service flows and classification labels corresponding to the historical service flows;
training is carried out by adopting a CNN-LSTM model based on the training sample set to obtain an initial model, and the method comprises the following steps:
data cleaning is carried out on the training sample set, and a cleaned data stream set is obtained;
respectively carrying out data preprocessing on each historical service flow in the cleaned data flow set to obtain a data flow characteristic diagram corresponding to each historical service flow;
respectively inputting the data flow characteristic diagrams into a CNN-LSTM model to obtain prediction classification corresponding to each historical service flow;
and adjusting parameters of the CNN-LSTM model according to the prediction classification and classification labels corresponding to each historical service flow to obtain an initial model.
In an embodiment of the present application, the performing data cleaning on the training sample set includes:
and cleaning the data of the training sample set based on the self-similarity characteristic of the network traffic.
In the embodiment of the application, the data preprocessing is performed on the historical service flow in the cleaned data flow set to obtain a data flow characteristic diagram corresponding to the historical service flow, which comprises the following steps:
performing feature selection on the historical service flows in the cleaned data flow set to obtain a plurality of original features;
combining the plurality of original features, and combining new features obtained by combining the features with the plurality of original features to obtain a new feature set;
and generating a data flow characteristic diagram corresponding to the historical service flow based on the new characteristic set.
In an embodiment of the present application, the generating, based on the new feature set, a data flow feature map corresponding to a historical service flow includes:
and respectively converting each feature in the new feature set into a pixel point in the image to generate a data flow feature map corresponding to the historical service flow.
In the embodiment of the application, the framework of the CNN-LSTM model comprises a CNN unit, a Dropout layer, an LSTM unit and a full connection layer; the output end of the CNN unit is connected with the input end of the Dropout layer, the output end of the Dropout layer is connected with the input end of the LSTM unit, and the output end of the LSTM unit is connected with the full connection layer; wherein the LSTM unit adopts a model decision architecture based on repeated LSTM.
The second aspect of the present application provides a QoS queue scheduling implementing apparatus, including:
the system comprises an acquisition module, a data processing module and a data processing module, wherein the acquisition module is used for acquiring a data stream, and the data stream comprises a plurality of service streams;
the classification module is used for classifying the plurality of service flows by adopting a preset data flow classification model to obtain a plurality of classes of service flows; the preset data flow classification model is constructed based on CNN and LSTM;
the determining module is used for determining the processing priority of the service flows of each category according to the preset scheduling requirement;
the setting module is used for sequentially setting corresponding ACL rules in the switch according to the processing priority of the service flows of each category;
and the scheduling module is used for performing QoS queue scheduling on the data flow based on ACL rules corresponding to the service flows of each category.
A third aspect of the present application provides a processor configured to perform the QoS queue scheduling implementation method described above.
A fourth aspect of the application provides a machine-readable storage medium having stored thereon instructions that, when executed by a processor, cause the processor to be configured to perform the QoS queue scheduling implementation method described above.
According to the technical scheme, a data flow is acquired, the preset data flow classification model is adopted to classify the plurality of service flows, and the processing priority of each category of service flow is determined according to preset scheduling requirements; corresponding ACL rules are set in the switch in turn according to the processing priority of each category of service flow; and QoS queue scheduling is carried out on the data flow based on the ACL rules corresponding to each category of service flow. Because the constructed ACL rules are based on the classification produced by the data flow classification model, the service flows are ordered by priority according to the scheduling requirements and the corresponding ACL rules are set in the switch in sequence. The network therefore processes service flows according to their different priorities: the service flows are classified and separated, and high-priority service flows are processed first, so that the requirements of various service flows are better met. Through this scheduling, an early switch can provide different quality of service for different types of applications and realize scheduling for different types of service flows, thereby improving the user experience. The data flow classification model combines the advantages of a convolutional neural network and a long short-term memory network, can capture the spatial and temporal characteristics of the data at the same time, is conducive to better classification, and improves classification accuracy.
Additional features and advantages of embodiments of the application will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain, without limitation, the embodiments of the application. In the drawings:
fig. 1 schematically illustrates an application environment of a QoS queue scheduling implementation method according to an embodiment of the present application;
fig. 2 schematically illustrates a QoS scheduling method construction flow diagram according to an embodiment of the present application;
FIG. 3 schematically illustrates a process diagram for building a data flow classification model according to an embodiment of the application;
FIG. 4 schematically illustrates a schematic diagram of constructing a traffic classification model based on CNNs and LSTMs, according to an embodiment of the application;
FIG. 5 schematically illustrates a schematic diagram of a repeated LSTM based model decision architecture in accordance with an embodiment of the application;
fig. 6 schematically shows a block diagram of a QoS queue scheduling implementing apparatus according to an embodiment of the application;
fig. 7 schematically shows an internal structural view of a computer device according to an embodiment of the present application.
Description of the reference numerals
410-an acquisition module; 420-classification module; 430-determining a module; 440-setting up a module; 450-scheduling module; a01-a processor; a02-a network interface; a03-an internal memory; a04-a display screen; a05-an input device; a06—a nonvolatile storage medium; b01-operating system; b02-computer program.
Detailed Description
The following describes the detailed implementation of the embodiments of the present application with reference to the drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the application, are not intended to limit the application.
It should be noted that if descriptions of "first", "second", etc. appear in the embodiments of the present application, they are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but only on the basis that the combinations can be realized by those skilled in the art; when technical solutions are contradictory or cannot be realized, their combination should be considered absent and outside the scope of protection claimed in the present application.
Referring to fig. 1 and fig. 2, fig. 1 schematically illustrates an application environment of a QoS queue scheduling implementation method according to an embodiment of the present application, and fig. 2 schematically illustrates a QoS scheduling method construction flow diagram according to an embodiment of the present application. The embodiment provides a QoS queue scheduling implementation method, which includes the following steps:
step 210: acquiring a data stream, wherein the data stream comprises a plurality of service streams; in this embodiment, the data stream may be captured in the switch, and in a specific implementation, the data stream mainly includes a plurality of service streams, where the service streams may be of different types, such as video streams, audio streams, and so on.
Step 220: classifying the plurality of service flows by adopting a preset data flow classification model to obtain a plurality of classes of service flows; the preset data flow classification model is constructed based on CNN and LSTM.
In this embodiment, the data flow classification model may be pre-constructed, and is used to classify the traffic flow. In order to improve classification accuracy, a data flow classification model can be constructed based on a neural network model according to a large number of historical data flows.
The data flow classification model can be constructed based on a Convolutional Neural Network (CNN) and a Long Short-Term Memory network (LSTM), so that the model combines the advantages of both and can capture the spatial and temporal characteristics in the data flow at the same time, thereby facilitating more accurate classification and improving the classification accuracy of the data flow classification model.
Referring to fig. 3, fig. 3 schematically illustrates a process diagram of establishing a data flow classification model according to an embodiment of the application. In some embodiments, the process of constructing the preset data stream classification model includes:
firstly, acquiring a service flow sample; the traffic flow samples may be data flow samples obtained from switches within the network.
Then, dividing the service flow sample into a training sample set and a verification sample set;
then, training by adopting a CNN-LSTM model based on the training sample set to obtain an initial model;
and finally, testing the initial model by adopting the verification sample set to obtain a data flow classification model.
In this embodiment, the traffic flow samples may be divided at a ratio of 10:1 into a training sample set and a validation sample set. The training sample set is input into the CNN-LSTM model for continuous iterative training, while the validation sample set is used to continuously validate the CNN-LSTM model; the model with the best result during training is finally saved as the data stream classification model. Training may use mini-batch learning: as shown in fig. 3, Mini-batch SGD Training refers to small-batch stochastic gradient descent training. Specifically, a batch of data (called a mini-batch) is selected from the training data, and learning is then performed on each mini-batch. Validation may use ten-fold cross-validation to test the accuracy of the algorithm. Mini-batch learning and ten-fold cross-validation belong to the prior art and are not described in detail herein.
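The 10:1 split and the mini-batch iteration described above can be sketched as follows; the sample data, the fixed seed and the batch size are illustrative assumptions, not values taken from this application.

```python
import random

def split_samples(samples, ratio=10):
    """Split samples into a training set and a validation set at roughly
    ratio:1 (the embodiment uses 10:1)."""
    shuffled = samples[:]
    random.Random(0).shuffle(shuffled)          # fixed seed for repeatability
    n_val = max(1, len(shuffled) // (ratio + 1))
    return shuffled[n_val:], shuffled[:n_val]

def minibatches(data, batch_size):
    """Yield successive mini-batches, as consumed by mini-batch SGD."""
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

# 110 dummy samples -> 100 training, 10 validation.
train, val = split_samples(list(range(110)))
batches = list(minibatches(train, 32))
```

Each mini-batch would then be fed to one gradient step of the CNN-LSTM model.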
The CNN-LSTM model integrates a convolutional neural network with a long short-term memory network: the data is preprocessed and then input into the convolutional neural network for feature extraction, after which the LSTM realizes classification prediction. The model combines the advantages of both networks and can capture the spatial and temporal characteristics of the data at the same time. Specifically, since the CNN is a feature extractor, it can extract the spatial features of the data. For data with spatial features, such as images and videos, the CNN can extract local features through convolution operations, thereby better capturing the spatial information of the data. Meanwhile, the CNN can reduce the dimensionality of the data through pooling operations, reducing the computational load of the model and improving its efficiency. In this embodiment, the CNN is mainly used for feature extraction of the service flow: by performing convolution operations on the data packets of the service flow, the spatial features of the service flow can be extracted. The LSTM acts as a classifier that is able to capture the temporal features of the data. In traffic classification, the LSTM can discover timing relationships between flows by memorizing and forgetting preceding information, thereby better capturing the time information of the data. Meanwhile, the LSTM controls the flow of information through mechanisms such as the output gate, input gate and forget gate, avoiding confusion and loss of information and improving the accuracy and robustness of the model.
The LSTM is mainly used for classifying the service flow in the embodiment, the service flow data subjected to the feature extraction is input into the LSTM, and the time sequence features of the service flow are learned and classified through the LSTM, so that the automatic classification of the service flow is realized.
In some embodiments, the obtained service flow sample includes a plurality of historical service flows and classification labels corresponding to the historical service flows. In this embodiment, a plurality of historical service flows can be obtained as raw data from switches inside the network. The raw data also contains classification labels for the traffic flows, which can be added by manual labeling. By manually labeling the tags, different types of data streams can be separated into categories, e.g., video streams, audio streams, etc. It should be noted that the service flows may also be classified and marked according to the importance of the service, as required.
The training is performed by adopting a CNN-LSTM model based on the training sample set to obtain an initial model, and the method comprises the following steps:
firstly, carrying out data cleaning on the training sample set to obtain a cleaned data stream set;
in this embodiment, the burstiness of computer data communications is considered, which results in a relatively single traffic sample. Meanwhile, under the condition of unbalanced traffic class, the model may bias to predicting the class with higher occurrence frequency in the training process, and the condition of missing prediction or misprediction is easy to occur for the class with lower occurrence frequency. This can lead to reduced accuracy of the model, especially for those categories that occur less frequently. Therefore, in order to improve the accuracy of the model, data cleaning can be performed on the training sample set.
In some embodiments, the training sample set may be data-cleaned based on the self-similarity characteristics of network traffic. In this embodiment, to ensure the diversity and balance of the data, the data in the training sample set is processed based on the self-similarity characteristics of network traffic, and traffic flows with relatively high similarity are discarded. The similarity between flows is evaluated mainly on the basis of fractal theory, using a Hurst index estimation method for discrete sequences. The specific cleaning process is as follows:
Let $X=\{X_t,\ t=1,2,\dots,N\}$ be a flow taken at random over a certain interval. It is divided into $N/m$ non-overlapping segments of length $m$, and each segment is averaged to obtain the $m$-order aggregation sequence

$$X^{(m)}(k)=\frac{1}{m}\sum_{t=(k-1)m+1}^{km}X_t,\qquad k=1,2,\dots,N/m.$$

According to the fractal characteristics of network traffic, there is the definition

$$\mathrm{Var}\left(X^{(m)}\right)=\mathrm{Var}(X)\,m^{-\beta},\qquad 0<\beta<1,$$

where $\mathrm{Var}(X^{(m)})$ is the variance of the $m$-order aggregation sequence and $\mathrm{Var}(X)$ is the variance of the original sequence. Taking logarithms of both sides of the above formula gives

$$\log \mathrm{Var}\left(X^{(m)}\right)=\log \mathrm{Var}(X)-\beta\log m,$$

where the slope of the log-log plot is $-\beta$, and $H=1-\beta/2$ is the estimated value of the Hurst index.

When $0.5<H<1$, the random process has self-similarity, and the greater the value of $H$, the higher the degree of self-similarity. In this embodiment, as shown in fig. 3, data streams whose traffic flow proportion is greater than 80% and whose Hurst index is greater than 0.5 may be screened out for random discarding.
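The aggregated-variance estimation of the Hurst index described above can be sketched in plain Python as follows. The block sizes and the white-noise sanity check are illustrative choices under the standard aggregated-variance method, not parameters taken from this application.

```python
import math
import random

def aggregate(x, m):
    """m-order aggregation: the mean of each non-overlapping block of size m."""
    k = len(x) // m
    return [sum(x[i * m:(i + 1) * m]) / m for i in range(k)]

def variance(x):
    mu = sum(x) / len(x)
    return sum((v - mu) ** 2 for v in x) / len(x)

def hurst_aggregated_variance(x, block_sizes):
    """Fit the slope -beta of log Var(X^(m)) vs log m by least squares,
    then return H = 1 - beta/2."""
    pts = [(math.log(m), math.log(variance(aggregate(x, m))))
           for m in block_sizes]
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    slope = (sum((px - mx) * (py - my) for px, py in pts)
             / sum((px - mx) ** 2 for px, _ in pts))
    beta = -slope
    return 1 - beta / 2

# Sanity check: i.i.d. white noise has no long-range dependence,
# so its Hurst estimate should land near 0.5.
rng = random.Random(1)
noise = [rng.gauss(0.0, 1.0) for _ in range(4096)]
h = hurst_aggregated_variance(noise, [2, 4, 8, 16, 32])
```

A strongly self-similar trace would instead yield an estimate well above 0.5, which is the screening criterion used in this embodiment.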
Step two, respectively carrying out data preprocessing on each historical service flow in the cleaned data flow set to obtain a data flow characteristic diagram corresponding to each historical service flow;
In the present embodiment, considering that the CNN is a feature extractor that performs feature extraction on data with spatial features, such as images and videos, the historical traffic flow must be converted into image data. The data preprocessing therefore mainly includes three parts: feature selection, feature generation and feature conversion.
In some embodiments, performing data preprocessing on the historical traffic flow in the cleaned data flow set to obtain a data flow feature map corresponding to the historical traffic flow, including:
the characteristic selection step: performing feature selection on the historical service flows in the cleaned data flow set to obtain a plurality of original features; in this embodiment, the feature selection is to select a partial flow feature, and the selected feature needs to be able to find a corresponding matching item in the access control list (Access Control List, ACL). Thus, the selected features may be: IP, port number, protocol number, average rate of packets, packet size, etc. By selecting the features, the original features meet the requirement of ACLs, so that the corresponding ACL rules can be correctly set in the switch in the follow-up process.
A feature generation processing step: the plurality of original features are combined, and the new features obtained by the combination are merged with the plurality of original features to obtain a new feature set. In this embodiment, the combination may be performed by adding or multiplying features pairwise.
The feature conversion step: and generating a data flow characteristic diagram corresponding to the historical service flow based on the new characteristic set. The generating of the data flow feature map may be respectively converting each feature in the new feature set into a pixel point in the image, so as to generate a data flow feature map corresponding to the historical service flow.
In this embodiment, feature conversion means converting the feature value corresponding to each feature into the pixel value of a pixel point in an image, so that a data flow feature map can be obtained. For example: if the number of original features is $n$, pairwise sums and pairwise products can be selected from the $n$ original features without repetition, and these new features then form a new feature set together with the $n$ original features. The number of features thus generated is

$$N=n+2\binom{n}{2}=n+n(n-1)=n^{2},$$

where $n$ represents the number of original features and $N$ represents the number of generated features. The $N$ features are converted into an image of $n\times n$ pixel points, which is taken as the input of the CNN-LSTM model.
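The feature-expansion count above ($n$ originals plus pairwise sums and products giving $n^{2}$ features, reshaped into an $n\times n$ image) can be checked with a short sketch; the example feature values are arbitrary, and pixel-value scaling is omitted.

```python
from itertools import combinations

def expand_features(feats):
    """Combine n original features with all pairwise sums and pairwise
    products: n + 2*C(n,2) = n*n features, reshaped to an n x n 'image'."""
    n = len(feats)
    new = list(feats)
    new += [a + b for a, b in combinations(feats, 2)]   # C(n,2) pairwise sums
    new += [a * b for a, b in combinations(feats, 2)]   # C(n,2) pairwise products
    assert len(new) == n * n
    return [new[i * n:(i + 1) * n] for i in range(n)]   # n rows of n pixels

image = expand_features([1.0, 2.0, 3.0])  # 3 originals -> 3x3 image
```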
It should be noted that, for each historical service flow in the training sample set, when data preprocessing is performed, feature selection, feature generation processing and feature conversion are required according to the above method.
In the implementation process, data preprocessing of the historical service flow yields the corresponding data flow feature map, which facilitates spatial feature extraction by the CNN and thus better classification. Meanwhile, in the feature generation stage, expanding the feature set increases the number of pixel points in the generated image, further improving classification accuracy.
Thirdly, respectively inputting the data flow characteristic diagrams into a CNN-LSTM model to obtain prediction classification corresponding to each historical service flow;
and fourthly, adjusting parameters of the CNN-LSTM model according to the prediction classification and classification labels corresponding to each historical service flow to obtain an initial model. In this embodiment, the training sample set is input into the CNN-LSTM model and is continuously trained iteratively, and finally an initial model is obtained.
Referring to fig. 4, fig. 4 schematically illustrates a schematic diagram of constructing a traffic classification model based on CNNs and LSTMs according to an embodiment of the present application. The CNN-LSTM model can be obtained by adding a Dropout layer, an LSTM layer and a full connection layer on the basis of the CNN model. Namely: the framework of the CNN-LSTM model comprises a CNN unit, a Dropout layer, an LSTM unit and a full connection layer; the output end of the CNN unit is connected with the input end of the Dropout layer, the output end of the Dropout layer is connected with the input end of the LSTM unit, and the output end of the LSTM unit is connected with the full connection layer.
In this embodiment, the CNN reduces the number of network parameters through convolution and pooling operations while retaining most of the deep features of the data. The CNN architecture employed consists of a convolutional layer, a batch normalization (Batch Normalization, BN) layer, a Leaky Rectified Linear Unit (LeakyReLU) layer, and a pooling layer.
When the input image passes through the convolution layers, each convolution layer carries out a two-dimensional convolution of the input image with the convolution kernel, and an excitation result output image is obtained through the convolution response and a nonlinear excitation function. A convolution operation includes a filter $w$ with a window of width $h$ over the image, which outputs a new feature after filtering each window of the flow set. The new feature $c_i$ can be concretely represented as:

$c_i = f(w \cdot x_{i:i+h-1} + b)$
wherein $f$ is a nonlinear activation function, $x_{i:i+h-1}$ indicates the $i$-th window of the feature image, and $b$ is the bias term of the filter. Sliding the filter over the whole input in this way yields the feature map:

$c = [c_1, c_2, \ldots, c_{n-h+1}]$
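As an illustrative sketch only (the patent publishes no code), the convolution response followed by a nonlinear excitation can be written in plain Python; the 2×2 kernel, the toy image, and the ReLU-style excitation below are assumptions chosen for brevity, whereas the embodiment uses 5×5 kernels and LeakyReLU:

```python
def conv2d(image, kernel, bias=0.0, f=lambda v: max(0.0, v)):
    """'Valid' 2-D convolution followed by a nonlinear excitation f."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # convolution response: elementwise product over the window, plus bias
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(f(s + bias))  # nonlinear excitation
        out.append(row)
    return out

img = [[1, 2, 0],
       [0, 1, 3],
       [4, 0, 1]]
k = [[1, 0],
     [0, 1]]  # toy 2x2 kernel (an assumption; the embodiment's kernels are 5x5)
feat = conv2d(img, k)
```

Each output pixel is one windowed response $f(w \cdot x + b)$, matching the formula above term by term.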
the data then passes through the BN layer and the LeakyReLU activation layer. The BN layer is used for normalizing the distribution of input data, accelerating model training and improving the generalization performance of the model. The basic principle is that each small batch of input data is normalized, and the input data is converted into standard normal distribution with the mean value of 0 and the variance of 1. The method has the advantages that the distribution of input data is more stable, the influence of internal covariate displacement is reduced, and therefore the training efficiency and generalization performance of the model are improved. The eigenvectors after BN layer processing are:
wherein $\mu$ and $\sigma^2$ respectively represent the mean and variance of the input data; $\epsilon$ is a very small constant used to avoid a denominator of 0; $\gamma$ and $\beta$ are learned parameters used to scale and shift the normalized data.
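The BN transformation can be sketched directly from the formula; the mini-batch values below, and the default $\gamma = 1$, $\beta = 0$, are illustrative assumptions:

```python
import math

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a mini-batch to mean 0 / variance 1, then scale and shift."""
    mu = sum(x) / len(x)                          # batch mean
    var = sum((v - mu) ** 2 for v in x) / len(x)  # batch variance
    return [gamma * (v - mu) / math.sqrt(var + eps) + beta for v in x]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
mean = sum(out) / len(out)                 # ~0 after normalization
var = sum(v * v for v in out) / len(out)   # ~1 after normalization
```

With the default $\gamma$ and $\beta$, the output batch has mean approximately 0 and variance approximately 1, as the formula promises.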
LeakyReLU is a variant of the rectified linear unit (Rectified Linear Unit, ReLU) that retains a small slope when the input is less than zero, rather than truncating it to zero. The ReLU enables the neural network to learn nonlinear functions, fitting complex data distributions better, and introduces sparsity, reducing model complexity and improving generalization. However, when the input is less than zero, the output of the ReLU layer is zero, which may cause gradient vanishing and neuron "death"; the LeakyReLU layer avoids this by keeping some negative values, improving the generalization capability of the model and reducing the risk of overfitting. The LeakyReLU function is:

$\mathrm{LeakyReLU}(x) = \begin{cases} x, & x \ge 0 \\ \alpha x, & x < 0 \end{cases}$
wherein $\alpha$ is a constant less than 1, here set to 0.01.
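A minimal sketch of LeakyReLU with the slope $\alpha = 0.01$ stated above (the test inputs are assumptions):

```python
def leaky_relu(x, alpha=0.01):
    """Pass positive inputs through; keep a small slope alpha for negatives."""
    return x if x >= 0 else alpha * x

pos = leaky_relu(5.0)    # unchanged for x >= 0
neg = leaky_relu(-3.0)   # scaled by alpha for x < 0
```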
The data is then processed by the pooling layer. The pooling operation is a downsampling operation that extracts a certain attribute from each sampling window as a low-dimensional output; optional attributes include the maximum value, the average value, etc. In this embodiment, the maximum value is selected as the sampled attribute, and the obtained feature set is:

$\hat{c} = \max\{c\}$
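Max pooling can be sketched as follows; the 2×2 window and stride of 2 are common defaults assumed here, not values stated in the embodiment:

```python
def max_pool(fmap, size=2, stride=2):
    """Downsample: take the maximum inside each non-overlapping window."""
    out = []
    for i in range(0, len(fmap) - size + 1, stride):
        row = []
        for j in range(0, len(fmap[0]) - size + 1, stride):
            row.append(max(fmap[a][b]
                           for a in range(i, i + size)
                           for b in range(j, j + size)))
        out.append(row)
    return out

pooled = max_pool([[1, 3, 2, 4],
                   [5, 6, 1, 0],
                   [1, 2, 9, 8],
                   [0, 1, 7, 6]])
```

Each 2×2 window collapses to its maximum, halving both spatial dimensions while keeping the strongest response.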
Through the CNN unit, data extraction and information mining can be carried out on the input data to reduce redundant information, and effective information can be better extracted through dimension increase and dimension reduction. After passing through the CNN unit, the data enters a Dropout layer; in this process, the outputs of some neurons are randomly discarded, reducing the dependency among neurons and improving the robustness and generalization performance of the model. Specifically, the Dropout layer randomly selects some neurons with a certain probability p and sets their outputs to 0. The advantage of this is that the model becomes more robust, the risk of overfitting is reduced, and the expressive power of the model can be increased.
$r \sim \mathrm{Bernoulli}(1-p), \quad \tilde{y} = r \odot y$

wherein $r$ is a binary random vector whose elements take the value 0 or 1, indicating whether each neuron is retained; $y$ is the output of the previous layer, and $\tilde{y}$ is the result after Dropout.
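Training-time Dropout as described above can be sketched in plain Python; the seed and discard probability are assumptions, and the inverted-dropout rescaling by $1/(1-p)$ used by many frameworks is omitted for brevity:

```python
import random

def dropout(y, p=0.5, rng=random):
    """Randomly zero each neuron's output with probability p (training phase)."""
    r = [0 if rng.random() < p else 1 for _ in y]  # binary keep/drop mask
    return [ri * yi for ri, yi in zip(r, y)]

rng = random.Random(0)  # fixed seed so the sketch is reproducible
out = dropout([1.0, 2.0, 3.0, 4.0], p=0.5, rng=rng)
```

Every surviving element is unchanged and every dropped element becomes 0, exactly the elementwise product $r \odot y$.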
LSTM is a special recurrent neural network (Recurrent Neural Network, RNN) model. The RNN model suffers from a long-term dependence problem during training, i.e. the output at the current moment is affected by the inputs at many previous moments. Such dependency causes the gradient to be multiplied by the weight matrix repeatedly during back propagation, leading to gradient vanishing or gradient explosion. LSTM solves the long-term dependence problem by introducing gate functions, thereby improving the expressive capacity and generalization performance of the model. The key part of LSTM is the LSTM unit, which includes three parts: a forget gate $f_t$, an input gate $i_t$, and an output gate $o_t$. The forget gate controls which information is discarded from the cell state, the input gate controls which new information is added to the cell state, and the output gate controls what is output. Through the control of these gate functions, LSTM can effectively handle long-term dependency problems in sequence data.
Referring to FIG. 5, FIG. 5 schematically illustrates a model decision architecture based on repeated LSTM according to an embodiment of the application. In this embodiment, the LSTM unit may adopt a model decision based on repeated LSTM, whose architecture is shown in fig. 5. From the architecture in fig. 5, the parameter updates can be obtained by the following formulas:
$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$

$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$

$\tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$

$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$

$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$

$h_t = o_t \odot \tanh(C_t)$

wherein $\sigma$ (sigmoid) is a nonlinear activation function; $W$ and $b$ are respectively the weight matrix and bias term of the corresponding gate; $\tanh$ is an activation function; $f_t$, $i_t$ and $o_t$ respectively denote the forget gate, input gate and output gate; $\tilde{C}_t$ represents the candidate value computed from the input; $C_t$ represents the saved cell state; $h_t$ is the output of the LSTM unit.
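The gate equations can be sketched for the scalar case in plain Python; the weight and bias values are arbitrary assumptions, and a real LSTM unit uses weight matrices over 64 hidden nodes rather than scalars:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step for the scalar case.

    W maps gate name -> (weight for h_{t-1}, weight for x_t); b maps gate -> bias.
    """
    f_t = sigmoid(W['f'][0] * h_prev + W['f'][1] * x_t + b['f'])      # forget gate
    i_t = sigmoid(W['i'][0] * h_prev + W['i'][1] * x_t + b['i'])      # input gate
    c_hat = math.tanh(W['c'][0] * h_prev + W['c'][1] * x_t + b['c'])  # candidate
    c_t = f_t * c_prev + i_t * c_hat                                  # new cell state
    o_t = sigmoid(W['o'][0] * h_prev + W['o'][1] * x_t + b['o'])      # output gate
    h_t = o_t * math.tanh(c_t)                                        # unit output
    return h_t, c_t

W = {g: (0.5, 0.5) for g in 'fico'}  # arbitrary illustrative weights
b = {g: 0.0 for g in 'fico'}
h, c = 0.0, 0.0
for x in [1.0, -1.0, 0.5]:  # run a short input sequence through the cell
    h, c = lstm_step(x, h, c, W, b)
```

Because $h_t = o_t \cdot \tanh(C_t)$, the unit output always stays in the open interval $(-1, 1)$.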
The feature set that has passed through the Dropout layer is used as the input of the LSTM unit. The parameter t selected in this embodiment is 20, i.e. one group of data is input into the network at 20 time points; the number of hidden-layer nodes used to store and memorize past states is 64, capturing the time information of the time-series data. After passing through the LSTM unit, a vector sequence is obtained. These vectors are further processed; in this embodiment they serve as inputs to the fully connected layer, which maps them to the distribution features of the sample label space, reducing the influence of feature position on classification, and the probability value of the service flow label is output through a Softmax function for classification. To prevent overfitting, a Dropout layer with a discard rate of 0.5 is added during the training phase, and an L1 regularization term is added to the loss function during LSTM unit training.
After the output of the LSTM unit is obtained, the output vector sequence is used as the input of the fully connected layer, converted into a probability distribution over the service flow categories using the Softmax function, and classification is then carried out according to the probability values, yielding the final classification result.
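The Softmax classification step can be sketched as follows; the class names and logit values are hypothetical illustrations:

```python
import math

def softmax(logits):
    """Convert fully-connected-layer outputs into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

classes = ['video', 'voip', 'web', 'bulk']  # hypothetical traffic categories
probs = softmax([2.0, 1.0, 0.5, 0.1])
predicted = classes[probs.index(max(probs))]  # pick the most probable class
```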
In the implementation process, a data flow classification model is established that converts network flow data into pictures through feature extraction and feature generation, and classifies and identifies the network traffic with the CNN-LSTM model. The CNN unit performs data extraction and information mining on the input data to reduce redundant information, and extracts effective information better through dimension increase and dimension reduction. Through two combined operations of convolution (5×5 kernels), BN, and the LeakyReLU activation function, the original data is mapped to the hidden-layer feature space, richer features are extracted, and the nonlinear expressive capacity of the model is enhanced so that it better adapts to complex data distributions; the dimension reduction of the pooling layer, using the max-pooling technique, reduces the feature dimension and strengthens the anti-interference capability of the network.
Step 230: determining the processing priority of each class of service flow according to preset scheduling requirements; in this embodiment, after the data flow classification model is obtained, the priority of the service flow may be determined according to the requirement.
Step 240: setting corresponding ACL rules in the switch in turn according to the processing priority of the service flows of each category;
step 250: and carrying out QoS queue scheduling on the data flow based on ACL rules corresponding to the service flows of each category.
In this embodiment, the service flows are ordered by priority according to the requirements, and corresponding ACL rules are set in the switch in sequence, thereby implementing queue scheduling.
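A sketch of mapping class priorities to ordered ACL-style rules; the rule dictionary format and class names are hypothetical illustrations, not any specific switch's configuration syntax:

```python
def build_acl_rules(class_priority):
    """Order traffic classes by priority and emit one ACL-style rule per class.

    class_priority: dict mapping class name -> priority
    (lower number = higher priority, an assumption of this sketch).
    """
    ordered = sorted(class_priority, key=class_priority.get)
    rules = []
    for seq, cls in enumerate(ordered, start=1):
        # one rule per class, in priority order, bound to a hypothetical queue
        rules.append({'seq': seq, 'match_class': cls,
                      'queue': class_priority[cls]})
    return rules

rules = build_acl_rules({'voip': 0, 'video': 1, 'web': 2, 'bulk': 3})
```

Emitting the rules in priority order mirrors the step of setting ACL rules in the switch "in turn" so high-priority flows are matched first.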
In the implementation process, the data flow is acquired, the plurality of service flows are classified with the preset data flow classification model, and the processing priority of each category of service flow is determined according to the preset scheduling requirement; corresponding ACL rules are set in the switch in turn according to the processing priority of each category of service flow; and QoS queue scheduling is carried out on the data flow based on the ACL rules corresponding to each category of service flow. Because the constructed ACL rules are based on the classification of the data flow classification model, with the service flows ordered by priority according to the scheduling requirements and the corresponding ACL rules set in the switch in sequence, the network processes the service flows at different priorities: the service flows are classified and split, and high-priority service flows are processed first, better satisfying the requirements of various service flows. Through such scheduling, the switch can provide different service quality for different types of application programs and realize scheduling for different types of service flows, improving the user experience. The method captures service flows in the switch, manually marks their types, and inputs the data into the CNN-LSTM model for training, obtaining a classification model suitable for the network for classifying service flows. The data flow classification model combines the advantages of a convolutional neural network and a long short-term memory network, can capture the spatial characteristics and the temporal characteristics of the data at the same time, is beneficial to better classification, and improves classification accuracy.
Fig. 1 is a flow diagram of a QoS queue scheduling implementation method in one embodiment. It should be understood that, although the steps in the flowchart of fig. 1 are shown in sequence as indicated by the arrows, the steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, the execution of these steps is not strictly limited to this order, and the steps may be executed in other orders. Moreover, at least some of the steps in fig. 1 may include multiple sub-steps or stages that are not necessarily performed at the same moment, but may be performed at different moments; the order in which these sub-steps or stages are performed is also not necessarily sequential, as they may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, fig. 6 schematically shows a block diagram of a QoS queue scheduling implementation apparatus according to an embodiment of the present application. There is provided a QoS queue scheduling implementing apparatus, including an acquisition module 410, a classification module 420, a determination module 430, a setting module 440, and a scheduling module 450, wherein:
An acquisition module 410, configured to acquire a data stream, where the data stream includes a plurality of traffic streams;
the classification module 420 is configured to classify the plurality of service flows by using a preset data flow classification model, so as to obtain a plurality of service flows in a plurality of categories; the preset data flow classification model is constructed based on CNN and LSTM;
a determining module 430, configured to determine a processing priority of each class of traffic flow according to a preset scheduling requirement;
a setting module 440, configured to set corresponding ACL rules in the switch in sequence according to the processing priorities of the service flows of each class;
and the scheduling module 450 is configured to perform QoS queue scheduling on the data flow based on ACL rules corresponding to the service flows of each class.
The QoS queue scheduling implementation device includes a processor and a memory, where the acquiring module 410, the classifying module 420, the determining module 430, the setting module 440, the scheduling module 450, and the like are stored as program units, and the processor executes the program units stored in the memory to implement corresponding functions.
The processor includes a kernel, and the kernel fetches the corresponding program unit from the memory. One or more kernels may be set, and different types of service flows are scheduled by adjusting kernel parameters.
The memory may include volatile memory in computer readable media, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
Embodiments of the present invention provide a machine-readable storage medium having stored thereon a program which, when executed by a processor, implements the QoS queue scheduling implementation method.
The embodiment of the invention provides a processor which is used for running a program, wherein the QoS queue scheduling implementation method is executed when the program runs.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure of which may be as shown in fig. 7. The computer apparatus includes a processor a01, a network interface a02, a display screen a04, an input device a05, and a memory (not shown in the figure) which are connected through a system bus. Wherein the processor a01 of the computer device is adapted to provide computing and control capabilities. The memory of the computer device includes an internal memory a03 and a nonvolatile storage medium a06. The nonvolatile storage medium a06 stores an operating system B01 and a computer program B02. The internal memory a03 provides an environment for the operation of the operating system B01 and the computer program B02 in the nonvolatile storage medium a06. The network interface a02 of the computer device is used for communication with an external terminal through a network connection. The computer program, when executed by the processor a01, implements a QoS queue scheduling implementation method. The display screen a04 of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device a05 of the computer device may be a touch layer covered on the display screen, or may be a key, a track ball or a touch pad arranged on a casing of the computer device, or may be an external keyboard, a touch pad or a mouse.
It will be appreciated by those skilled in the art that the structure shown in FIG. 7 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, the QoS queue scheduling implementation apparatus provided by the present application may be implemented in the form of a computer program, which may be executed on a computer device as shown in fig. 7. The memory of the computer device may store various program modules constituting the QoS queue schedule implementing apparatus, such as the acquisition module 410, the classification module 420, the determination module 430, the setting module 440, and the schedule module 450 shown in fig. 6. The computer program constituted by the respective program modules causes the processor to execute the steps in the QoS queue scheduling implementing method of the respective embodiments of the present application described in the present specification.
The computer apparatus shown in fig. 7 may perform step 210 through the acquisition module 410 in the QoS queue scheduling implementation apparatus as shown in fig. 6. The computer device may perform step 220 via the classification module 420, step 230 via the determination module 430, step 240 via the setting module 440, and step 250 via the scheduling module 450.
The embodiment of the application provides equipment, which comprises a processor, a memory and a program stored in the memory and capable of running on the processor, wherein the processor realizes the following steps when executing the program:
acquiring a data stream;
classifying the plurality of service flows by adopting a preset data flow classification model to obtain a plurality of classes of service flows;
determining the processing priority of each class of service flow according to preset scheduling requirements;
setting corresponding ACL rules in the switch in turn according to the processing priority of the service flows of each category;
based on ACL rules corresponding to the service flows of each category, QoS queue scheduling is carried out on the data flows;
the preset data flow classification model is constructed based on CNN and LSTM.
In one embodiment, the process of constructing the preset data stream classification model includes:
acquiring a service flow sample;
dividing the service flow sample into a training sample set and a verification sample set;
training by adopting a CNN-LSTM model based on the training sample set to obtain an initial model;
and testing the initial model by adopting the verification sample set to obtain a data stream classification model.
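The sample-splitting step above can be sketched as follows; the 80/20 ratio and fixed seed are assumptions, since the embodiment does not specify the split proportion:

```python
import random

def split_samples(samples, train_ratio=0.8, seed=42):
    """Shuffle labeled flow samples and split into training / validation sets."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# hypothetical (flow identifier, class label) pairs
samples = [(f'flow_{i}', i % 4) for i in range(100)]
train_set, val_set = split_samples(samples)
```

The training set feeds the CNN-LSTM model and the held-out validation set tests the resulting initial model.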
In one embodiment, the service flow sample includes a plurality of historical service flows and classification labels corresponding to the historical service flows;
training is carried out by adopting a CNN-LSTM model based on the training sample set to obtain an initial model, and the method comprises the following steps:
data cleaning is carried out on the training sample set, and a cleaned data stream set is obtained;
respectively carrying out data preprocessing on each historical service flow in the cleaned data flow set to obtain a data flow characteristic diagram corresponding to each historical service flow;
respectively inputting the data flow characteristic diagrams into a CNN-LSTM model to obtain prediction classification corresponding to each historical service flow;
and adjusting parameters of the CNN-LSTM model according to the prediction classification and classification labels corresponding to each historical service flow to obtain an initial model.
In one embodiment, the performing data cleaning on the training sample set includes:
and cleaning the data of the training sample set based on the self-similarity characteristic of the network traffic.
In one embodiment, the data preprocessing is performed on the historical service flows in the cleaned data flow set to obtain a data flow characteristic diagram corresponding to the historical service flows, including:
performing feature selection on the historical service flows in the cleaned data flow set to obtain a plurality of original features;
Combining the plurality of original features, and combining new features obtained by combining the features with the plurality of original features to obtain a new feature set;
and generating a data flow characteristic diagram corresponding to the historical service flow based on the new characteristic set.
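The feature-combination step can be sketched as follows; using pairwise products as the combination rule, and the feature names themselves, are assumptions for illustration:

```python
from itertools import combinations

def expand_features(features):
    """Augment original features with pairwise combinations (here: products)."""
    expanded = dict(features)  # keep all original features
    for (na, va), (nb, vb) in combinations(features.items(), 2):
        expanded[f'{na}*{nb}'] = va * vb  # combined (new) feature
    return expanded

orig = {'pkt_len': 3.0, 'iat': 2.0, 'port': 5.0}  # hypothetical flow features
new_set = expand_features(orig)
```

The new feature set merges the original features with every combined feature, enlarging the set that is later rendered as image pixels.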
In one embodiment, the generating, based on the new feature set, a data flow feature map corresponding to a historical service flow includes:
and respectively converting each feature in the new feature set into a pixel point in the image to generate a data flow feature map corresponding to the historical service flow.
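The feature-to-pixel conversion can be sketched as follows; min-max scaling to 0-255 grayscale and zero-padding to a square image are assumptions, since the text only states that each feature becomes one pixel point:

```python
import math

def features_to_image(feature_values, side=None):
    """Map each feature value to one grayscale pixel (0-255) in a square image."""
    lo, hi = min(feature_values), max(feature_values)
    span = (hi - lo) or 1.0            # avoid division by zero for flat features
    pixels = [int(255 * (v - lo) / span) for v in feature_values]
    side = side or math.ceil(math.sqrt(len(pixels)))
    pixels += [0] * (side * side - len(pixels))  # zero-pad to a full square
    return [pixels[i * side:(i + 1) * side] for i in range(side)]

img = features_to_image([0.0, 0.5, 1.0, 0.25, 0.75])  # hypothetical feature set
```

The resulting square grid is the data flow characteristic diagram fed to the CNN unit.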
In one embodiment, the architecture of the CNN-LSTM model comprises a CNN unit, a Dropout layer, an LSTM unit and a full connection layer; the output end of the CNN unit is connected with the input end of the Dropout layer, the output end of the Dropout layer is connected with the input end of the LSTM unit, and the output end of the LSTM unit is connected with the full connection layer; wherein the LSTM unit adopts a model decision architecture based on repeated LSTM.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, etc., such as Read Only Memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises an element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (8)

1. A QoS queue scheduling implementation method, comprising:
acquiring a data stream, wherein the data stream comprises a plurality of service streams;
classifying the plurality of service flows by adopting a preset data flow classification model to obtain a plurality of classes of service flows;
Determining the processing priority of each class of service flow according to preset scheduling requirements;
setting corresponding ACL rules in the switch in turn according to the processing priority of the service flows of each category;
based on ACL rules corresponding to the service flows of each category, QoS queue scheduling is carried out on the data flows;
the preset data flow classification model is constructed based on CNN and LSTM;
the construction process of the preset data flow classification model comprises the following steps:
acquiring a service flow sample;
dividing the service flow sample into a training sample set and a verification sample set;
training by adopting a CNN-LSTM model based on the training sample set to obtain an initial model;
testing the initial model by adopting the verification sample set to obtain a data stream classification model;
the service flow sample comprises a plurality of historical service flows and classification labels corresponding to the historical service flows;
training is carried out by adopting a CNN-LSTM model based on the training sample set to obtain an initial model, and the method comprises the following steps:
data cleaning is carried out on the training sample set, and a cleaned data stream set is obtained;
respectively carrying out data preprocessing on each historical service flow in the cleaned data flow set to obtain a data flow characteristic diagram corresponding to each historical service flow;
Respectively inputting the data flow characteristic diagrams into a CNN-LSTM model to obtain prediction classification corresponding to each historical service flow;
and adjusting parameters of the CNN-LSTM model according to the prediction classification and classification labels corresponding to each historical service flow to obtain an initial model.
2. The QoS queue scheduling implementation method according to claim 1, wherein the performing data cleansing on the training sample set includes:
and cleaning the data of the training sample set based on the self-similarity characteristic of the network traffic.
3. The QoS queue scheduling implementation method according to claim 1, wherein performing data preprocessing on the historical traffic flow in the cleaned data flow set to obtain a data flow feature map corresponding to the historical traffic flow, includes:
performing feature selection on the historical service flows in the cleaned data flow set to obtain a plurality of original features;
combining the plurality of original features, and combining new features obtained by combining the features with the plurality of original features to obtain a new feature set;
and generating a data flow characteristic diagram corresponding to the historical service flow based on the new characteristic set.
4. The QoS queue scheduling implementation method according to claim 3, wherein the generating a data flow feature map corresponding to a historical traffic flow based on the new feature set includes:
And respectively converting each feature in the new feature set into a pixel point in the image to generate a data flow feature map corresponding to the historical service flow.
5. The QoS queue scheduling implementation method according to claim 1, wherein the framework of the CNN-LSTM model includes a CNN unit, a Dropout layer, an LSTM unit, and a full connection layer; the output end of the CNN unit is connected with the input end of the Dropout layer, the output end of the Dropout layer is connected with the input end of the LSTM unit, and the output end of the LSTM unit is connected with the full connection layer; wherein the LSTM unit adopts a model decision architecture based on repeated LSTM.
6. A QoS queue scheduling implementation apparatus, comprising:
the system comprises an acquisition module, a data processing module and a data processing module, wherein the acquisition module is used for acquiring a data stream, and the data stream comprises a plurality of service streams;
the classification module is used for classifying the plurality of service flows by adopting a preset data flow classification model to obtain a plurality of classes of service flows; the preset data flow classification model is constructed based on CNN and LSTM; the construction process of the preset data flow classification model comprises the following steps: acquiring a service flow sample; dividing the service flow sample into a training sample set and a verification sample set; training by adopting a CNN-LSTM model based on the training sample set to obtain an initial model; testing the initial model by adopting the verification sample set to obtain a data stream classification model; the service flow sample comprises a plurality of historical service flows and classification labels corresponding to the historical service flows; training is carried out by adopting a CNN-LSTM model based on the training sample set to obtain an initial model, and the method comprises the following steps: data cleaning is carried out on the training sample set, and a cleaned data stream set is obtained; respectively carrying out data preprocessing on each historical service flow in the cleaned data flow set to obtain a data flow characteristic diagram corresponding to each historical service flow; respectively inputting the data flow characteristic diagrams into a CNN-LSTM model to obtain prediction classification corresponding to each historical service flow; according to the prediction classification and classification labels corresponding to each historical service flow, parameters of the CNN-LSTM model are adjusted to obtain an initial model;
the determining module is configured to determine a processing priority for the service flows of each category according to a preset scheduling requirement;
the setting module is configured to set, in a switch, a corresponding ACL rule for the service flows of each category in order of their processing priorities;
and the scheduling module is configured to perform QoS queue scheduling on the data stream based on the ACL rule corresponding to the service flows of each category.
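The data cleaning and feature-map preprocessing steps in the classification module's training pipeline above can be sketched as follows. The claim does not specify the concrete form of the data flow feature map, so the packet-byte matrix below, and the helper names `clean_flows` and `flow_to_feature_map`, are illustrative assumptions rather than the patented implementation.

```python
import numpy as np

def clean_flows(flows):
    """Data cleaning step: drop empty flows and flows containing
    zero-length packet records."""
    return [f for f in flows if f and all(len(pkt) > 0 for pkt in f)]

def flow_to_feature_map(flow, n_packets=8, n_bytes=16):
    """Preprocessing step: truncate/zero-pad the first n_packets packets
    to n_bytes each, yielding an (n_packets, n_bytes) matrix of normalized
    byte values as the flow's feature map."""
    fmap = np.zeros((n_packets, n_bytes), dtype=np.float32)
    for i, pkt in enumerate(flow[:n_packets]):
        payload = np.frombuffer(bytes(pkt[:n_bytes]), dtype=np.uint8)
        fmap[i, : len(payload)] = payload / 255.0  # normalize bytes to [0, 1]
    return fmap
```

Each resulting matrix would then be fed to the CNN-LSTM model: the CNN layers extract spatial byte patterns from the feature map, and the LSTM layers capture the packet-to-packet sequence.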
7. A processor configured to perform the QoS queue scheduling implementation method according to any one of claims 1 to 5.
8. A machine-readable storage medium having instructions stored thereon which, when executed by a processor, cause the processor to perform the QoS queue scheduling implementation method of any one of claims 1 to 5.
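As a rough illustration of how the determining and setting modules in the device claim could cooperate, the sketch below orders traffic classes by a preset scheduling requirement and emits one ACL-style rule per class. The class names, the `PRIORITY_ORDER` list, and the rule fields are hypothetical; the patent does not disclose a concrete ACL syntax.

```python
# Hypothetical preset scheduling requirement: class names, highest priority first.
PRIORITY_ORDER = ["voice", "video", "web", "bulk"]

def build_acl_rules(classes):
    """Assign descending queue priorities following the preset order and
    produce one ACL-style rule per traffic class (queue 0 = highest)."""
    ordered = sorted(classes, key=PRIORITY_ORDER.index)
    return [{"match_class": cls, "queue": prio} for prio, cls in enumerate(ordered)]
```

A switch configured with these rules would place each matched flow into the named queue, which is the QoS queue scheduling step the scheduling module performs.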
CN202311298478.2A 2023-10-09 2023-10-09 QoS queue scheduling realization method and device, storage medium and processor Pending CN117041168A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311298478.2A CN117041168A (en) 2023-10-09 2023-10-09 QoS queue scheduling realization method and device, storage medium and processor


Publications (1)

Publication Number Publication Date
CN117041168A true CN117041168A (en) 2023-11-10

Family

ID=88630430


Country Status (1)

Country Link
CN (1) CN117041168A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105577473A (en) * 2015-12-21 2016-05-11 重庆大学 Multi-business flow generation system based on network flow model
CN109101507A (en) * 2017-06-20 2018-12-28 腾讯科技(深圳)有限公司 Data processing method, device, computer equipment and storage medium
CN109615064A (en) * 2018-12-07 2019-04-12 电子科技大学 A kind of end-to-end decision-making technique of intelligent vehicle based on space-time characteristic fusion recurrent neural network
US20190295000A1 (en) * 2018-03-26 2019-09-26 H2O.Ai Inc. Evolved machine learning models
CN110519178A (en) * 2019-08-07 2019-11-29 京信通信***(中国)有限公司 Low time delay data processing method, apparatus and system
CN110543903A (en) * 2019-08-23 2019-12-06 国网江苏省电力有限公司电力科学研究院 Data cleaning method and system for GIS partial discharge big data system
CN110932995A (en) * 2019-11-07 2020-03-27 西安邮电大学 QoS queue scheduling implementation method
CN111178902A (en) * 2019-12-12 2020-05-19 同济大学 Network payment fraud detection method based on automatic characteristic engineering
CN111652259A (en) * 2019-04-16 2020-09-11 上海铼锶信息技术有限公司 Method and system for cleaning data
CN112804253A (en) * 2021-02-04 2021-05-14 湖南大学 Network flow classification detection method, system and storage medium
US20210204152A1 (en) * 2019-12-31 2021-07-01 Hughes Network Systems, Llc Traffic flow classification using machine learning
CN115375471A (en) * 2022-06-17 2022-11-22 同济大学 Stock market quantification method based on adaptive feature engineering
CN116383644A (en) * 2023-03-17 2023-07-04 重庆长安汽车股份有限公司 Data enhancement method, device, equipment and storage medium
CN116821646A (en) * 2023-07-14 2023-09-29 四川启睿克科技有限公司 Data processing chain construction method, data reduction method, device, equipment and medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHANG_XI_XI_94: "ValueError: Input 0 is incompatible with layer lstm_1: expected ndim=3, found ndim=2", pages 1, Retrieved from the Internet <URL:https://blog.csdn.net/Zhang_xi_xi_94/article/details/89433988> *
Zhang Jianchun; Li Bo; Dong Rong: "Test sample data preprocessing based on multi-level classification with attribute weights", Video Engineering, no. 03, pages 218 - 225 *
Luo Fuhua; Zhang Aixin: "Research on botnet detection technology based on deep learning", Communications Technology, no. 01 *

Similar Documents

Publication Publication Date Title
CN110070067B (en) Video classification method, training method and device of video classification method model and electronic equipment
Wu et al. Max-pooling dropout for regularization of convolutional neural networks
WO2018121690A1 (en) Object attribute detection method and device, neural network training method and device, and regional detection method and device
EP3767536A1 (en) Latent code for unsupervised domain adaptation
Liu et al. Generalized zero-shot learning for action recognition with web-scale video data
Sundsøy et al. Deep learning applied to mobile phone data for individual income classification
CN111368636B (en) Object classification method, device, computer equipment and storage medium
Li et al. Two-class 3D-CNN classifiers combination for video copy detection
CN115496955B (en) Image classification model training method, image classification method, device and medium
US10162879B2 (en) Label filters for large scale multi-label classification
CN110929099B (en) Short video frame semantic extraction method and system based on multi-task learning
Joseph et al. Novel class discovery without forgetting
CN111160959B (en) User click conversion prediction method and device
CN110929785A (en) Data classification method and device, terminal equipment and readable storage medium
WO2021179631A1 (en) Convolutional neural network model compression method, apparatus and device, and storage medium
EP4343616A1 (en) Image classification method, model training method, device, storage medium, and computer program
CN116229530A (en) Image processing method, device, storage medium and electronic equipment
CN114708539A (en) Image type identification method and device, equipment, medium and product thereof
Ko et al. Network prediction with traffic gradient classification using convolutional neural networks
Sultana et al. Unsupervised adversarial learning for dynamic background modeling
JP2012048624A (en) Learning device, method and program
US20230147573A1 (en) Sensor device-based determination of geographic zone dispositions
CN117041168A (en) QoS queue scheduling realization method and device, storage medium and processor
CN114745335B (en) Network traffic classification device, storage medium and electronic equipment
CN113051911B (en) Method, apparatus, device, medium and program product for extracting sensitive words

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination