CN114510872A - Cloud server aging prediction method based on self-attention mechanism DLSTM - Google Patents
- Publication number
- CN114510872A CN114510872A CN202210021584.5A CN202210021584A CN114510872A CN 114510872 A CN114510872 A CN 114510872A CN 202210021584 A CN202210021584 A CN 202210021584A CN 114510872 A CN114510872 A CN 114510872A
- Authority
- CN
- China
- Prior art keywords
- data
- dlstm
- cloud server
- layer
- attention mechanism
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2119/00—Details relating to the type or aim of the analysis or the optimisation
- G06F2119/04—Ageing analysis or optimisation against ageing
Abstract
The invention discloses a cloud server aging prediction method based on a self-attention mechanism DLSTM, which comprises the following steps: step 1, collecting data indexes of the aging condition of a cloud server, and acquiring time series data of cloud server resources and performance parameters; step 2, preprocessing the sequence data to obtain a preprocessed data set; step 3, dividing the cloud server aging data preprocessed in step 2 into a training set and a testing set; step 4, constructing a DLSTM prediction model of the cloud server aging data time sequence based on the attention mechanism; step 5, training the DLSTM prediction model by using the training set data; and step 6, predicting test set data by using the trained DLSTM prediction model, and evaluating the performance of the DLSTM prediction model. The method solves the problem that traditional prediction methods cannot accurately predict the aging condition of a cloud server that runs for a long time and produces a large data volume.
Description
Technical Field
The invention belongs to the technical field of time series prediction, and relates to a cloud server aging prediction method based on a self-attention mechanism DLSTM (Deep Long Short-Term Memory network).
Background
Cloud computing aggregates a variety of computing resources to provide secure, rapidly available computing and data storage services. The cloud server is one of the important supporting technologies of cloud computing; its high scalability, flexibility and cost-effectiveness let users obtain services on demand, saving expense and improving resource utilization. However, software aging occurs during the continuous operation of the server. The software aging phenomenon is caused by the accumulation of error conditions such as resource leaks, unreleased file locks and unterminated processes, which degrade system performance, can crash the system, and ultimately cause system overhead to rise sharply. To further optimize cloud server applications, it is therefore important to explore how the aging condition of a cloud server affects its performance.
With the continuous development of cloud computing technology and cloud servers, more and more experts begin to research the aging rule of the cloud server and explore how to enable the cloud server to provide services for cloud users in a better state.
Existing aging prediction and analysis methods fall into two categories: state-model-based and data-metric-based. State-model-based methods mainly use Petri net and Markov modeling to build a system state model, but for real tasks building an accurate model is very difficult. Data-metric-based methods mainly comprise time series methods and machine learning methods. A long short-term memory (LSTM) deep learning network can retain data information over long periods to support prediction, but a single LSTM's time-series prediction performance is limited. In the traditional BP neural network method, parameter selection is hard, making it difficult to choose the most suitable parameters for the best prediction effect.
Disclosure of Invention
The invention aims to provide a cloud server aging prediction method based on a self-attention mechanism DLSTM, solving the problem that traditional prediction methods cannot accurately predict the aging condition of a cloud server that runs for a long time and produces a large data volume.
The technical scheme adopted by the invention is as follows:
the cloud server aging prediction method based on the self-attention mechanism DLSTM comprises the following steps:
step 1, collecting data indexes of the aging condition of the cloud server, and acquiring time series data of cloud server resources and performance parameters;
step 2, preprocessing the sequence data to obtain a preprocessed data set;
step 3, dividing the cloud server aging data preprocessed in step 2 into a training set and a testing set;
step 4, constructing a DLSTM prediction model of the cloud server aging data time sequence based on the attention mechanism;
step 5, training the DLSTM prediction model by using the training set data;
and step 6, predicting test set data by using the trained DLSTM prediction model, and evaluating the performance of the DLSTM prediction model.
The invention is also characterized in that:
the step 2 specifically comprises the following steps:
step 2.1, carrying out first-order difference on the sequence data to obtain a difference sequence;
step 2.2, converting the first-order difference data sequence into a time step matrix, wherein each unit in the matrix comprises a data segment with the length of a time step;
and step 2.3, normalizing the time step matrix to the range of [-1, 1] to obtain a preprocessed data set.
The specific process of the step 2.2 is as follows:
converting the original sequence into an n × 1 matrix P1; inserting a 0 before the original sequence and converting it into an n × 1 matrix P2; merging the matrices P1 and P2 column-wise into an n × 2 matrix P', where the matrix P' is the time step matrix.
In the step 4, the DLSTM prediction model is formed by stacking 50 LSTMs, each LSTM comprises a forgetting gate, an input gate and an output gate, the activation function of the input gate is a tanh function, and the activation functions used by the forgetting gate and the output gate are Sigmoid functions; the DLSTM neural network comprises an input layer, a hidden layer, a connection activation layer and an output layer which are sequentially connected, and a dropout layer is arranged; an attention mechanism is encapsulated in the input layer.
The encapsulation process of the attention mechanism is specifically as follows: first, a permute layer converts the input data into the desired format; then a dense layer with a Softmax activation function computes the feature weights, where tf.keras.backend.mean() is applied through a Lambda layer to compute the mean of the tensors; next, a permute layer converts the output of the dense layer into the format required by the multiply layer; finally, the input reaches the multiply layer and is multiplied by the weights, completing the encapsulation of the attention mechanism.
The invention has the beneficial effects that:
the invention can better utilize important characteristics of the time sequence and solve the problem of low accuracy of the traditional prediction method on data with large fluctuation. It is proposed to add a self-attention mechanism to an input layer, to assign different weights to input features in a time series, and to improve prediction accuracy by using a plurality of features with higher weights as input for primary feature prediction in prediction. The attention mechanism can adaptively select a data sequence related to a corresponding time point according to attention weight calculation and weight distribution of a dense layer to an input time sequence, so that a model can automatically select more important input features and obtain long-time sequence characteristics of the time sequence; compared with the traditional LSTM method, the DLSTM method has the advantages that each layer of LSTM in the DLSTM runs on different time scales, and the result is transmitted to the next layer of LSTM, so that the DLSTM can effectively utilize the characteristics of each layer of LSTM, extract information on a time sequence from different scales, and learn more complex time sequence data. The prediction accuracy of DLSTM is therefore higher when a large amount of data is predicted.
Drawings
FIG. 1 is a general framework diagram of a cloud server aging prediction method based on a self-attention mechanism DLSTM according to the present invention;
FIG. 2 is a DLSTM structure diagram of the cloud server aging prediction method based on the self-attention mechanism DLSTM of the present invention;
FIG. 3 is an attention mechanism encapsulation flow chart of the cloud server aging prediction method based on the self-attention mechanism DLSTM;
FIG. 4 is a diagram of a raw data time series according to an embodiment of the present invention;
FIG. 5 is a diagram of predicted results according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1 to fig. 3, the cloud server aging prediction method based on the self-attention mechanism DLSTM of the present invention includes the following steps:
step 1, collecting data indexes of the aging condition of the cloud server, and acquiring time series data of cloud server resources and performance parameters;
step 2, preprocessing the sequence data to obtain a preprocessed data set;
step 3, dividing the cloud server aging data preprocessed in step 2 into a training set and a testing set;
step 4, constructing a DLSTM prediction model of the cloud server aging data time sequence based on the attention mechanism;
step 5, training the DLSTM prediction model by using the training set data;
and step 6, predicting test set data by using the trained DLSTM prediction model, and evaluating the performance of the DLSTM prediction model.
Wherein the time series data of the cloud server performance in step 1 is the free memory;
wherein the pretreatment process in the step 2 specifically comprises the following steps:
step 2.1, carrying out first-order difference on the sequence data to obtain a difference sequence;
firstly, carrying out first-order difference on the original data; the original cloud server performance parameter time series is recorded as X = (x_1, x_2, …, x_n), where n is the length of the entire time series, and the differenced data series as Y = (y_1, y_2, …, y_{n-1}); each later value in the sequence minus the preceding value, i.e.:
y_i = x_{i+1} − x_i   (1)
the first-order difference data sequence Y obtained with formula (1) eliminates the time dependence of the time series.
Step 2.2, converting the first-order difference data sequence into a time step matrix, wherein each unit in the matrix comprises a data segment with the length of a time step;
the time step matrix is used for prediction; the time step used in this scheme is 2, and the construction process is as follows: the original sequence is converted into an n × 1 matrix P1; a 0 is inserted before the original sequence, which is converted into an n × 1 matrix P2; the matrices P1 and P2 are merged column-wise into an n × 2 matrix P', the time step matrix.
step 2.3, normalizing the time step matrix to the range [-1, 1] to obtain the preprocessed data set, specifically:
x̂_i = x_i / |x|_max   (2)
where x̂_i denotes the normalized value of x_i and |x|_max is the maximum of the absolute values of the differenced data; formula (2) normalizes the data in matrix P' to the [-1, 1] interval.
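The whole of step 2 can be sketched in Python with NumPy. The function name and the column order of the merged matrix P' are illustrative assumptions, not specified by the patent:

```python
import numpy as np

def preprocess(x):
    """Sketch of step 2: first-order difference, time-step matrix
    (time step = 2), and normalization to [-1, 1].
    Assumes a non-constant series so max |y| > 0."""
    x = np.asarray(x, dtype=float)
    y = np.diff(x)                           # eq. (1): y_i = x_{i+1} - x_i
    p1 = y.reshape(-1, 1)                    # n x 1 matrix P1
    p2 = np.concatenate(([0.0], y[:-1])).reshape(-1, 1)  # 0 prepended: P2
    p = np.hstack([p2, p1])                  # n x 2 time step matrix P'
    return p / np.max(np.abs(y))             # eq. (2): scale to [-1, 1]

out = preprocess([1.0, 3.0, 2.0, 6.0])       # differences: [2, -1, 4]
```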
The construction of the DLSTM prediction model in the step 4 specifically comprises the following steps:
the DLSTM prediction model is formed by stacking 50 LSTMs, each LSTM comprises a forgetting gate, an input gate and an output gate, the activation function of the input gate is a tanh function, and the activation functions used by the forgetting gate and the output gate are Sigmoid functions; the DLSTM neural network comprises an input layer, a hidden layer, a connection activation layer and an output layer which are sequentially connected, and a dropout layer is arranged.
Wherein each LSTM keeps the conventional structure; the input of the DLSTM model input layer at time t is recorded as x_t, and the output of the output layer is h_t; to prevent overfitting of the model, a dropout layer is arranged so that each neuron's activation value stops working with probability p during forward propagation; p is set to 0.3 in this scheme;
an activation layer is connected behind the hidden layer so that the matrix operation result is nonlinear; the activation function for the forgetting gate and the output gate in the LSTM is the Sigmoid function, i.e. σ(x) = 1 / (1 + e^(−x)); it outputs a value between 0 and 1, where an output near 0 discards the current information and an output near 1 retains it;
the input gate activation function is the tanh function, i.e. tanh(x) = (e^x − e^(−x)) / (e^x + e^(−x)), used for computing the candidate value vector information;
the input of the i-th layer LSTM of the DLSTM prediction model at time t consists of x_t, the output h_{t−1} at time t−1, the module state C_{t−1} at time t−1, and the weight information W^(i−1) of the (i−1)-th layer LSTM; the output h_t and state C_t at time t are passed to time t+1; the weight information W^(i) of the i-th layer LSTM at time t is passed to the next LSTM layer for auxiliary prediction until the last LSTM layer produces an output value; the weight information W^(1) of the first layer LSTM at time t is passed as input to the second layer LSTM, and so on until the last LSTM outputs the result;
wherein the LSTM has a forgetting gate, an input gate and an output gate; the forgetting gate is calculated as:
f_t = σ(W_f^(i−1) · [h_{t−1}, x_t] + b_f)   (3)
the input gate is calculated as:
i_t = σ(W_i^(i−1) · [h_{t−1}, x_t] + b_i)   (4)
the candidate value vector is calculated as:
C̃_t = tanh(W_C^(i−1) · [h_{t−1}, x_t] + b_C)   (5)
the state output is:
C_t = f_t * C_{t−1} + i_t * C̃_t   (6)
the output gate is calculated as:
o_t = σ(W_o^(i−1) · [h_{t−1}, x_t] + b_o)   (7)
the output is:
h_t = o_t * tanh(C_t)   (8)
in expressions (3) to (8), W is the weight information, h_t the output at time t, x_t the input at time t, W^(i−1) the weight information of the (i−1)-th layer LSTM, and b the bias term.
In step 4, the attention mechanism is added to the input layer of the DLSTM prediction model of the cloud server aging data time sequence; the distribution of attention weights is completed by using a dense layer with a Softmax activation function to calculate the weighted average of the input information. The attention encapsulation combines multiple layers: a permute layer transposes the input dimensions into the desired dimensions; a Lambda layer applies a custom function to achieve the desired effect; a repeat vector layer repeats the input data n times.
The encapsulation process of the attention mechanism is specifically as follows: first, a permute layer converts the input data into the desired format; then a dense layer with a Softmax activation function computes the feature weights, where tf.keras.backend.mean() is applied through a Lambda layer to compute the mean of the tensors; next, a permute layer converts the output of the dense layer into the format required by the multiply layer; finally, the input reaches the multiply layer and is multiplied by the weights, completing the encapsulation of the attention mechanism;
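The layer stack just described (permute → dense with Softmax → Lambda mean → multiply) amounts to an input-feature weighting. The NumPy sketch below shows that idea framework-free, under simplifying assumptions (2-D input, illustrative variable names — the patent's actual graph is built from the Keras layers above):

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def input_attention(x, w, b):
    """Self-attention over input features: a dense layer with Softmax
    yields per-feature weights, averaged over time steps (the Lambda
    layer's mean), then multiplied element-wise with the input."""
    scores = softmax(x @ w + b, axis=-1)   # dense layer + Softmax
    weights = scores.mean(axis=0)          # Lambda layer: mean over time
    return x * weights                     # multiply layer: weighted input

rng = np.random.default_rng(1)
x = rng.standard_normal((5, 2))            # 5 time steps, time step width 2
w = rng.standard_normal((2, 2))
out = input_attention(x, w, np.zeros(2))
```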
In step 6, the root mean square error RMSE, the mean absolute percentage error MAPE and the mean absolute error MAE are used as evaluation indexes; the formulas are shown in (9) to (11):
RMSE = sqrt( (1/N) Σ_{i=1}^{N} (y_i − x_i)^2 )   (9)
MAPE = (1/N) Σ_{i=1}^{N} |(y_i − x_i) / x_i|   (10)
MAE = (1/N) Σ_{i=1}^{N} |y_i − x_i|   (11)
where N is the length of the data, y_i is the predicted value, and x_i is the raw cloud server aging data.
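Indexes (9) to (11) translate directly to NumPy. The helper names are illustrative, and MAPE is given as a fraction (consistent with the 0.08 value reported in Table 1):

```python
import numpy as np

def rmse(y, x):   # eq. (9): root mean square error
    y, x = np.asarray(y, float), np.asarray(x, float)
    return np.sqrt(np.mean((y - x) ** 2))

def mape(y, x):   # eq. (10): mean absolute percentage error, as a fraction
    y, x = np.asarray(y, float), np.asarray(x, float)
    return np.mean(np.abs((y - x) / x))

def mae(y, x):    # eq. (11): mean absolute error
    y, x = np.asarray(y, float), np.asarray(x, float)
    return np.mean(np.abs(y - x))
```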
Examples
In the embodiment, the free memory is used as the aging index, and the free memory time series data of an actually running cloud server is collected (step 1). The data is plotted at intervals of 20 points, as shown in FIG. 4. The comparison between the prediction result of the cloud server aging prediction method based on the self-attention mechanism DLSTM and the raw cloud server data is shown in FIG. 5. The specific steps are as follows:
step 2, preprocessing the sequence data.
Step 2.1, carrying out first-order difference on the sequence data to obtain a difference sequence;
step 2.2, converting the first-order difference data sequence into a time step matrix, wherein each unit in the matrix comprises a data segment with the length of a time step; the time step used in this scheme is 2;
and step 2.3, normalizing the time step matrix to [-1, 1], finishing the preprocessing.
Step 3, dividing the cloud server aging data preprocessed in the step 2 into a training set and a test set;
and 4, constructing a DLSTM prediction model of the cloud server aging data time sequence based on the attention mechanism.
Step 4.1, constructing a DLSTM prediction model;
step 4.2, converting the format of the input data into the desired format with a permute layer;
step 4.3, computing the feature weights with a dense layer using a Softmax activation function, where tf.keras.backend.mean() is applied through a Lambda layer to compute the mean of the tensor;
step 4.4, converting the output of the dense layer into the format required by the multiply layer with a permute layer;
step 4.5, multiplying the weights with the input in the multiply layer, completing the encapsulation of the attention mechanism;
step 5, training the DLSTM prediction model by using the training set data;
and step 6, predicting test set data by using the trained DLSTM prediction model, and evaluating the performance of the DLSTM prediction model.
The root mean square error RMSE, the mean absolute percentage error MAPE and the mean absolute error MAE are used as evaluation indexes, with formulas as shown in (9) to (11), where N is the length of the data, y_i is the predicted value, and x_i is the raw cloud server aging data. The error results are shown in Table 1.
TABLE 1 model prediction error values
Prediction model | RMSE | MAE | MAPE
---|---|---|---
Attention-based DLSTM model | 4770.41 | 621.05 | 0.08
Claims (5)
1. The cloud server aging prediction method based on the self-attention mechanism DLSTM is characterized by comprising the following steps:
step 1, collecting data indexes of an aging condition of a cloud server, and acquiring time sequence data of cloud server resources and performance parameters;
step 2, preprocessing the sequence data to obtain a preprocessed data set;
step 3, dividing the cloud server aging data preprocessed in the step 2 into a training set and a test set;
step 4, constructing a DLSTM prediction model of the cloud server aging data time sequence based on the attention mechanism;
step 5, training the DLSTM prediction model by using the training set data;
and step 6, predicting test set data by using the trained DLSTM prediction model, and evaluating the performance of the DLSTM prediction model.
2. The cloud server aging prediction method based on the self-attention mechanism DLSTM as claimed in claim 1, wherein the step 2 specifically comprises:
step 2.1, carrying out first-order difference on the sequence data to obtain a difference sequence;
step 2.2, converting the first-order difference data sequence into a time step matrix, wherein each unit in the matrix comprises a data segment with the length of a time step;
and step 2.3, normalizing the time step matrix to the range of [-1, 1] to obtain a preprocessed data set.
3. The cloud server aging prediction method based on the self-attention mechanism DLSTM as claimed in claim 2, wherein the specific process of step 2.2 is as follows:
converting the original sequence into an n × 1 matrix P1; inserting a 0 before the original sequence and converting it into an n × 1 matrix P2; merging the matrices P1 and P2 column-wise into an n × 2 matrix P', where the matrix P' is the time step matrix.
4. The cloud server aging prediction method based on the self-attention mechanism DLSTM as claimed in claim 1, wherein in step 4, the DLSTM prediction model is formed by stacking 50 LSTMs, each LSTM comprises a forgetting gate, an input gate and an output gate, the activation function of the input gate is tanh function, and the activation functions of the forgetting gate and the output gate are Sigmoid functions; the DLSTM neural network comprises an input layer, a hidden layer, a connection activation layer and an output layer which are sequentially connected, and a dropout layer is arranged; an attention mechanism is encapsulated in the input layer.
5. The cloud server aging prediction method based on the self-attention mechanism DLSTM as claimed in claim 4, wherein the encapsulation process of the attention mechanism is specifically as follows: first, a permute layer converts the input data into the desired format; then a dense layer with a Softmax activation function computes the feature weights, where tf.keras.backend.mean() is applied through a Lambda layer to compute the mean of the tensors; next, a permute layer converts the output of the dense layer into the format required by the multiply layer; finally, the input reaches the multiply layer and is multiplied by the weights, completing the encapsulation of the attention mechanism.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210021584.5A CN114510872A (en) | 2022-01-10 | 2022-01-10 | Cloud server aging prediction method based on self-attention mechanism DLSTM |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114510872A true CN114510872A (en) | 2022-05-17 |
Family
ID=81548904
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210021584.5A Pending CN114510872A (en) | 2022-01-10 | 2022-01-10 | Cloud server aging prediction method based on self-attention mechanism DLSTM |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114510872A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114064203A (en) * | 2021-10-28 | 2022-02-18 | 西安理工大学 | Cloud virtual machine load prediction method based on multi-scale analysis and deep network model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||