CN114327561A - Gray release verification method, gray release verification system, gray release verification medium, gray release verification device, and gray release verification program - Google Patents

Publication number
CN114327561A
CN114327561A
Authority
CN
China
Prior art keywords
verification
data
statement
index data
gray scale
Prior art date
Legal status
Pending
Application number
CN202111661198.4A
Other languages
Chinese (zh)
Inventor
翁振斌
梁永富
熊刚
江旻
Current Assignee
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date
Filing date
Publication date
Application filed by WeBank Co Ltd
Priority to CN202111661198.4A
Publication of CN114327561A
Status: Pending

Landscapes

  • Debugging And Monitoring (AREA)

Abstract

The application provides a gray release verification method, system, medium, device, and program. In the method, when a subsystem undergoes a version update, a first gray release step is executed and a first execution result of that step is determined. The first execution result is then verified, and only when it passes the verification processing is the next gray release step in the gray release step sequence executed. When all gray release steps in the sequence have been executed and their execution results have passed the verification processing, the gray release verification corresponding to the version update of the subsystem is determined to be successful. Therefore, during gray release of a version, automatic verification can be performed promptly after each gray release step is completed, and problems can be discovered in time.

Description

Gray release verification method, gray release verification system, gray release verification medium, gray release verification device, and gray release verification program
Technical Field
The present application relates to the field of financial technology (Fintech), and in particular, to a method, system, medium, device, and program for verifying a gray release.
Background
With the development of computer technology, more and more technologies are being applied in the financial field, and the traditional financial industry is gradually shifting toward financial technology (Fintech). Interface testing technology is no exception; however, the financial industry's requirements for security and real-time performance also impose higher demands on this technology.
At present, with the development of Internet technology, the adoption of micro-service architectures and the increasing frequency of version updates, the labor invested in version releases inside enterprises keeps growing, which poses greater challenges to enterprise operation and maintenance personnel. After releasing a version, operation and maintenance personnel mostly rely on manual verification, which is inefficient and time-consuming; moreover, verification schemes are mainly compiled by the release personnel from experience, so the schemes tend to be incomplete.
In particular, during gray release of a version, it is difficult to verify the release promptly after each gray release step is completed, and if the version has a problem, it is difficult to discover it in time.
Disclosure of Invention
Embodiments of the present application provide a gray release verification method, system, medium, device, and program, to solve the technical problems in the prior art that a version gray release is difficult to verify promptly after each gray release step is completed, and that problems in the version are difficult to discover in time.
In a first aspect, an embodiment of the present application provides a gray release verification method applied to a gray release verification system, where the gray release verification system includes multiple subsystems, the version update of each subsystem includes a gray release step sequence, and the gray release step sequence includes multiple gray release steps arranged in a preset order. The method includes the following steps (a minimal sketch of the gated loop follows the list):
when the version of the subsystem is updated, executing a first gray release step, and determining a first execution result after the first gray release step is executed;
performing verification processing on the first execution result, where the verification processing includes at least one of application verification, database verification, business function verification and monitoring performance index check; the application verification is used to verify an application configuration result in the first execution result, the database verification is used to verify database operation statements in the first execution result, the business function verification is used to verify Structured Query Language (SQL) statement execution results in the first execution result, and the monitoring performance index check is used to check instance-level index data in the first execution result;
if the first execution result passes the verification processing, executing a second gray release step, where the second gray release step is the gray release step following the first gray release step;
and when all gray release steps in the gray release step sequence have been executed and the execution results have passed the verification processing, determining that the gray release verification corresponding to the version update of the subsystem is successful.
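For orientation, this gated release loop can be sketched in a few lines of Python. It is only a minimal illustration under assumed interfaces: GrayReleaseStep, verify_result and the result keys are hypothetical names, not part of the claimed system.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class GrayReleaseStep:
    name: str                          # e.g. "batch 1: instances 1-2" (hypothetical)
    execute: Callable[[], Dict]        # runs the release step, returns an execution result

def verify_result(result: Dict) -> bool:
    """Stand-in for the four verification types named above: application,
    database, business function and monitoring performance index check."""
    return all(result.get(key, False)
               for key in ("app_ok", "db_ok", "business_ok", "metrics_ok"))

def run_gray_release(steps: List[GrayReleaseStep]) -> bool:
    for step in steps:
        result = step.execute()        # execute one gray release step
        if not verify_result(result):  # verify before moving on
            return False               # verification failed: stop the remaining steps
    return True                        # all steps executed and all results verified
```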
In one possible design, when the verification processing includes the database verification, the performing verification processing on the first execution result includes:
reading an SQL statement set to be verified in a database DB material package, and analyzing each SQL statement in the SQL statement set to be verified by using a preset lexical analyzer to generate a corresponding syntax tree;
determining a statement type corresponding to each SQL statement according to a syntax tree corresponding to each SQL statement and a preset object classification condition;
acquiring a first table name according to a statement type corresponding to a first SQL statement, and connecting a corresponding first database according to the first table name, wherein the first SQL statement is any statement in the SQL statement set to be verified;
acquiring a first operation statement corresponding to the first database, and analyzing the first operation statement according to the preset lexical analyzer to generate a first syntax tree;
and checking the syntax tree to be verified against the first syntax tree according to a preset syntax-tree statement type matching rule, so as to determine whether the table and field attributes in the corresponding database have taken effect, where the syntax tree to be verified is the syntax tree generated by parsing the first SQL statement with the preset lexical analyzer.
In a possible design, after determining the statement type corresponding to each SQL statement according to the syntax tree corresponding to each SQL statement and a preset object classification condition, the method further includes:
determining the statement with the statement type of Data Definition Language (DDL) as the first SQL statement;
and determining a first operation type corresponding to the first SQL statement according to a preset classification rule, wherein the first operation type is used for determining the first operation statement corresponding to the first SQL statement in the first database.
In a possible design, before the obtaining the first table name according to the statement type corresponding to the first SQL statement, the method further includes:
classifying and sorting the SQL statements in the SQL statement set to be verified according to the reading order of each SQL statement and the corresponding table name, to generate a first table structure object;
starting from the last SQL statement of the first table structure object, performing the database verification in reverse order; if the verification result is success, deleting the currently verified SQL statement from the first table structure object, rolling back to the previous SQL statement to continue the database verification, and outputting an inspection report after the first table structure object completes the database verification; and if the verification result is failure, outputting an inspection report.
In one possible design, when the verification process includes the monitoring performance indicator check, the performing the verification process on the first execution result includes:
acquiring instance-level multi-operation-dimension index data in a first execution result, and determining an index data type of the instance-level multi-operation-dimension index data according to data characteristics;
and determining a corresponding anomaly detection algorithm according to the index data type, and verifying the instance-level multi-operation-dimension index data according to the anomaly detection algorithm.
In a possible design, if the determined index data type is stable-type data, the determining a corresponding anomaly detection algorithm according to the index data type and performing verification processing on the instance-level multi-operation-dimension index data according to the anomaly detection algorithm includes:
traversing the instance-level multi-operation-dimension index data synchronously with a plurality of sliding windows, and determining feature statistics of the plurality of sliding windows, where the feature statistics are used to represent the central-tendency feature of the instance-level multi-operation-dimension index data;
subtracting the feature statistics of the plurality of sliding windows from the instance-level multi-operation-dimension index data, respectively, to generate a feature sequence;
and checking the feature sequence with a box-plot outlier detection algorithm, and when the deviation of the instance-level multi-operation-dimension index data from the statistics of the plurality of sliding windows is greater than a preset threshold, determining that the instance-level multi-operation-dimension index data is abnormal.
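A minimal sketch of the sliding-window plus box-plot check for stable-type data is shown below. The application describes several synchronized windows (front and back); for brevity the sketch uses a single trailing window, and the window size and IQR multiplier are illustrative assumptions.

```python
import numpy as np

def boxplot_bounds(values: np.ndarray, k: float = 1.5):
    """Classic box-plot (IQR) normal range."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def detect_stable_anomalies(series: np.ndarray, window: int = 10):
    """Median of a trailing window is the central-tendency statistic; the
    deviations of the data from that statistic form the feature sequence,
    which is then screened with the box-plot rule."""
    deviations = np.array([series[i] - np.median(series[i - window:i])
                           for i in range(window, len(series))])
    lo, hi = boxplot_bounds(deviations)
    return [i + window for i, d in enumerate(deviations) if d < lo or d > hi]
```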
In one possible design, the anomalies of the instance-level multi-operation-dimension index data include: spike (glitch) anomalies, overall step-up anomalies, and overall step-down anomalies;
if the means of the window before and the window after an abnormal point both meet a preset normal condition, the difference between the two means is within a preset range, and the increase of the index data at the abnormal point is greater than a preset increase and greater than both window means, it is determined that the instance-level multi-operation-dimension index data corresponding to the abnormal point belongs to a spike anomaly, where the plurality of sliding windows include the front window and the back window;
if the mean of the front window of the abnormal point meets the preset normal condition and is smaller than the mean of the back window, and the increase of the index data at the abnormal point is greater than the preset increase and greater than the front-window mean, it is determined that the instance-level multi-operation-dimension index data corresponding to the abnormal point belongs to an overall step-up anomaly;
and if the mean of the front window of the abnormal point meets the preset normal condition and is greater than the mean of the back window, and the increase of the index data at the abnormal point is smaller than the preset increase and greater than the front-window mean, it is determined that the instance-level multi-operation-dimension index data corresponding to the abnormal point belongs to an overall step-down anomaly.
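The three anomaly shapes can be told apart roughly from the means of the windows before and after the suspect point, as in the sketch below; the window length and the thresholds standing in for the "preset increase" and "preset range" are assumptions, not values from the application.

```python
import numpy as np

def classify_anomaly(series: np.ndarray, idx: int, win: int = 10,
                     max_gap: float = 0.05, min_rise: float = 0.3) -> str:
    """Rough spike / step-up / step-down classification for the point at idx."""
    front = series[max(0, idx - win):idx]        # window before the point
    back = series[idx + 1:idx + 1 + win]         # window after the point
    if len(front) == 0 or len(back) == 0:
        return "unclassified"
    f_mean, b_mean = float(front.mean()), float(back.mean())
    scale = max(abs(f_mean), 1e-9)
    rise = (series[idx] - f_mean) / scale        # relative increase at the point

    if abs(f_mean - b_mean) <= max_gap * scale and rise > min_rise:
        return "spike"       # both window means agree, only the point jumps
    if f_mean < b_mean and rise > min_rise:
        return "step-up"     # the level shifts upward after the point
    if f_mean > b_mean and rise < min_rise:
        return "step-down"   # the level shifts downward after the point
    return "unclassified"
```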
In one possible design, the checking the feature sequence by using a box-plot outlier detection algorithm includes:
subtracting the statistics of the front window and the back window from the data at the moment being judged and reconstructing the feature sequence, so as to determine the normal range of the feature sequence; and if the difference between the window statistics and the current data is not within the normal range, determining that the feature sequence is abnormal.
In a possible design, if it is determined that the instance-level multi-operation-dimension index data corresponding to the abnormal point belongs to a spike anomaly, the length of the left window is set to be greater than the length of the right window, so as to increase the amount of data collected before the abnormal point occurs, where the plurality of sliding windows include the left window and the right window;
if the instance-level multi-operation-dimension index data corresponding to the abnormal point belongs to an overall step-up anomaly or an overall step-down anomaly, the lengths of the left window and the right window are set to meet a length requirement, so that stable data of at least a preset duration can be captured.
In a possible design, if the determined index data type is trend-type data, the determining a corresponding anomaly detection algorithm according to the index data type and performing verification processing on the instance-level multi-operation-dimension index data according to the anomaly detection algorithm includes:
selecting K initial index data as the initial cluster centers, where K is a positive integer;
assigning each item of the instance-level multi-operation-dimension index data, according to its distance to the initial cluster centers, to the class of the nearest initial cluster center, where a cumulative distance is calculated and used as the distance metric;
re-determining the cluster center of the class corresponding to each initial cluster center, to serve as the new cluster center for clustering in subsequent steps;
repeatedly assigning each item of the instance-level multi-operation-dimension index data to the class of the nearest current cluster center according to its distance and updating the cluster center of the corresponding class, until the number of iterations is exhausted or the iteration result meets a preset condition, so as to determine a cluster center sequence;
calculating the cumulative distance between each current cluster center in the cluster center sequence and the farthest data in its class, to serve as the anomaly index threshold of the instance-level multi-operation-dimension index data;
and if the distance between the instance-level multi-operation-dimension index data and its corresponding cluster center is greater than the anomaly index threshold, determining that the instance-level multi-operation-dimension index data is abnormal.
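A compact sketch of the clustering-based check for trend-type data follows. The application speaks of a cumulative distance over index sequences; the sketch simplifies this to an absolute distance between scalar samples, and K, the iteration count and the random seed are assumptions.

```python
import numpy as np

def cluster_thresholds(data: np.ndarray, k: int = 3, iters: int = 20):
    """1-D k-means-style clustering; per cluster, the distance from the centre
    to its farthest member is kept as the anomaly threshold."""
    rng = np.random.default_rng(0)
    centers = rng.choice(data, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(data[:, None] - centers[None, :]), axis=1)
        new_centers = np.array([data[labels == j].mean() if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    labels = np.argmin(np.abs(data[:, None] - centers[None, :]), axis=1)
    thresholds = np.array([np.abs(data[labels == j] - centers[j]).max()
                           if np.any(labels == j) else 0.0 for j in range(k)])
    return centers, thresholds

def is_trend_anomaly(x: float, centers: np.ndarray, thresholds: np.ndarray) -> bool:
    j = int(np.argmin(np.abs(centers - x)))        # nearest cluster centre
    return abs(x - centers[j]) > thresholds[j]     # farther than any training member
```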
In a possible design, if the determined index data type is periodic data, the determining a corresponding anomaly detection algorithm according to the index data type and performing verification processing on the instance-level multi-operation-dimension index data according to the anomaly detection algorithm includes:
generating a prediction model for the periodic data based on an LSTM algorithm and learning from historical multi-operation-dimension index data;
determining a predicted value at each moment according to the prediction model, and determining a same-period growth range according to the predicted value and the average same-period growth rate;
and if the instance-level multi-operation-dimension index data exceeds the same-period growth range, determining that the instance-level multi-operation-dimension index data is abnormal.
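A rough sketch of the LSTM-based check for periodic data (for example, interface transaction volume) is given below using the Keras API; the network size, window length, training settings and the tolerance used to widen the interval are all assumptions rather than values from the application. A released instance would then be flagged when its observed value falls outside normal_range(prediction, historical same-period growth rate).

```python
import numpy as np
import tensorflow as tf

def make_windows(series: np.ndarray, window: int = 24):
    x = np.stack([series[i:i + window] for i in range(len(series) - window)])
    return x[..., None], series[window:]            # shapes (n, window, 1) and (n,)

def fit_lstm(series: np.ndarray, window: int = 24) -> tf.keras.Model:
    x, y = make_windows(series, window)
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(window, 1)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(x, y, epochs=20, verbose=0)
    return model

def normal_range(predicted: float, avg_growth_rate: float, tol: float = 0.5):
    """Interval built from the predicted value and the average same-period
    growth rate; observations outside it are treated as abnormal."""
    delta = abs(predicted) * abs(avg_growth_rate) * (1.0 + tol)
    return predicted - delta, predicted + delta
```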
In a second aspect, an embodiment of the present application further provides a gray scale publishing verification system, including:
the release module is used for executing a first gray release step when version updating is carried out on the subsystem, and determining a first execution result after the first gray release step is executed;
the processing module is configured to perform verification processing on the first execution result, where the verification processing includes at least one of application verification, database verification, business function verification, and monitoring performance index check, where the application verification is used to verify an application configuration result in the first execution result, the database verification is used to verify a database operation statement in the first execution result, the business function verification is used to verify a Structured Query Language (SQL) statement execution result in the first execution result, and the monitoring performance index check is used to check instance level index data in the first execution result;
the processing module is further configured to execute a second gray scale issuing step when the first execution result passes the verification processing, where the second gray scale issuing step is a next gray scale issuing step of the first gray scale issuing step;
the processing module is further configured to determine that the gray scale release verification corresponding to the version update of the subsystem is successful when all the gray scale release steps in the gray scale release step sequence are completed and the execution result passes the verification processing.
In one possible design, the processing module is specifically configured to:
reading an SQL statement set to be verified in a database DB material package, and analyzing each SQL statement in the SQL statement set to be verified by using a preset lexical analyzer to generate a corresponding syntax tree;
determining a statement type corresponding to each SQL statement according to a syntax tree corresponding to each SQL statement and a preset object classification condition;
acquiring a first table name according to a statement type corresponding to a first SQL statement, and connecting a corresponding first database according to the first table name, wherein the first SQL statement is any statement in the SQL statement set to be verified;
acquiring a first operation statement corresponding to the first database, and analyzing the first operation statement according to the preset lexical analyzer to generate a first syntax tree;
and determining and checking a syntax tree to be verified and the first syntax tree according to a preset syntax tree statement type matching rule so as to determine the effective state of table and field attributes in a corresponding database, wherein the syntax tree to be verified is a syntax tree generated by analyzing the first SQL statement by using the preset lexical analyzer.
In one possible design, the processing module is specifically configured to:
determining the statement with the statement type of Data Definition Language (DDL) as the first SQL statement;
and determining a first operation type corresponding to the first SQL statement according to a preset classification rule, wherein the first operation type is used for determining the first operation statement corresponding to the first SQL statement in the first database.
In a possible design, before the obtaining the first table name according to the statement type corresponding to the first SQL statement, the method further includes:
classifying and sorting the SQL statements in the SQL statement set to be verified according to the reading order of each SQL statement and the corresponding table name, to generate a first table structure object;
starting from the last SQL statement of the first table structure object, performing the database verification in reverse order; if the verification result is success, deleting the currently verified SQL statement from the first table structure object, rolling back to the previous SQL statement to continue the database verification, and outputting an inspection report after the first table structure object completes the database verification; and if the verification result is failure, outputting an inspection report.
In one possible design, the processing module is specifically configured to:
acquiring instance-level multi-operation-dimension index data in a first execution result, and determining an index data type of the instance-level multi-operation-dimension index data according to data characteristics;
and determining a corresponding anomaly detection algorithm according to the index data type, and verifying the instance-level multi-operation-dimension index data according to the anomaly detection algorithm.
In one possible design, the processing module is specifically configured to:
traversing the instance-level multi-operation-dimension index data synchronously with a plurality of sliding windows, and determining feature statistics of the plurality of sliding windows, where the feature statistics are used to represent the central-tendency feature of the instance-level multi-operation-dimension index data;
subtracting the feature statistics of the plurality of sliding windows from the instance-level multi-operation-dimension index data, respectively, to generate a feature sequence;
and checking the feature sequence with a box-plot outlier detection algorithm, and when the deviation of the instance-level multi-operation-dimension index data from the statistics of the plurality of sliding windows is greater than a preset threshold, determining that the instance-level multi-operation-dimension index data is abnormal.
In one possible design, the anomalies of the instance-level multi-operation-dimension index data include: spike (glitch) anomalies, overall step-up anomalies, and overall step-down anomalies;
if the means of the window before and the window after an abnormal point both meet a preset normal condition, the difference between the two means is within a preset range, and the increase of the index data at the abnormal point is greater than a preset increase and greater than both window means, it is determined that the instance-level multi-operation-dimension index data corresponding to the abnormal point belongs to a spike anomaly, where the plurality of sliding windows include the front window and the back window;
if the mean of the front window of the abnormal point meets the preset normal condition and is smaller than the mean of the back window, and the increase of the index data at the abnormal point is greater than the preset increase and greater than the front-window mean, it is determined that the instance-level multi-operation-dimension index data corresponding to the abnormal point belongs to an overall step-up anomaly;
and if the mean of the front window of the abnormal point meets the preset normal condition and is greater than the mean of the back window, and the increase of the index data at the abnormal point is smaller than the preset increase and greater than the front-window mean, it is determined that the instance-level multi-operation-dimension index data corresponding to the abnormal point belongs to an overall step-down anomaly.
In one possible design, the processing module is specifically configured to:
subtracting the statistics of the front window and the back window from the data at the moment being judged and reconstructing the feature sequence, so as to determine the normal range of the feature sequence; and if the difference between the window statistics and the current data is not within the normal range, determining that the feature sequence is abnormal.
In a possible design, if it is determined that the instance-level multi-operation-dimension index data corresponding to the abnormal point belongs to a spike anomaly, the length of the left window is set to be greater than the length of the right window, so as to increase the amount of data collected before the abnormal point occurs, where the plurality of sliding windows include the left window and the right window;
if the instance-level multi-operation-dimension index data corresponding to the abnormal point belongs to an overall step-up anomaly or an overall step-down anomaly, the lengths of the left window and the right window are set to meet a length requirement, so that stable data of at least a preset duration can be captured.
In one possible design, the processing module is specifically configured to:
selecting K initial index data as the initial cluster centers, where K is a positive integer;
assigning each item of the instance-level multi-operation-dimension index data, according to its distance to the initial cluster centers, to the class of the nearest initial cluster center, where a cumulative distance is calculated and used as the distance metric;
re-determining the cluster center of the class corresponding to each initial cluster center, to serve as the new cluster center for clustering in subsequent steps;
repeatedly assigning each item of the instance-level multi-operation-dimension index data to the class of the nearest current cluster center according to its distance and updating the cluster center of the corresponding class, until the number of iterations is exhausted or the iteration result meets a preset condition, so as to determine a cluster center sequence;
calculating the cumulative distance between each current cluster center in the cluster center sequence and the farthest data in its class, to serve as the anomaly index threshold of the instance-level multi-operation-dimension index data;
and if the distance between the instance-level multi-operation-dimension index data and its corresponding cluster center is greater than the anomaly index threshold, determining that the instance-level multi-operation-dimension index data is abnormal.
In one possible design, the processing module is specifically configured to:
generating a prediction model for the periodic data based on an LSTM algorithm and learning from historical multi-operation-dimension index data;
determining a predicted value at each moment according to the prediction model, and determining a same-period growth range according to the predicted value and the average same-period growth rate;
and if the instance-level multi-operation-dimension index data exceeds the same-period growth range, determining that the instance-level multi-operation-dimension index data is abnormal.
In a third aspect, an embodiment of the present application further provides an electronic device, including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform any one of the grayscale release verification methods of the first aspect via execution of the executable instructions.
In a fourth aspect, embodiments of the present application further provide a storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement any one of the grayscale release verification methods in the first aspect.
In a fifth aspect, the present application further provides a computer program product, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the gray scale issuance verification method in any one of the first aspect.
According to the gray release verification method, system, medium, device and program provided by the application, when a subsystem undergoes a version update, a first gray release step is executed and a first execution result of that step is determined; the first execution result is then verified, and only when it passes the verification processing is the next gray release step in the gray release step sequence executed; when all gray release steps in the sequence have been executed and the execution results have passed the verification processing, it is determined that the gray release verification corresponding to the version update of the subsystem is successful. Therefore, during gray release of a version, automatic verification can be performed promptly after each gray release step is completed, and problems can be discovered in time.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a schematic flowchart of a gray scale release verification method according to an embodiment of the present application;
fig. 2 is a schematic flowchart illustrating another gray scale issue verification method according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating database verification processing steps according to the second embodiment of the present application;
FIG. 4 is a flowchart illustrating another database verification processing step according to the second embodiment of the present application;
FIG. 5 is a flowchart illustrating a monitoring performance index checking step according to a third embodiment of the present application;
FIG. 6 is a schematic flow chart illustrating another monitoring performance index checking step according to the third embodiment of the present application;
FIG. 7 is a diagram illustrating abnormal data based on a first algorithm according to a fourth embodiment of the present application;
FIG. 8 is a schematic flow chart based on a third algorithm according to the fourth embodiment of the present application;
FIG. 9 is a schematic diagram of a host index time sequence according to the fourth embodiment of the present disclosure;
FIG. 10 is a schematic flow chart based on a fourth algorithm according to the fifth embodiment of the present application;
FIG. 11 is a flow chart of a prediction algorithm based on LSTM according to a sixth embodiment of the present application;
FIG. 12 is a schematic diagram of an LSTM network structure according to a sixth embodiment of the present application;
fig. 13 is a schematic flowchart of a gray scale distribution verification system according to a seventh embodiment of the present application;
fig. 14 is a schematic structural diagram of an electronic device shown in the present application according to an example embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the above-described drawings (if any) are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
With the development of computer technology, more and more technologies are being applied in the financial field, and the traditional financial industry is gradually shifting toward financial technology (Fintech). Interface testing technology is no exception; however, the financial industry's requirements for security and real-time performance also impose higher demands on this technology. At present, with the development of Internet technology, the adoption of micro-service architectures and the increasing frequency of version updates, the labor invested in version releases inside enterprises keeps growing, which poses greater challenges to enterprise operation and maintenance personnel. After releasing a version, operation and maintenance personnel mostly rely on manual verification, which is inefficient and time-consuming; moreover, verification schemes are mainly compiled by the release personnel from experience, so the schemes tend to be incomplete. In particular, during gray release of a version, it is difficult to verify the release promptly after each gray release step is completed, and if the version has a problem, it is difficult to discover it in time.
Specifically, the operation and health state of a system are usually determined jointly by a series of indexes, and these indexes are not independent of one another. As the scale of the subsystems grows, the complexity of system calls increases; after a version release, operation and maintenance personnel need to verify the operation and maintenance indexes one by one, which is inefficient and time-consuming.
Moreover, operation and maintenance personnel configure expert rules for version verification based on long-term operation and maintenance experience, and different rules need to be constructed for different operation and maintenance indexes and business scenarios, so development and maintenance costs are high. In addition, version verification suffers from incomplete verification points and inaccurate verification methods, and no unified, complete verification scheme can be formed.
More importantly, with manual verification it is difficult to ensure that verification is performed promptly after each gray release step is completed during the release process, and version quality problems cannot be discovered in time from the massive amount of monitoring data.
In view of the above technical problems, embodiments of the present application aim to provide a gray release verification method that automates gray release verification, so as to improve its accuracy and timeliness. In addition, based on a Structured Query Language (SQL) reverse-parsing technique, Data Definition Language (DDL) verification SQL and Data Manipulation Language (DML) verification SQL are generated automatically in reverse, which reduces the labor of manually producing a verification scheme and improves the accuracy of the scheme. Furthermore, based on a multi-algorithm hybrid approach to multivariate operation and maintenance index anomaly detection, abnormal-fluctuation detection and outlier detection are performed on the key instance-level monitoring indexes after the release.
Fig. 1 is a schematic flowchart of a gray scale release verification method according to an embodiment of the present application. As shown in fig. 1-2, the gray scale issuance verification method provided in this embodiment includes:
Step 101, when the version of the subsystem is updated, executing a first gray release step, and determining a first execution result after the first gray release step is executed.
This embodiment may be applied to a gray release verification system including a plurality of subsystems, where the version update of each subsystem includes a gray release step sequence, and the gray release step sequence includes a plurality of gray release steps arranged in a preset order. Specifically, when the version of a subsystem is updated through gray release, the release of all instances of the subsystem may be divided into a plurality of gray release steps, each of which covers one or more instances. After each release step is executed, the version verification module is called automatically for verification; if the verification passes, the next gray release step continues to be executed, and if the verification fails, execution of the subsequent gray release steps is stopped.
Step 102, performing verification processing on the first execution result.
Specifically, verification processing is performed on the first execution result, where the verification processing includes at least one of application verification, database verification, business function verification and monitoring performance index check; the application verification is used to verify an application configuration result in the first execution result, the database verification is used to verify database operation statements in the first execution result, the business function verification is used to verify structured query language (SQL) statement execution results in the first execution result, and the monitoring performance index check is used to check instance-level index data in the first execution result.
With continued reference to fig. 2, for application (APP) verification, the following items may be checked (a minimal sketch of these checks follows the list):
a. checking whether the state of the APP process is the started state;
b. checking whether the process start time is later than the version update time;
c. checking whether the MD5 value of the process material package is consistent with the MD5 value of the User Acceptance Test (UAT) environment process material package;
d. checking whether the process version number equals the version number being released;
e. checking whether the last update time of the log file is later than the version update time;
f. checking whether any variables in the configuration file have not been replaced.
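Checks a–f above could be automated roughly as follows. The process lookup, file paths, placeholder syntax and the way the UAT MD5 is obtained are illustrative assumptions; the start-time and version-number checks are only indicated as comments because they depend on how the deployment records that information.

```python
import hashlib
import os
import re
import subprocess

def md5_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def check_app(process_pattern: str, pkg_path: str, uat_md5: str,
              log_path: str, conf_path: str, release_time: float) -> dict:
    result = {}
    # a. the APP process is running (b. start-time check omitted: it depends on
    #    how the process start time is recorded on the host)
    probe = subprocess.run(["pgrep", "-f", process_pattern],
                           capture_output=True, text=True)
    result["process_running"] = probe.returncode == 0
    # c. material package MD5 matches the UAT environment package
    result["md5_matches_uat"] = md5_of(pkg_path) == uat_md5
    # d. version-number check omitted: depends on where the version is recorded
    # e. the log file has been written to after the release time
    result["log_updated"] = os.path.getmtime(log_path) > release_time
    # f. no unreplaced ${...} placeholders remain in the configuration file
    with open(conf_path) as f:
        result["no_unreplaced_vars"] = re.search(r"\$\{[^}]+\}", f.read()) is None
    return result
```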
For database (DB) verification, the following items may be checked (see the sketch after this list):
a. checking whether the DDL table-change statements in the DB version have been executed;
b. checking whether the DML statements in the DB version have been executed.
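One way to check whether a DDL change from the DB version has actually executed is to query information_schema on the target database; the PyMySQL connection details and the schema, table and column names are illustrative assumptions.

```python
import pymysql

def column_exists(conn, schema: str, table: str, column: str) -> bool:
    """True if an ALTER TABLE ... ADD COLUMN statement has taken effect."""
    sql = ("SELECT COUNT(*) FROM information_schema.COLUMNS "
           "WHERE TABLE_SCHEMA=%s AND TABLE_NAME=%s AND COLUMN_NAME=%s")
    with conn.cursor() as cur:
        cur.execute(sql, (schema, table, column))
        return cur.fetchone()[0] > 0

# conn = pymysql.connect(host="...", user="...", password="...", database="appdb")
# column_exists(conn, "appdb", "t_order", "new_flag")   # hypothetical names
```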
For business function verification, the following item may be checked (see the sketch after this list):
a. executing the verification SQL statements prepared before the version release, and checking whether the execution results equal the expected result values.
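A sketch of this check: each verification SQL statement prepared before the release is executed and its result compared with the expected value. The cursor interface matches the DB-API connection used above, and the sample statement is hypothetical.

```python
def run_verification_sql(conn, checks) -> list:
    """checks: iterable of (sql, expected) pairs prepared before the release;
    returns the checks whose actual result differs from the expectation."""
    failures = []
    for sql, expected in checks:
        with conn.cursor() as cur:
            cur.execute(sql)
            actual = cur.fetchone()[0]
        if actual != expected:
            failures.append((sql, expected, actual))
    return failures

# checks = [("SELECT COUNT(*) FROM t_route WHERE status = 'ON'", 3)]  # hypothetical
```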
For the monitoring performance index check, the following items may be checked:
a. based on a first algorithm, detecting whether stable indexes of the gray instances, such as CPU usage, IO and service success rate, show regular spike anomalies or overall step-up/step-down anomalies, where the first algorithm may be a Python toolkit for unsupervised anomaly detection;
b. based on a second algorithm, performing cluster analysis on the time-consumption indexes of the service interfaces of the gray instances and the non-gray instances, so as to detect whether the time consumption of key service indexes contains outliers relative to the non-released instances;
c. based on a Long Short-Term Memory (LSTM) prediction algorithm, predicting the transaction volume of the instance-level service interfaces and calculating the historical same-period growth rate, thereby generating a normal transaction-volume interval, and comparing it with the business transaction volume after the release, so as to detect whether the transaction volume of key businesses drops or rises abnormally.
Step 103, if the first execution result passes the verification processing, executing a second gray release step.
If the first execution result passes the verification processing, a second gray release step is executed, where the second gray release step is the gray release step following the first gray release step. The plurality of gray release steps arranged in the preset order in the gray release step sequence are executed and verified in turn, and the next gray release step is started only after the execution result of the previous gray release step has passed the verification processing.
Step 104, when all gray release steps in the gray release step sequence have been executed and the execution results have passed the verification processing, determining that the gray release verification corresponding to the version update of the subsystem is successful.
When all gray release steps in the gray release step sequence have been executed and the execution results have passed the verification processing, it is determined that the gray release verification corresponding to the version update of the subsystem is successful. If, before all gray release steps in the sequence have been executed, the execution result of some gray release step fails the verification processing, it is determined that the gray release verification corresponding to the version update of the subsystem fails.
In this embodiment, when a subsystem undergoes a version update, a first gray release step is executed and a first execution result of that step is determined; the first execution result is then verified, and only when it passes the verification processing is the next gray release step in the sequence executed; when all gray release steps have been executed and the execution results have passed the verification processing, the gray release verification corresponding to the version update is determined to be successful. Therefore, during gray release of a version, automatic verification can be performed promptly after each gray release step is completed, and problems can be discovered in time.
Fig. 3 is a schematic flowchart of database verification processing steps shown in the second embodiment of the present application, and fig. 4 is a schematic flowchart of another database verification processing step shown in the second embodiment of the present application. As shown in fig. 3 to 4, the database verification processing steps provided in this embodiment include:
step 201, reading the SQL statement set to be verified in the DB material package, and analyzing each SQL statement in the SQL statement set to be verified by using a preset lexical analyzer.
Specifically, the Antlr4 MySQL grammar and lexical files may be obtained and, optionally, grammar rules not yet adapted for some newer versions may be added, so as to generate the parsing, listener and interpretation files for the corresponding grammar files.
In this step, the SQL statement set to be verified in the database (DB) material package may be read, and each SQL statement in the set is parsed with the preset lexical analyzer, that is, the Antlr4 lexical analyzer, to generate a corresponding syntax tree. The object types of the first-layer structure of the syntax tree are then analyzed, and a statement containing a DdlStatementContext object is identified as a DDL statement; the second-layer structure of the syntax tree is analyzed to distinguish create-type statements from alter statements; and a third-layer analysis is performed on alter statements to further resolve the specific operation type of the target.
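The parse-and-classify flow can be sketched with the antlr4 Python runtime. The MySqlLexer / MySqlParser class and rule names below are those generated from the publicly available Antlr4 MySQL grammar and are assumptions here, not names taken from the application.

```python
from antlr4 import CommonTokenStream, InputStream
# Classes generated from the Antlr4 MySQL grammar (names are an assumption):
from MySqlLexer import MySqlLexer
from MySqlParser import MySqlParser

def parse_sql(sql: str):
    """Build the syntax tree for one SQL statement."""
    lexer = MySqlLexer(InputStream(sql))
    parser = MySqlParser(CommonTokenStream(lexer))
    return parser.sqlStatement()                 # root context of the statement

def classify_first_layer(tree) -> str:
    """First-layer classification: a subtree containing a DdlStatementContext
    node is treated as DDL, anything else as DML/other."""
    for child in tree.getChildren():
        if type(child).__name__.endswith("DdlStatementContext"):
            return "DDL"
    return "DML/other"
```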
Step 202, determining the statement type corresponding to each SQL statement according to the syntax tree corresponding to each SQL statement and the preset object classification condition.
In this step, the DML statements may be parsed into 5 types of statements, and each type of statement is verified with the corresponding method.
The DDL statements may be parsed into 11 types of statements, namely: create-new-table statements, drop-table statements, alter define-field statements, alter add-field statements, alter delete-field statements, alter modify-field statements, alter delete-index statements, alter add-index statements, alter add-primary-key statements, alter delete-primary-key statements and alter add-unique-key statements, where each type of statement may be verified with the corresponding method.
Step 203, obtaining the related first table name according to the statement type corresponding to the first SQL statement, and connecting the corresponding first database according to the first table name.
And acquiring a first table name according to a statement type corresponding to a first SQL statement, and connecting a corresponding first database according to the first table name, wherein the first SQL statement is any statement in the SQL statement set to be verified. If the statement with the statement type of the data definition language DDL is determined to be the first SQL statement, a first operation type corresponding to the first SQL statement is determined according to a preset classification rule, and the first operation type is used for determining the first operation statement corresponding to the first SQL statement in the first database.
Specifically, the DDL statements may be verified as follows:
a. for a create-new-table statement, by checking whether the table structure object exists;
b. for a drop-table statement, by checking the existence of the table structure object;
c. for an alter define-field statement, by checking whether the field exists in the table structure object and whether its attributes meet the requirement;
d. for an alter add-field statement, by checking whether the field exists in the table structure object and whether its attributes meet the requirement;
e. for an alter delete-field statement, by checking the existence of the field in the table structure object;
f. for an alter modify-field statement, by checking whether the field exists in the table structure object and whether its attributes meet the requirement;
g. for an alter delete-index statement, by checking the existence of the index in the table structure object;
h. for an alter add-index statement, by checking whether the index exists in the table structure object and whether the indexed fields are correct;
i. for an alter add-primary-key statement, by checking whether the index exists in the table structure object and whether the indexed fields are correct;
j. for an alter delete-primary-key statement, by checking the existence of the primary-key index in the table structure object;
k. for an alter add-unique-key statement, by checking whether the unique index exists in the table structure object.
Finally, an inspection report may be produced according to the verification results.
Step 204, obtaining a first operation statement corresponding to the first database, and analyzing the first operation statement according to a preset lexical analyzer to generate a first syntax tree.
In this step, a first operation statement corresponding to the first database is obtained, and the first operation statement is analyzed according to a preset lexical analyzer to generate a first syntax tree. Specifically, the first operation statement may be parsed by an Antlr4 lexical analyzer to generate a corresponding first syntax tree.
Step 205, determining and checking the syntax tree to be verified and the first syntax tree according to a preset syntax tree statement type matching rule, so as to determine the effective state of the table and the field attribute in the corresponding database.
In this step, the syntax tree to be verified and the first syntax tree may be checked against each other according to a preset syntax-tree statement type matching rule, so as to determine whether the table and field attributes in the corresponding database have taken effect, where the syntax tree to be verified is the syntax tree generated by parsing the first SQL statement with the preset lexical analyzer.
With continued reference to fig. 4, before obtaining the first table name according to the statement type corresponding to the first SQL statement, the SQL statements in the SQL statement set may be sorted according to the reading order of each SQL statement in the SQL statement set to be verified and the corresponding table name classification, so as to generate a first table structure object. Then, starting from the last SQL statement of the first table structure object, carrying out database verification in a reverse order; if the verification result is successful, deleting the currently verified SQL statement from the first table structure object, rolling back to the previous SQL statement to continue database verification, and outputting an inspection report after the first table structure object completes database verification; and if the verification result is failure, outputting a check report.
In a possible implementation manner, after obtaining the DDL syntax tree, different table name obtaining methods are executed for DDL statements of different types, to obtain a table name related to the DDL, then the DB database is connected, and a table building statement of the table is obtained, the table building statement generates the syntax tree using a DDL parsing function, and then a Python object is generated through parsing, where the Python object is used to record a parsing result of the syntax tree.
Then, according to the reading order of the DDL statements and the table names they involve, the DDL statements are classified by table name and stored in material order. The DDL statement check is then performed in batches by database and by table: the production DB table structure object of a single table and the list of related DDL syntax trees are read, the DDL statements are verified one by one in reverse order, and different checking rules are applied according to the DDL syntax tree type. Specifically, the DDL syntax tree may be parsed to obtain the modified field names and field attributes, which are then compared with the real table structure object to check whether the field names exist and the field attributes are consistent. If the change has taken effect on the real production DB table structure object, a rollback operation is performed on that object, for example by deleting the field; if the field does not exist, the discrepancy is recorded. The next DDL statement of the table is then checked. Finally, the inspection records are collated and an inspection report is generated and reported.
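The reverse-order check with rollback of the in-memory table-structure object might look as follows; the TableStructure shape and the two DDL kinds handled are a simplification of the eleven statement types listed above, and the dictionary layout of the parsed DDL entries is a hypothetical one.

```python
from dataclasses import dataclass, field

@dataclass
class TableStructure:
    name: str
    columns: dict = field(default_factory=dict)   # column name -> attribute string

def verify_ddl_reverse(table: TableStructure, ddl_list: list) -> list:
    """ddl_list holds parsed DDL descriptions in material order, e.g.
    {"op": "add_column", "column": "new_flag", "attrs": "tinyint"} (hypothetical).
    Statements are verified from last to first; after each successful check the
    in-memory object is rolled back before the previous statement is checked."""
    report = []
    for ddl in reversed(ddl_list):
        if ddl["op"] == "add_column":
            ok = table.columns.get(ddl["column"]) == ddl["attrs"]
            if ok:
                del table.columns[ddl["column"]]          # roll the object back
        elif ddl["op"] == "drop_column":
            ok = ddl["column"] not in table.columns
            if ok:
                table.columns[ddl["column"]] = ddl.get("attrs", "")
        else:
            ok = False                                     # kinds not covered here
        report.append({"ddl": ddl, "passed": ok})
        if not ok:
            break                                          # failure: stop and report
    return report
```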
In the embodiment, the DB material package is reversely analyzed based on the preset lexical analyzer, and the DDL verification SQL and the DML verification SQL are automatically and reversely generated, so that the labor input of manually generating the verification scheme is reduced, and the accuracy of the scheme is improved.
In addition, with the deepening of distributed architectures and the rise of micro-service applications, the scale of subsystems and the complexity of system calls have increased, so the cost of application operation and maintenance and of version release verification rises rapidly. Operation and maintenance personnel cannot promptly and accurately discover quality problems from the massive monitoring data after a version change, nor locate the root cause of a fault. Therefore, an embodiment of the present application provides a multivariate operation and maintenance index anomaly detection module based on a hybrid algorithm, which is used to monitor whether each index of interest in the gray instances is abnormal.
Instance-level operation and maintenance index monitoring data is typical time-series data, which exhibits periodicity, regular spikes, overall step-ups and step-downs, off-peak periods and other characteristics. For version release verification, operation and maintenance personnel need to pay attention not only to the host performance indexes of the gray instances, but also to the influence of the release on business transaction volume, interface time consumption, success rate and the like.
For multivariable operation and maintenance index prediction, index monitoring data are various, the data scale is large, the data flow rate is high, and an abnormality detection model is required to process real-time index data so as to accurately position abnormal time. In daily operation and maintenance, for example-level operation and maintenance index monitoring data, it is unreasonable to judge the whole multidimensional data as abnormal only by a certain dimension. The existing anomaly detection algorithm mainly comprises the following steps:
1. and longitudinally separating the multivariate time series data, and researching and finding abnormal sequence patterns through a correlation algorithm. The multivariate sequence is used as a whole to be input into a deep learning algorithm such as a differential self-encoder or a generation countermeasure network, an abnormal time sequence model is mined, a detection model only outputs an abnormal result, the mining abnormal mode has weak interpretability, and a fault root index cannot be explicitly positioned according to the abnormality.
2. Separate the multivariable time series data transversely, convert it into several single-dimensional time series, and detect anomalies using algorithms from the single-dimensional time series field.
Fig. 5 is a schematic flowchart of a monitoring performance index checking step shown in the third embodiment of the present application, and fig. 6 is a schematic flowchart of another monitoring performance index checking step shown in the third embodiment of the present application. As shown in fig. 5 to fig. 6, the monitoring performance index checking step provided in this embodiment includes:
Step 301, obtaining example-level multi-operation-dimension index data in the first execution result, and determining an index data type of the example-level multi-operation-dimension index data according to the data characteristics.
Step 302, determining a corresponding anomaly detection algorithm according to the index data type, and verifying the instance-level multi-operation-dimension index data according to the anomaly detection algorithm.
The multivariable time series data can be separated transversely, the multivariate time series indexes are detected in parallel, each matched to an anomaly detection algorithm according to its index type, and the anomaly results are fed back, so that operation and maintenance personnel can quickly locate the abnormal index type and time period. Specifically, for multi-dimensional operation and maintenance index data of different representation types, a transversely-separated detection scheme can be used: a single-dimensional anomaly detector is applied to stable, trend-type, and periodic index data, allowing operation and maintenance personnel to quickly identify abnormal indexes. Since operation and maintenance personnel need to find system anomalies in time during the gray release process, mixing multiple algorithms avoids modeling with a multi-dimensional deep learning algorithm and shortens anomaly detection time. A corresponding anomaly detection algorithm is then determined according to the index data type, and the instance-level multi-operation-dimension index data is verified according to that algorithm.
Specifically, if the determined index data type is stable data, the determined anomaly detection algorithm is the first algorithm; if the determined index data type is trend-type data, the determined anomaly detection algorithm is the second algorithm; and if the determined index data type is periodic data, the determined anomaly detection algorithm is an LSTM-based prediction algorithm.
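A minimal dispatch sketch in Python of this routing by index data type follows; the three detector function names are placeholders introduced here only for illustration of the algorithms described below.

    # Sketch of routing instance-level index data to a detector by index data type;
    # the three detector functions are placeholders for the algorithms described below.
    def verify_metric(series, data_type):
        if data_type == "stable":
            return sliding_window_boxplot_detect(series)   # sliding windows + box-plot range
        if data_type == "trend":
            return dtw_cluster_outlier_detect(series)      # DTW clustering + dynamic threshold
        if data_type == "periodic":
            return lstm_growth_range_detect(series)        # LSTM prediction + growth range
        raise ValueError("unknown index data type: %s" % data_type)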
Step 3031, if the determined index data type is stable data, the determined abnormality detection algorithm is the first algorithm.
For a stable operation and maintenance index, the value is steady and the fluctuation amplitude is small; common anomaly types are the burr (glitch) anomaly and the integral lifting or descending anomaly, that is, whether the current moment is normal depends on whether the index value is consistent with the recent past. Fig. 7 is a diagram of abnormal data based on the first algorithm according to the fourth embodiment of the present application. As shown in fig. 7, a sudden short-lived increase or decrease in the value is called a glitch, and a permanent change in the value becomes an integral lift or drop. After the version is released, operation and maintenance personnel need to pay attention to whether, in the new version of the application, the CPU usage is excessively high, the IO occupancy is too large, or the success rate jitters abnormally.
To safeguard system availability and performance, threshold monitoring is typically set based on empirical values. With such conventional monitoring, operation and maintenance personnel perceive nothing before the indicator reaches the critical alarm value and cannot become aware of a potential anomaly in advance. Therefore, in this step, common anomalies of stable indexes can be detected based on the first algorithm.
Step 3032, if the determined index data type is trend data, the determined anomaly detection algorithm is a second algorithm.
In this step, since service interface time-consumption indexes are often aggregated at the level of the data center (IDC), application instances deployed in the same machine room as the database show lower overall request-processing time, and the data present a multi-cluster-center distribution. After the version is released, operation and maintenance personnel need to pay attention to whether the gray scale instance is an outlier compared with the other instances in the same IDC. Conventional monitoring often uses a fixed threshold, but an anomaly detection threshold then has to be set individually for each machine room, maintenance cost is high, and the threshold must be reconfigured whenever the DB is switched or an interface is replaced. Therefore, the second algorithm is used to calculate the clustering center of the instances in each IDC, and the distance between the clustering center and the farthest time series in the cluster is used as the anomaly threshold. The distance between the time-consumption sequence of the gray scale instance interface and the cluster center of the local machine room is then calculated through DTW, and if that distance is greater than the anomaly threshold, the sequence is judged to be an outlier.
Step 3033, if the determined index data type is periodic data, the determined anomaly detection algorithm is an LSTM-based prediction algorithm.
In this step, because the business transaction amount index shows strong periodicity with business peaks and troughs, the waveform of the index values differs under different actual business conditions. If anomaly detection methods such as contemporaneous-ratio comparison or a fixed threshold are used, the misjudgment and miss rates are high. Therefore, this step uses business transaction amount prediction anomaly detection based on the LSTM long short-term prediction algorithm: the LSTM algorithm learns and generates a prediction model of the business transaction amount, yielding a predicted transaction amount R at each judgment moment after the version is released. With a dynamic threshold setting method, the current index data is compared with historical contemporaneous data using days as the periodic unit, the average growth rate Y% of the historical contemporaneous data is calculated, data within the contemporaneous growth range (R − R·Y%, R + R·Y%) are determined to be normal, and the rest are determined to be abnormal values.
In this embodiment, after the version is released, the multivariate operation and maintenance index anomaly detection method based on multi-algorithm mixing is used to perform anomalous fluctuation detection and outlier detection on the example-level key monitoring indexes.
Fig. 8 is a schematic flow chart based on the third algorithm in the fourth embodiment of the present application. As shown in fig. 8, the third algorithm provided in this embodiment includes:
Step 401, two sliding windows are used to traverse the example-level multi-operation-dimension index data synchronously, and feature statistics of the two sliding windows are determined.
In this step, two sliding windows may be used to traverse the example-level multi-operation-dimension index data synchronously, and the feature statistics of the two sliding windows are determined, where the feature statistics are used to characterize the central tendency feature of the example-level multi-operation-dimension index data.
Specifically, the third algorithm provides an anomaly detection module based on sliding windows and outlier discovery, and different components can be combined into an anomaly detection model for different scenarios and anomaly types. The third algorithm moves multiple sliding windows simultaneously through the raw data, keeping track of the differences between the raw sequence values and the window mean or median values. The feature statistics of the sliding windows may be window means or medians.
Step 402, differencing the example-level multi-operation-dimension index data with the feature statistics of the two sliding windows respectively, and generating a feature sequence.
The data at the judgment moment can be differenced with the statistics of the front and back reference windows, and a feature sequence s1 is obtained through reconstruction. The sequence is reconstructed using a synchronous sliding window method: the aggregation operation is performed on the front and rear windows, and the reconstructed sequence characterizes the variation of the original sequence. This avoids the insufficient sequence characterization of single-window sliding; the average value is used in the aggregation operation so that value changes within the window are counted as a whole, reducing the influence of any single outlier.
Step 403, checking the feature sequence by using a box-plot outlier detection algorithm, and determining that the example-level multi-operation-dimension index data is abnormal when its deviation from the statistics of the two sliding windows is greater than a preset threshold.
In particular, the feature sequence s1 is tested with a box-plot outlier detection method, and when the statistics in the left and right windows differ significantly, an abnormal change is judged to have occurred at that moment. Fig. 9 is a schematic diagram of a time series of host indicators according to the fourth embodiment of the present application. As shown in fig. 9, in this step, to take the entirety of the original sequence into account, the average value is used as the statistical variable for the aggregation operation and the anomaly determination on the original windows. The specific determination method for each type of anomaly is as follows, where μ_w1 is the front-window mean, μ_w2 is the back-window mean, v_t1 is the index value at the judgment moment, Q1 is the first quartile, Q3 is the third quartile, and IQR is the interquartile range, i.e. the difference between the third quartile and the first quartile. For s1, the determination can be made according to the following formula:
s1 is judged abnormal when it falls outside the normal range [Q1 − c·IQR, Q3 + c·IQR].
With continued reference to figs. 7 and 9, different anomaly expressions may be constructed for the following types of data anomalies:
Burr (glitch) anomaly: if the front-window and back-window means of the abnormal point meet a preset normal condition, the difference between the two window means is within a preset range, and the increase of the abnormal point's index data is greater than a preset amplitude and greater than both window means, then the example-level multi-operation-dimension index data corresponding to the abnormal point is determined to belong to a burr anomaly. That is, the front-window and back-window means of the abnormal point are normal and close in value, while the abnormal point's index data rises sharply above both window means; the anomaly expression is as follows.
(|μ_w1 − μ_w2| is within the preset range) && (v_t1 > μ_w1) && (v_t1 > μ_w2)
Integral lifting anomaly: if the front-window mean of the abnormal point meets a preset normal condition and is smaller than the back-window mean, and the increase of the abnormal point's index data is greater than a preset amplitude and greater than the front-window mean, then the example-level multi-operation-dimension index data corresponding to the abnormal point is determined to belong to an integral lifting anomaly. That is, the front-window mean of the abnormal point is normal and smaller than the back-window mean, and the abnormal point's index data rises sharply above the front-window mean.
Integral descending anomaly: if the front-window mean of the abnormal point meets a preset normal condition and is greater than the back-window mean, and the change of the abnormal point's index data exceeds the preset amplitude while its value is smaller than the front-window mean, then the example-level multi-operation-dimension index data corresponding to the abnormal point is determined to belong to an integral descending anomaly. That is, the front-window mean of the abnormal point is normal and greater than the back-window mean, and the abnormal point's index data drops sharply below the front-window mean.
Specifically, the abnormal expression is as follows:
(μ_w1 < μ_w2 && μ_w1 < v_t1) || (μ_w1 > μ_w2 && μ_w2 < v_t1)
Anomaly determination of the reconstructed sequence: the statistics of the front and back windows are differenced with the data at the judgment moment, the feature sequence is obtained by reconstruction, and its normal range is determined; if the difference between the current data and the front/back window statistics is not within the normal range, the feature sequence is determined to be abnormal. In particular, the normal range of the reconstructed sequence s1 is calculated as [Q1 − c·IQR, Q3 + c·IQR], and if the difference between the current value and the window statistic is not within this range, it is judged abnormal. According to the different types of anomaly expressions, the anomaly detection model parameters, namely the window parameter W and the limit parameter c, are adjusted in the first algorithm; the normal interval is calculated from the quartiles based on the box plot, and different limit parameters c are used.
The length of the sliding window controls the detection time scale. To detect burr anomalies in the instance-level multi-operation-dimension index data, the length of the left window is set larger than that of the right window so as to increase the amount of data collected before the abnormal point occurs, where the plurality of sliding windows comprise the left window and the right window. To detect integral lifting or integral descending anomalies, the lengths of the left and right windows are set to meet the length requirement so that stable data of the preset duration can be captured. Specifically, for a glitch anomaly the left window is longer than the right window, to capture representative information of the recent past; for integral lift or drop anomalies, both the left and right windows should be set long enough to capture the long-term steady state. Regarding the front and back window lengths, the burr type requires a sufficient number of values collected before the abnormal point, so the left window is set larger than the right window; the lifting anomaly needs to exclude interference from burr anomalies, so the left and right windows need to be large enough, and the interference of burr outliers is weakened through mean aggregation.
In addition, in one implementation of the first algorithm, for glitch anomalies the left sliding window w_L may be set to 10, the right sliding window w_R to 5, and the boundary parameter c of the normal range of the historical interquartile range to 3, i.e. the normal range of the reconstructed sequence s1 is [Q1 − 3·IQR, Q3 + 3·IQR]. Setting a larger front window w_L and a smaller limit parameter c strengthens the algorithm's attention to the pre-release host indexes and its sensitivity for capturing anomalies. For integral lifting and descending anomalies, both the front window and the back window can be set to 10, and the limit parameter c of the normal range of the historical interquartile range to 6, i.e. the normal range of the reconstructed sequence s1 is [Q1 − 6·IQR, Q3 + 6·IQR]. Using the same front and back window size and a larger limit parameter c focuses on index value changes before and after the decision point, and the enlarged normal range avoids interference from glitches.
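The following Python sketch illustrates the two-sliding-window detector under the example parameters above. Since the reconstruction formula is given only as an image in the original filing, the feature used here (the summed absolute deviation from both window means) is an assumption made for illustration.

    import numpy as np

    # Two synchronous sliding windows over a stable index series; the parameters follow the
    # example above (glitch: w_left=10, w_right=5, c=3).  The feature formula is an assumption.
    def sliding_window_boxplot_detect(series, w_left=10, w_right=5, c=3.0):
        series = np.asarray(series, dtype=float)
        feature_seq, positions = [], []
        for t in range(w_left, len(series) - w_right):
            mu_front = series[t - w_left:t].mean()             # front (left) window mean
            mu_back = series[t + 1:t + 1 + w_right].mean()     # back (right) window mean
            feature_seq.append(abs(series[t] - mu_front) + abs(series[t] - mu_back))
            positions.append(t)
        feature_seq = np.asarray(feature_seq)
        q1, q3 = np.percentile(feature_seq, [25, 75])
        iqr = q3 - q1
        lo, hi = q1 - c * iqr, q3 + c * iqr                    # normal range of the feature sequence
        return [pos for pos, v in zip(positions, feature_seq) if v < lo or v > hi]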
Fig. 10 is a schematic flowchart based on the fourth algorithm shown in the fifth embodiment of the present application. As shown in fig. 10, the fourth algorithm provided in this embodiment includes:
Step 501, selecting K initialized index data as initial clustering centers.
In this step, the fourth algorithm may be an iterative solution density cluster analysis algorithm.
Step a: select k initialized samples as the initial clustering centers, a = {a1, a2, …, ak}. In this step, the number of IDCs is used as the number of clusters to generate the corresponding cluster centers, and the time-consumption index sequence of one instance is randomly selected in each IDC as an initial cluster center.
Step 502, assigning each item of the example-level multi-operation-dimension index data to the class corresponding to the nearest initial clustering center, according to its distance from the initial clustering centers.
Each item of the example-level multi-operation-dimension index data is assigned to the class corresponding to the initial clustering center with the minimum distance according to its distance from the initial clustering centers, where DTW is selected to calculate the cumulative distance as the distance measure.
Step b: example time-consuming index sequence X for each IDCiAnd calculating the distance from the cluster center to each cluster center and dividing the cluster center into the classes corresponding to the cluster centers with the minimum distance, wherein the cluster centers are the cluster centers. In the step, DTW is selected to calculate the accumulated distance as the distance measure for comparing the similarity of the time series. It should be noted that the way of calculating the clustering distance based on DTW in this step can eliminate the problem of inconsistent scale that may occur in the historical time series data, and the time series index sequence is aligned in order, thereby avoiding using the euclidean distance alone as the measure of the degree of acquaintance.
Step 503, re-determining the cluster center for the class corresponding to the initial cluster center, so as to be used as a new cluster center for clustering in the subsequent steps.
Step c: for each cluster core ajRecalculating its cluster center
a_j = (1 / |c_j|) · Σ_{x ∈ c_j} x, where c_j is the set of sequences assigned to the j-th cluster.
Then, steps b and c are repeated until the termination condition is reached, that is, until the set number of iterations has been completed or the change in error reaches its minimum, i.e. the iteration result meets the preset condition.
Step 504, repeatedly assigning each item of the example-level multi-operation-dimension index data to the class corresponding to the nearest cluster center according to its distance from the current cluster centers, and updating the cluster center of the corresponding class, until the number of iterations is completed or the iteration result meets the preset condition, so as to determine a cluster center sequence.
Step 505, calculating the cumulative distance between the current clustering center of the clustering center sequence and the farthest data in the corresponding class, to serve as the abnormal index threshold of the instance-level multi-operation-dimension index data.
The dynamic time warping algorithm DTW used in this step is a common algorithm for comparing the similarity of time series. Based on the idea of dynamic programming, it finds the best warping path to stretch or compress the time series and calculates the cumulative distance to obtain the similarity of the two time series; taking this similarity as the distance measure avoids the problem that time series of unequal length cannot be compared simply with the Euclidean distance.
For time series T = {q1, q2, …, qn} and R = {c1, c2, …, cm}, with lengths n and m respectively, the process of calculating the similarity between T and R by applying DTW is as follows:
Construct a matrix D of size n×m with matrix elements D_ij = dist(q_i, c_j), where dist is the Euclidean distance function;
Using a dynamic programming search method, find all the warping search paths from D_11 to D_nm in the matrix D, denoted W. For a point (i, j) on a search path in W, it follows from monotonicity and continuity that the next point passed through can only be one of the following three cases: (i+1, j), (i+1, j+1), or (i, j+1).
Then, by

    DTW(T, R) = min{ (Σ_{k=1}^{K} w_k) / K }

the path with the minimum warping cost between the time series T and R is obtained, where w_k is the k-th element of a search path and the Euclidean distance is taken as the measure of each path element; the K in the denominator mainly compensates for warping paths of different lengths. For the shortest path from D_11 to D_nm in the matrix D, a cumulative distance γ is calculated, where the cumulative distance γ(i, j) is the Euclidean distance dist(q_i, c_j) of the points q_i and c_j plus the minimum cumulative distance required to reach that point, i.e. γ(i, j) = dist(q_i, c_j) + min{γ(i−1, j), γ(i−1, j−1), γ(i, j−1)}, which is taken as the similarity measure of the time series T and R.
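A plain dynamic-programming sketch of the DTW cumulative distance described above, for scalar-valued index series:

    import numpy as np

    # DTW cumulative distance: gamma(i, j) = dist(q_i, c_j) + min(gamma(i-1, j),
    # gamma(i-1, j-1), gamma(i, j-1)); scalar points, Euclidean (absolute) distance.
    def dtw_distance(t_seq, r_seq):
        n, m = len(t_seq), len(r_seq)
        gamma = np.full((n + 1, m + 1), np.inf)
        gamma[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(t_seq[i - 1] - r_seq[j - 1])
                gamma[i, j] = d + min(gamma[i - 1, j], gamma[i - 1, j - 1], gamma[i, j - 1])
        return gamma[n, m]                                 # similarity measure of the two series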
It should be noted that, in this step, the DTW cumulative distance is used as the distance and similarity measure between the operation and maintenance index time series, so that the scales of the sequences can be aligned and the temporal continuity of the index sequences is taken into account, which is more accurate than the Euclidean distance.
Step 506, if the distance between the instance-level multi-operation-dimension index data and the corresponding cluster center is greater than the abnormal index threshold, determining that the instance-level multi-operation-dimension index data is abnormal.
In this step, the cluster center sequences are obtained based on the second algorithm, and each cluster center sequence should be representative of the time-consumption index sequences of its IDC. The DTW cumulative distance between each cluster center sequence and the farthest sequence in its cluster is calculated as the abnormal threshold distance; the DTW cumulative distance between the gray scale instance's service time-consumption index sequence and the IDC cluster center sequence is then calculated, and if it is greater than the abnormal threshold distance, the time consumption in the example-level multi-operation-dimension index data is judged to be abnormal.
In this embodiment, the cluster center of each IDC's time-consumption index time series data is calculated, the distance between the cluster center and the farthest time series sample in the cluster is used as the abnormal threshold, and the time-consumption abnormal threshold of each IDC is thus set automatically as a dynamic threshold, which reduces the maintenance cost of fixed thresholds.
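Building on the dtw_distance sketch above, the per-IDC dynamic threshold can be sketched as follows; recomputing each center as the cluster medoid is a simplification assumed here for illustration.

    import random

    # Simplified clustering of per-instance time-consumption series with DTW as the
    # distance measure; dtw_distance is the function from the sketch above.
    def cluster_series(series_list, k, iterations=10):
        centers = random.sample(series_list, k)            # one random series per cluster
        for _ in range(iterations):
            clusters = [[] for _ in range(k)]
            for s in series_list:
                j = min(range(k), key=lambda i: dtw_distance(s, centers[i]))
                clusters[j].append(s)
            for j, members in enumerate(clusters):          # re-determine each center (medoid)
                if members:
                    centers[j] = min(members, key=lambda s: sum(dtw_distance(s, o) for o in members))
        return centers, clusters

    def gray_instance_is_outlier(members, center, gray_series):
        # Abnormal threshold: DTW distance from the center to the farthest series in the cluster.
        threshold = max(dtw_distance(s, center) for s in members)
        return dtw_distance(gray_series, center) > threshold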
Fig. 11 is a flow chart of the LSTM-based prediction algorithm according to the sixth embodiment of the present application. As shown in fig. 11, the prediction algorithm of LSTM provided in this embodiment includes:
Step 601, generating a prediction model for prediction of periodic data based on the LSTM algorithm and learning from historical multi-operation-dimension index data.
LSTM, the long short-term memory network, is a recurrent neural network that controls the accumulation and transmission of temporal information by introducing gate functions; its memory cells select and forget parts of the information, mining the temporal variation pattern in a time series.
Fig. 12 is a schematic diagram of an LSTM network structure according to the sixth embodiment of the present application. As shown in fig. 12, the inputs at time t are the current network input value X_t, the LSTM output value h_{t-1} of the previous moment, and the memory cell state carried over from the previous moment. C is the core memory cell of the LSTM network, which mainly carries the recurrent information linearly and outputs information nonlinearly to the external hidden-layer state h_t. The forget gate determines how much information of the previous cell state C_{t-1} needs to be forgotten and passed to C_t; the input gate determines how much of the current network input X_t is saved into the cell state C_t; and the output gate determines how much of the cell state C_t is output as the current LSTM output value h_t.
For the construction of the LSTM memory cell network, a gate function can be implemented as a fully connected layer that outputs a real-valued vector between 0 and 1, representing the proportion of information allowed to pass. The gate function may be set to g(x) = σ(wx + b), where σ is the Logistic function with output interval [0, 1], w is the weight vector of the gate function, and b is a bias term. After processing by the gate functions, the memory cell outputs are as follows:
Input gate: i_t = σ(W_xi·x_t + W_hi·h_{t-1} + b_i);
Forget gate: f_t = σ(W_xf·x_t + W_hf·h_{t-1} + b_f);
Output gate: o_t = σ(W_xo·x_t + W_ho·h_{t-1} + b_o);
Candidate cell state of the current input: C'_t = tanh(W_xc·x_t + W_hc·h_{t-1} + b_c);
Cell state at the current moment: C_t = f_t ⊙ C_{t-1} + i_t ⊙ C'_t;
Final LSTM output: h_t = o_t ⊙ tanh(C_t).
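For reference, a single-step NumPy sketch of the gate equations above; the weight matrices and bias vectors in the parameter dictionary p are assumed to be pre-trained and are named here only for illustration.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # One LSTM step implementing the gate equations above; the weight matrices W_* and
    # bias vectors b_* in the parameter dictionary p are assumed to be pre-trained.
    def lstm_step(x_t, h_prev, c_prev, p):
        i_t = sigmoid(p["W_xi"] @ x_t + p["W_hi"] @ h_prev + p["b_i"])    # input gate
        f_t = sigmoid(p["W_xf"] @ x_t + p["W_hf"] @ h_prev + p["b_f"])    # forget gate
        o_t = sigmoid(p["W_xo"] @ x_t + p["W_ho"] @ h_prev + p["b_o"])    # output gate
        c_hat = np.tanh(p["W_xc"] @ x_t + p["W_hc"] @ h_prev + p["b_c"])  # candidate cell state
        c_t = f_t * c_prev + i_t * c_hat                                   # current cell state
        h_t = o_t * np.tanh(c_t)                                           # LSTM output
        return h_t, c_t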
For the implementation of the LSTM prediction algorithm, in this step an LSTM prediction model is built with Keras, a neural network framework commonly used with Python; the model is tuned only through the activation function, the number of LSTM layers, and the input and output variable dimensions, while the computational details of the underlying model are shielded. The specific construction process is as follows:
Data standardization: the data is standardized to speed up the loss decrease and convergence of the training model. The Z-score standardization method is used, so that the standardized sequence follows the standard normal distribution with mean 0 and standard deviation 1. The standardization formula is as follows, where μ is the mean of the original sequence and σ is its standard deviation:
z = (x − μ) / σ
Activation function selection: the activation function of the LSTM module is determined to be tanh, and the activation function of the fully connected artificial neural network that receives the LSTM output is determined to be linear.
Model training parameters: the dropout rate of network nodes is set to 0.2 to prevent overfitting of the model; the loss is calculated as the mean square error; and the iterative update method for the network weight parameters is the RMSprop algorithm commonly used for RNNs.
Training hyperparameters: the number of epochs is set to 10 and the batch size to 100, where batch_size is the amount of data randomly fed into the network at each training step and is set to 100 to ensure the convergence speed of the model, and epoch is the number of times the complete data sample is trained through the network, with 10 chosen to guarantee parameter training.
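A minimal Keras sketch under the settings listed above (tanh LSTM, linear fully connected output, dropout 0.2, MSE loss, RMSprop, epochs 10, batch size 100); the number of LSTM units and the input shape are assumptions made for illustration.

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense, Dropout

    # Minimal model construction with the settings above; x_train has shape
    # (samples, look_back, 1) and is Z-score standardized, z = (x - mu) / sigma.
    def build_and_train(x_train, y_train, look_back, units=64):
        model = Sequential()
        model.add(LSTM(units, activation="tanh", input_shape=(look_back, 1)))
        model.add(Dropout(0.2))                      # node dropout rate 0.2 against overfitting
        model.add(Dense(1, activation="linear"))     # fully connected layer receiving the LSTM output
        model.compile(loss="mean_squared_error", optimizer="rmsprop")
        model.fit(x_train, y_train, epochs=10, batch_size=100, verbose=0)
        return model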
Step 602, determining a predicted value at each moment according to the prediction model, and determining a contemporaneous growth range according to the predicted value and the contemporaneous average growth rate.
Step 603, if the example-level multi-operation-dimension index data exceeds the contemporaneous increase range, determining that the example-level multi-operation-dimension index data is abnormal.
In steps 602 to 603, when the example-level multi-operation-dimension index data is a business transaction amount index, the periodicity is strong, there are business peaks and troughs, and the waveform of the index values differs under different actual business conditions. Business transaction amount prediction anomaly detection based on the LSTM long short-term prediction algorithm is performed: the LSTM algorithm learns and generates a prediction model of the business transaction amount, yielding a predicted transaction amount R at each judgment moment after the version is released. With a dynamic threshold setting method, the current index data is compared with historical contemporaneous data using days as the periodic unit, the average growth rate Y% of the historical contemporaneous data is calculated, data within the contemporaneous growth range (R − R·Y%, R + R·Y%) are determined to be normal, and the rest are determined to be abnormal values.
If a contemporaneous-ratio anomaly detection threshold alone is used for periodic index data, the normal range of the index cannot be calculated effectively across business peaks and troughs. In this implementation, the normal interval of the instance-level business transaction amount at the gray scale moment is calculated by combining the prediction model with a statistical contemporaneous-ratio calculation: the LSTM predicted value serves as the reference, and the average contemporaneous growth rate forms the floating interval above and below it, giving the upper and lower bounds of the normal interval, which reduces false alarms for periodically varying indexes such as the business transaction amount.
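The contemporaneous growth range check can be sketched as follows, where r is the LSTM predicted value and y_pct the average contemporaneous growth rate in percent:

    # Dynamic-threshold check: normal range is (R - R*Y%, R + R*Y%) around the predicted value.
    def in_growth_range(actual, r, y_pct):
        lower = r - r * y_pct / 100.0
        upper = r + r * y_pct / 100.0
        return lower <= actual <= upper              # False means the transaction amount is abnormal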
Fig. 13 is a schematic flowchart of a gray scale distribution verification system according to a seventh embodiment of the present application. As shown in fig. 13, the gray scale distribution verification system 700 provided in this embodiment includes:
the release module 701 is configured to execute a first gray scale release step when version update is performed on a subsystem, and determine a first execution result after the first gray scale release step is executed;
a processing module 702, configured to perform verification processing on the first execution result, where the verification processing includes at least one of application verification, database verification, business function verification, and monitoring performance index check, where the application verification is used to verify an application configuration result in the first execution result, the database verification is used to verify a database operation statement in the first execution result, the business function verification is used to verify a Structured Query Language (SQL) statement execution result in the first execution result, and the monitoring performance index check is used to check instance level index data in the first execution result;
the processing module 702 is further configured to execute a second gray scale issuing step when the first execution result passes the verification processing, where the second gray scale issuing step is a next gray scale issuing step of the first gray scale issuing step;
the processing module 702 is further configured to determine that the verification of the gray scale distribution corresponding to the version update of the subsystem is successful when all the gray scale distribution steps in the sequence of the gray scale distribution steps are completed and the execution result passes the verification process.
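A purely illustrative sketch of the step-by-step flow carried out by the release and processing modules; execute and verify are placeholders for the gray scale step execution and the verification processing described above.

    # Sketch of the gray release step sequence; execute() and verify() are placeholders
    # for the release step execution and the verification processing described above.
    def run_gray_release(steps, execute, verify):
        for step in steps:
            result = execute(step)                   # e.g. the first gray scale issuing step
            if not verify(result):                   # application / database / business / metric checks
                return False                         # stop: do not run the next gray scale step
        return True                                  # all steps executed and verified successfully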
In one possible design, the processing module 702 is specifically configured to:
reading an SQL statement set to be verified in a database DB material package, and analyzing each SQL statement in the SQL statement set to be verified by using a preset lexical analyzer to generate a corresponding syntax tree;
determining a statement type corresponding to each SQL statement according to a syntax tree corresponding to each SQL statement and a preset object classification condition;
acquiring a first table name according to a statement type corresponding to a first SQL statement, and connecting a corresponding first database according to the first table name, wherein the first SQL statement is any statement in the SQL statement set to be verified;
acquiring a first operation statement corresponding to the first database, and analyzing the first operation statement according to the preset lexical analyzer to generate a first syntax tree;
and determining and checking a syntax tree to be verified and the first syntax tree according to a preset syntax tree statement type matching rule so as to determine the effective state of table and field attributes in a corresponding database, wherein the syntax tree to be verified is a syntax tree generated by analyzing the first SQL statement by using the preset lexical analyzer.
In one possible design, the processing module 702 is specifically configured to:
determining the statement with the statement type of Data Definition Language (DDL) as the first SQL statement;
and determining a first operation type corresponding to the first SQL statement according to a preset classification rule, wherein the first operation type is used for determining the first operation statement corresponding to the first SQL statement in the first database.
In a possible design, before the obtaining the first table name according to the statement type corresponding to the first SQL statement, the method further includes:
sorting the SQL sentences in the SQL sentence set to be verified according to the reading sequence of each SQL sentence in the SQL sentence set to be verified and the corresponding table name classification to generate a first table structure object;
starting from the last SQL statement of the first table structure object, carrying out the database verification in a reverse order; if the verification result is successful, deleting the currently verified SQL statement from the first table structure object, rolling back to the previous SQL statement to continue the database verification, and outputting an inspection report after the first table structure object completes the database verification; and if the verification result is failure, outputting a check report.
In one possible design, the processing module 702 is specifically configured to:
acquiring instance-level multi-operation-dimension index data in a first execution result, and determining an index data type of the instance-level multi-operation-dimension index data according to data characteristics;
and determining a corresponding anomaly detection algorithm according to the index data type, and verifying the instance-level multi-operation-dimension index data according to the anomaly detection algorithm.
In one possible design, the processing module 702 is specifically configured to:
synchronously traversing the example-level multi-operation-dimension index data by using a plurality of sliding windows, and determining feature statistics of the plurality of sliding windows, wherein the feature statistics are used for representing the central tendency feature of the example-level multi-operation-dimension index data;
respectively subtracting the example-level multi-operation-dimension index data from the feature statistics of the plurality of sliding windows, and generating a feature sequence;
and utilizing a box type graph outlier detection algorithm to test the characteristic sequence, and when the deviation of the statistical data of the example-level multi-operation-dimension index data from the plurality of sliding windows is larger than a preset threshold value, determining that the example-level multi-operation-dimension index data is abnormal.
In one possible design, the exception to the instance-level multi-operation dimensional metric data comprises: burr anomalies, integral lift anomalies, and integral drop anomalies;
if the mean value of the front window and the back window of an abnormal point accords with a preset normal condition and the mean value difference value of the front window and the back window is within a preset range, the abnormal point index data amplification is greater than a preset amplification and greater than the mean value of the front window and the back window, determining that the example-level multi-operation-dimension index data corresponding to the abnormal point belongs to the burr abnormality, and the plurality of sliding windows comprise the front window and the back window;
if the mean value of the abnormal point front window mean value accords with a preset normal condition and is smaller than the rear window mean value, and the abnormal point index data amplification is larger than a preset amplification and larger than the front window mean value, determining that the example-level multi-operation-dimension index data corresponding to the abnormal point belongs to the integral lifting abnormality;
and if the mean value of the abnormal point front window mean value accords with a preset normal condition and is larger than the rear window mean value, and the abnormal point index data amplification is smaller than a preset amplification and is larger than the front window mean value, determining that the example-level multi-operation-dimension index data corresponding to the abnormal point belongs to the integral descending abnormality.
In one possible design, the processing module is specifically configured to:
and subtracting the statistical quantity of the front window and the rear window from the judgment time data, reconstructing to obtain the characteristic sequence to determine the normal range of the characteristic sequence, and determining that the characteristic sequence is abnormal if the difference between the statistical quantity of the front window and the statistical quantity of the rear window and the current data is not in the normal range.
In a possible design, if it is determined that the instance-level multi-operation-dimension index data corresponding to the abnormal point belongs to the burr abnormality, setting the length of a left window to be greater than the length of a right window so as to increase the number of data collected before the abnormal point occurs, wherein the plurality of sliding windows comprise the left window and the right window;
if the instance-level multi-operation-dimension index data corresponding to the abnormal point belongs to the integral lifting abnormality or the integral descending abnormality, the lengths of the left window and the right window are set to meet the length requirement, so that stable data meeting the preset duration can be captured.
In one possible design, the processing module 702 is specifically configured to:
selecting initialized K index data as an initial clustering center, wherein K is a positive integer;
distributing each data in the instance-level multi-operation-dimension index data to a class corresponding to an initial clustering center with the minimum distance according to the distance between the instance-level multi-operation-dimension index data and the initial clustering center, wherein the accumulated distance is calculated to be used as the measurement of the distance;
re-determining the clustering center of the class corresponding to the initial clustering center to serve as a new clustering center for clustering in the subsequent steps;
repeatedly distributing each data in the example-level multi-operation-dimension index data to the class corresponding to the cluster center with the minimum distance according to the distance between the data and the current cluster center, and updating the cluster center of the corresponding class until the iteration times are finished or the iteration result meets the preset condition so as to determine a cluster center sequence;
calculating the cumulative distance between the current clustering center of the clustering center sequence and the farthest data in the corresponding class to be used as an abnormal index threshold of the instance-level multi-operation-dimension index data;
and if the distance between the example-level multi-operation-dimension index data and the corresponding clustering center is greater than the abnormal index threshold, determining that the example-level multi-operation-dimension index data is abnormal.
In one possible design, the processing module 702 is specifically configured to:
a prediction model for periodic data memorability prediction is generated based on an LSTM algorithm and historical multi-operation-dimension index data learning;
determining a predicted value at each moment according to the prediction model, and determining a contemporaneous growth range according to the predicted value and the contemporaneous average growth rate;
and if the example-level multi-operation-dimension index data exceeds the contemporaneous increase range, determining that the example-level multi-operation-dimension index data is abnormal.
The gray scale distribution verification system provided by this embodiment may be used to perform the steps in the above method embodiments. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method described above in the present application.
Fig. 14 is a schematic structural diagram of an electronic device shown in the present application according to an example embodiment. As shown in fig. 14, the present embodiment provides an electronic device 800, including:
a processor 801; and the number of the first and second groups,
a memory 802 for storing executable instructions of the processor, which may be, for example, a flash memory;
wherein the processor 801 is configured to perform the steps of the above-described method via execution of the executable instructions.
Alternatively, the memory 802 may be separate or integrated with the processor 801.
When the memory 802 is a device independent of the processor 801, the electronic device 800 may further include:
a bus 803 for connecting the processor 801 and the memory 802.
The present embodiment also provides a readable storage medium, in which a computer program is stored, and when at least one processor of the electronic device executes the computer program, the electronic device executes the steps of the above method.
The present embodiment also provides a program product comprising a computer program stored in a readable storage medium. The computer program may be read from a readable storage medium by at least one processor of the electronic device, and execution of the computer program by the at least one processor causes the electronic device to perform the steps of the above-described method.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (15)

1. A gray release verification method is applied to a gray release verification system, the gray release verification system comprises a plurality of subsystems, version updating of each subsystem comprises a gray release step sequence, the gray release step sequence comprises a plurality of gray release steps which are arranged according to a preset sequence, and the method comprises the following steps:
when the version of the subsystem is updated, executing a first gray scale release step, and determining a first execution result after the first gray scale release step is executed;
performing verification processing on the first execution result, wherein the verification processing includes at least one of application verification, database verification, business function verification and monitoring performance index check, the application verification is used for verifying an application configuration result in the first execution result, the database verification is used for verifying a database operation statement in the first execution result, the business function verification is used for verifying a Structured Query Language (SQL) statement execution result in the first execution result, and the monitoring performance index check is used for checking instance level index data in the first execution result;
if the first execution result passes the verification processing, executing a second gray scale issuing step, wherein the second gray scale issuing step is the next gray scale issuing step of the first gray scale issuing step;
and when all the gray scale issuing steps in the gray scale issuing step sequence are executed and the execution result passes the verification processing, determining that the gray scale issuing verification corresponding to the version updating of the subsystem is successful.
2. The gray release verification method of claim 1, wherein when the verification process includes the database verification, the performing the verification process on the first execution result includes:
reading an SQL statement set to be verified in a database DB material package, and analyzing each SQL statement in the SQL statement set to be verified by using a preset lexical analyzer to generate a corresponding syntax tree;
determining a statement type corresponding to each SQL statement according to a syntax tree corresponding to each SQL statement and a preset object classification condition;
acquiring a first table name according to a statement type corresponding to a first SQL statement, and connecting a corresponding first database according to the first table name, wherein the first SQL statement is any statement in the SQL statement set to be verified;
acquiring a first operation statement corresponding to the first database, and analyzing the first operation statement according to the preset lexical analyzer to generate a first syntax tree;
and determining and checking a syntax tree to be verified and the first syntax tree according to a preset syntax tree statement type matching rule so as to determine the effective state of table and field attributes in a corresponding database, wherein the syntax tree to be verified is a syntax tree generated by analyzing the first SQL statement by using the preset lexical analyzer.
3. The gray scale issuance verification method according to claim 2, wherein after determining the statement type corresponding to each SQL statement according to the syntax tree corresponding to each SQL statement and the preset object classification condition, the method further comprises:
determining the statement with the statement type of Data Definition Language (DDL) as the first SQL statement;
and determining a first operation type corresponding to the first SQL statement according to a preset classification rule, wherein the first operation type is used for determining the first operation statement corresponding to the first SQL statement in the first database.
4. The method according to claim 3, wherein before the obtaining the first table name according to the statement type corresponding to the first SQL statement, the method further comprises:
sorting the SQL sentences in the SQL sentence set to be verified according to the reading sequence of each SQL sentence in the SQL sentence set to be verified and the corresponding table name classification to generate a first table structure object;
starting from the last SQL statement of the first table structure object, carrying out the database verification in a reverse order; if the verification result is successful, deleting the currently verified SQL statement from the first table structure object, rolling back to the previous SQL statement to continue the database verification, and outputting an inspection report after the first table structure object completes the database verification; and if the verification result is failure, outputting a check report.
5. The gray release verification method according to any one of claims 2 to 4, wherein when the verification processing includes the monitoring performance index check, the performing verification processing on the first execution result includes:
acquiring instance-level multi-operation-dimension index data in a first execution result, and determining an index data type of the instance-level multi-operation-dimension index data according to data characteristics;
and determining a corresponding anomaly detection algorithm according to the index data type, and verifying the instance-level multi-operation-dimension index data according to the anomaly detection algorithm.
6. The gray release verification method according to claim 5, wherein if the determined indicator data type is stable data, determining a corresponding anomaly detection algorithm according to the indicator data type, and performing verification processing on the instance-level multi-operation-dimensional indicator data according to the anomaly detection algorithm includes:
synchronously traversing the example-level multi-operation-dimension index data by using a plurality of sliding windows, and determining feature statistics of the plurality of sliding windows, wherein the feature statistics are used for representing the central tendency feature of the example-level multi-operation-dimension index data;
respectively subtracting the example-level multi-operation-dimension index data from the feature statistics of the plurality of sliding windows, and generating a feature sequence;
and utilizing a box type graph outlier detection algorithm to test the characteristic sequence, and when the deviation of the statistical data of the example-level multi-operation-dimension index data from the plurality of sliding windows is larger than a preset threshold value, determining that the example-level multi-operation-dimension index data is abnormal.
7. A gray release verification method as claimed in claim 6, wherein the exception of the instance-level multi-operation dimensional index data comprises: burr anomalies, integral lift anomalies, and integral drop anomalies;
if the mean value of a front window and a rear window of an abnormal point accords with a preset normal condition and the mean value difference value of the front window and the rear window is within a preset range, the abnormal point index data amplification is greater than a preset amplification and greater than the mean value of the front window and the rear window, determining that the example-level multi-operation-dimension index data corresponding to the abnormal point belongs to the burr abnormality, and the plurality of sliding windows comprise the front window and the rear window;
if the mean value of the abnormal point front window mean value accords with a preset normal condition and is smaller than the rear window mean value, and the abnormal point index data amplification is larger than a preset amplification and larger than the front window mean value, determining that the example-level multi-operation-dimension index data corresponding to the abnormal point belongs to the integral lifting abnormality;
and if the mean value of the abnormal point front window mean value accords with a preset normal condition and is larger than the rear window mean value, and the abnormal point index data amplification is smaller than a preset amplification and is larger than the front window mean value, determining that the example-level multi-operation-dimension index data corresponding to the abnormal point belongs to the integral descending abnormality.
8. The gray scale publication verification method of claim 7, wherein the verifying the signature sequence using a box plot outlier detection algorithm comprises:
and subtracting the statistical quantity of the front window and the rear window from the judgment time data, reconstructing to obtain the characteristic sequence to determine the normal range of the characteristic sequence, and determining that the characteristic sequence is abnormal if the difference between the statistical quantity of the front window and the statistical quantity of the rear window and the current data is not in the normal range.
9. The gray scale issuance verification method according to claim 8, wherein if it is determined that the instance-level multi-operation-dimension index data corresponding to the abnormal point belongs to the glitch abnormality, the length of a left window is set to be greater than the length of a right window to increase the number of data collected before the occurrence of the abnormal point, and the plurality of sliding windows include the left window and the right window;
if the instance-level multi-operation-dimension index data corresponding to the abnormal point belongs to the integral lifting abnormality or the integral descending abnormality, the lengths of the left window and the right window are set to meet the length requirement, so that stable data meeting the preset duration can be captured.
10. The gray scale release verification method according to claim 5, wherein if the determined indicator data type is trend-type data, determining a corresponding anomaly detection algorithm according to the indicator data type, and performing verification processing on the instance-level multi-operation-dimensional indicator data according to the anomaly detection algorithm includes:
selecting initialized K index data as an initial clustering center, wherein K is a positive integer;
distributing each data in the instance-level multi-operation-dimension index data to a class corresponding to an initial clustering center with the minimum distance according to the distance between the instance-level multi-operation-dimension index data and the initial clustering center, wherein the accumulated distance is calculated to be used as the measurement of the distance;
re-determining the clustering center of the class corresponding to the initial clustering center to serve as a new clustering center for clustering in the subsequent steps;
repeatedly distributing each data in the example-level multi-operation-dimension index data to the class corresponding to the cluster center with the minimum distance according to the distance between the data and the current cluster center, and updating the cluster center of the corresponding class until the iteration times are finished or the iteration result meets the preset condition so as to determine a cluster center sequence;
calculating the cumulative distance between the current clustering center of the clustering center sequence and the farthest data in the corresponding class to be used as an abnormal index threshold of the instance-level multi-operation-dimension index data;
and if the distance between the example-level multi-operation-dimension index data and the corresponding clustering center is greater than the abnormal index threshold, determining that the example-level multi-operation-dimension index data is abnormal.
11. The gray scale release verification method according to claim 5, wherein if the determined index data type is periodic data, determining a corresponding anomaly detection algorithm according to the index data type and performing verification processing on the instance-level multi-operation-dimension index data according to the anomaly detection algorithm includes:
generating a prediction model for the periodic data based on a long short-term memory (LSTM) algorithm and learning from historical multi-operation-dimension index data;
determining a predicted value at each moment according to the prediction model, and determining a contemporaneous growth range according to the predicted value and a contemporaneous average growth rate;
and if the instance-level multi-operation-dimension index data exceeds the contemporaneous growth range, determining that the instance-level multi-operation-dimension index data is abnormal.
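As a hedged illustration of the periodic-data branch in claim 11 (assuming TensorFlow/Keras is available), the sketch below trains a small LSTM on historical index data and derives a contemporaneous growth range; the layer sizes, epochs, tolerance, and function names are arbitrary assumptions, not the claimed implementation:

    import numpy as np
    from tensorflow import keras

    def fit_periodic_model(history, lookback=24):
        """Learn a next-value predictor from historical multi-operation-dimension
        index data using an LSTM."""
        h = np.asarray(history, dtype=float)
        X = np.array([h[i:i + lookback] for i in range(len(h) - lookback)])[..., None]
        y = h[lookback:]
        model = keras.Sequential([
            keras.layers.LSTM(16, input_shape=(lookback, 1)),
            keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")
        model.fit(X, y, epochs=20, verbose=0)
        return model

    def contemporaneous_growth_range(predicted, avg_growth_rate, tolerance=0.5):
        """Widen the predicted value, scaled by the same-period average growth rate,
        into a (low, high) band; index data outside the band is treated as abnormal."""
        center = predicted * (1.0 + avg_growth_rate)
        return center * (1.0 - tolerance), center * (1.0 + tolerance)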
12. A gray scale release verification system, comprising:
a release module configured to execute a first gray scale release step when a version update is performed on a subsystem, and to determine a first execution result after the first gray scale release step is executed;
a processing module configured to perform verification processing on the first execution result, where the verification processing includes at least one of application verification, database verification, business function verification, and monitoring performance index check, the application verification being used to verify an application configuration result in the first execution result, the database verification being used to verify a database operation statement in the first execution result, the business function verification being used to verify a Structured Query Language (SQL) statement execution result in the first execution result, and the monitoring performance index check being used to check instance-level index data in the first execution result;
the processing module being further configured to execute a second gray scale release step when the first execution result passes the verification processing, where the second gray scale release step is the next gray scale release step after the first gray scale release step; and
the processing module being further configured to determine that the gray scale release verification corresponding to the version update of the subsystem is successful when all the gray scale release steps in the gray scale release step sequence have been executed and each execution result has passed the verification processing.
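By way of illustration only, the interplay of the release module and the processing module in claim 12 could be orchestrated roughly as follows; the callables and their signatures are hypothetical, not the claimed interfaces:

    from typing import Callable, Iterable

    def run_gray_scale_release(steps: Iterable[Callable[[], dict]],
                               verify: Callable[[dict], bool]) -> bool:
        """Execute the gray scale release steps in sequence; verify each execution
        result (application, database, business-function, and metric checks are
        folded into `verify` here) and stop at the first failure."""
        for step in steps:
            result = step()          # execute one gray scale release step
            if not verify(result):   # execution result failed verification processing
                return False
        return True                  # all steps executed and every result passed verification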
13. An electronic device, comprising:
a processor; and
a memory for storing a computer program executable by the processor;
wherein the processor is configured to implement the gray scale release verification method of any one of claims 1 to 11 by executing the computer program.
14. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the gray scale release verification method of any one of claims 1 to 11.
15. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the gray scale release verification method of any one of claims 1 to 11.
CN202111661198.4A 2021-12-30 2021-12-30 Gradation issuance verification method, gradation issuance verification system, gradation issuance verification medium, gradation issuance verification device, and gradation issuance verification program Pending CN114327561A (en)

Publications (1)

Publication Number Publication Date
CN114327561A (en) 2022-04-12

Family

ID=81019089

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination