CN111666198A - Log anomaly monitoring method and device, and electronic equipment - Google Patents
Log anomaly monitoring method and device, and electronic equipment
- Publication number
- CN111666198A CN111666198A CN202010520925.4A CN202010520925A CN111666198A CN 111666198 A CN111666198 A CN 111666198A CN 202010520925 A CN202010520925 A CN 202010520925A CN 111666198 A CN111666198 A CN 111666198A
- Authority
- CN
- China
- Prior art keywords
- anomaly
- target
- log
- alarm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3466—Performance evaluation by tracing or monitoring
- G06F11/3476—Data logging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/32—Monitoring with visual or acoustical indication of the functioning of the machine
- G06F11/324—Display of status information
- G06F11/327—Alarm or error message display
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2433—Single-class perspective, e.g. one-against-all classification; Novelty detection; Outlier detection
Abstract
The application provides a log anomaly monitoring method and device, and an electronic device. The method comprises the following steps: acquiring monitoring log data of a specified time period; inputting the monitoring log data of the specified time period into a log anomaly detection model for detection, so as to determine whether an anomaly exists in the specified time period; if a target anomaly exists in the specified time period, determining, according to the target anomaly, the associated anomalies of the target anomaly from a pre-stored anomaly-alarm frequent pattern set, where each anomaly and the anomaly set associated with it are stored in the anomaly-alarm frequent pattern set; and outputting an alarm according to the target anomaly and the associated anomalies.
Description
Technical Field
The application relates to the field of computer technology, and in particular to a log anomaly monitoring method and device, and an electronic device.
Background
In the automated operation and maintenance of a computer, a monitoring log or an alarm log generally serves as the basis. The alarm log records the computer's anomaly alarm information. To keep the computer from actually entering an abnormal state, an alarm is generally output in advance, before the anomaly can occur. Existing alarm prediction takes two forms. One is manual rules: a threshold is set according to human experience, and an alarm is raised in advance when a monitored metric reaches the threshold. The other is to predict anomalies based on a time-series anomaly detection algorithm.
Disclosure of Invention
In view of this, an object of the present application is to provide a log anomaly monitoring method and apparatus, and an electronic device, so as to address the problem that existing approaches give insufficient warning.
In a first aspect, an embodiment of the present application provides a log anomaly monitoring method, including:
acquiring monitoring log data of a specified time period;
inputting the monitoring log data of the specified time period into a log anomaly detection model for detection, so as to determine whether an anomaly exists in the specified time period;
if a target anomaly exists in the specified time period, determining, according to the target anomaly, the associated anomalies of the target anomaly from a pre-stored anomaly-alarm frequent pattern set, where each anomaly and the anomaly set associated with it are stored in the anomaly-alarm frequent pattern set;
and outputting an alarm according to the target anomaly and the associated anomalies.
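The four steps above can be sketched end to end as follows; this is only an illustrative outline, and every function and variable name in it is hypothetical rather than taken from the patent:

```python
# Hypothetical sketch of the claimed method; names are illustrative only.
def monitor(log_window, detect, frequent_patterns):
    target = detect(log_window)                     # detect an anomaly in the window
    if target is None:                              # no anomaly: nothing to report
        return []
    associated = frequent_patterns.get(target, [])  # look up associated anomalies
    return [f"alarm: {a}" for a in [target, *associated]]

# Usage with a stub detector that always reports anomaly "Ua"
alarms = monitor(["log line"], lambda window: "Ua", {"Ua": ["Ub", "Uc"]})
# alarms == ["alarm: Ua", "alarm: Ub", "alarm: Uc"]
```

In the claimed method, the real detector would be the trained log anomaly detection model, and `frequent_patterns` the pre-stored anomaly-alarm frequent pattern set.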
In an optional implementation, the step of outputting an alarm according to the target anomaly and the associated anomalies includes:
outputting an event alarm corresponding to the target anomaly;
and outputting a probability alarm indicating that the associated anomalies may occur.
According to the log anomaly monitoring method provided by this embodiment of the application, not only is an alarm for the predicted event output, but a probability alarm for associated anomalies that may follow is also output, so that potential anomalies can be predicted more effectively.
In an optional implementation, the step of outputting an alarm according to the target anomaly and the associated anomalies includes:
outputting, for the future time node corresponding to the specified time period, an event alarm indicating that the target anomaly and the associated anomalies may occur.
According to the log anomaly monitoring method provided by this embodiment of the application, alarms are output for both the target anomaly and the associated anomalies, so the user learns of more anomalies that may occur and the anomaly prediction is more comprehensive.
In an optional implementation, the step of determining, according to the target anomaly, the associated anomalies of the target anomaly from a pre-stored anomaly-alarm frequent pattern set includes:
searching the pre-stored anomaly-alarm frequent pattern set for the target anomaly set in which the target anomaly is located;
and determining the associated anomalies of the target anomaly according to the target anomaly set.
According to the log anomaly monitoring method provided by this embodiment of the application, the set containing the target anomaly is screened out of the anomaly-alarm frequent pattern set and the associated anomalies are then determined, so the anomalies associated with the target anomaly can be identified effectively and the accuracy of the output anomaly alarms improved.
In an optional embodiment, the log anomaly detection model is determined by:
inputting historical monitoring data into an anomaly detection algorithm for training to obtain the log anomaly detection model, where the anomaly detection algorithm is any one of the Python Outlier Detection (PyOD) toolkit, a local outlier factor detection algorithm, a histogram-based outlier score algorithm, and an isolation forest algorithm.
According to the log anomaly monitoring method provided by this embodiment of the application, one of the above algorithms is trained to obtain the log anomaly detection model, so the association between anomalies and the individual monitoring data can be analyzed effectively and a detection result output effectively.
In an optional embodiment, the method further comprises:
acquiring anomaly association information;
and updating the anomaly-alarm frequent pattern set according to the anomaly association information.
In an optional embodiment, the method further comprises:
acquiring an updated online log according to a set time rule;
and updating the anomaly-alarm frequent pattern set according to the updated online log.
The log anomaly monitoring method provided by this embodiment of the application can also update the anomaly-alarm frequent pattern set, so the set stays current, the associated anomalies determined from it are more accurate, and the output alarms are more accurate.
In a second aspect, an embodiment of the present application further provides a log anomaly monitoring apparatus, including:
an acquisition module, configured to acquire monitoring log data of a specified time period;
a detection module, configured to input the monitoring log data of the specified time period into a log anomaly detection model for detection, so as to determine whether an anomaly exists in the specified time period;
a determining module, configured to determine, according to a target anomaly, the associated anomalies of the target anomaly from a pre-stored anomaly-alarm frequent pattern set if the target anomaly exists within the specified time period, where each anomaly and the anomaly set associated with it are stored in the anomaly-alarm frequent pattern set;
and an output module, configured to output an alarm according to the target anomaly and the associated anomalies.
In a third aspect, an embodiment of the present application further provides an electronic device, including a processor and a memory, the memory storing machine-readable instructions executable by the processor; when the electronic device runs, the machine-readable instructions, upon execution by the processor, perform the steps of the log anomaly monitoring method of the first aspect or of any of its possible implementations.
In a fourth aspect, this application further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, it performs the steps of the log anomaly monitoring method of the first aspect or of any of its possible implementations.
According to the log anomaly monitoring method and apparatus, the electronic device, and the computer-readable storage medium provided above, the model first makes a preliminary prediction, and the anomaly-alarm frequent pattern set is then consulted to determine the associated anomalies. Compared with the prior art, which judges directly by a set threshold whether the computer is abnormal, this scheme not only analyzes whether the state corresponding to the monitoring data is abnormal, but also exploits the associations between anomalies, so the output alarms can be more accurate.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of a log anomaly monitoring method according to an embodiment of the present application.
Fig. 3 is a flowchart of a log anomaly monitoring method according to an embodiment of the present application.
Fig. 4 is a flowchart of a log anomaly monitoring method according to an embodiment of the present application.
Fig. 5 is a schematic functional block diagram of a log anomaly monitoring apparatus according to an embodiment of the present application.
Detailed Description
The technical solution in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
In studying the prior art, the inventors found that an anomaly detection model determined by a regression algorithm can provide early warning, but a regression algorithm can only give warnings over continuous quantities, such as Central Processing Unit (CPU) usage and memory usage, and can only raise a single alarm; it cannot provide early warning for discrete problems such as login failures, system crashes, server simulation failures, and system Agent states.
Based on the above problems, the inventors studied the matter further. In general, the anomalies behind alarms may be correlated: some anomalies occur first, and these leading anomalies can trigger a series of other anomalies after them.
Based on this research, embodiments of the present application provide a log anomaly monitoring method and apparatus, an electronic device, and a computer-readable storage medium that address the above technical problems: while giving an early warning, they can also warn of the associated alarms that may follow. This is described below through several examples.
Example one
To facilitate understanding of this embodiment, the electronic device that executes the log anomaly monitoring method disclosed in the embodiments of the present application is first described in detail.
Fig. 1 is a schematic block diagram of the electronic device. The electronic device 100 may include a memory 111, a memory controller 112, a processor 113, a peripheral interface 114, an input-output unit 115, and a display unit 116. Those of ordinary skill in the art will understand that the structure shown in fig. 1 is merely exemplary and does not limit the structure of the electronic device 100. For example, the electronic device 100 may include more or fewer components than shown in fig. 1, or have a different configuration.
The above-mentioned elements of the memory 111, the memory controller 112, the processor 113, the peripheral interface 114, the input/output unit 115 and the display unit 116 are electrically connected to each other directly or indirectly, so as to implement data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The processor 113 is used to execute the executable modules stored in the memory.
The memory 111 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 111 is configured to store a program, and the processor 113 executes the program after receiving an execution instruction; the method executed by the electronic device 100, as defined by the processes disclosed in any embodiment of the present application, may be applied to, or implemented by, the processor 113.
The processor 113 may be an integrated circuit chip having signal processing capability. The Processor 113 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the Integrated Circuit may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The peripheral interface 114 couples various input/output devices to the processor 113 and the memory 111. In some embodiments, the peripheral interface 114, the processor 113, and the memory controller 112 may be implemented in a single chip. In other examples, they may each be implemented as a separate chip.
The input/output unit 115 is used for the user to provide input data. The input/output unit 115 may be, but is not limited to, a mouse, a keyboard, and the like.
The display unit 116 provides an interactive interface (e.g., a user operation interface) between the electronic device 100 and the user, or is used to display image data for the user's reference. In this embodiment, the display unit may be a liquid crystal display or a touch display. A touch display may be a capacitive or resistive touch screen supporting single-point and multi-point touch operations, meaning the display can sense touch operations arising simultaneously from one or more positions on it and hand the sensed operations to the processor for calculation and processing.
The electronic device 100 in this embodiment may be configured to perform each step in each method provided in this embodiment. The following describes in detail the implementation process of the log anomaly monitoring method by several embodiments.
Example two
Please refer to fig. 2, which is a flowchart illustrating a log anomaly monitoring method according to an embodiment of the present application. The specific process shown in fig. 2 will be described in detail below.
Optionally, the specified time period runs from a specified node before the time point to be predicted up to that time point. For example, if the predicted time point is time point A and the set time length is B, the specified time period is the period from time point A - B to time point A.
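The window arithmetic described above can be illustrated with Python's standard datetime types (the helper name is hypothetical):

```python
from datetime import datetime, timedelta

def monitoring_window(predict_at, lookback):
    """Return the specified time period [A - B, A] for prediction
    time point A and set time length B."""
    return predict_at - lookback, predict_at

# Predict for 12:00 with a one-hour set length B: window is 11:00 to 12:00
start, end = monitoring_window(datetime(2020, 6, 9, 12, 0), timedelta(hours=1))
```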
Illustratively, the monitoring log data may include a plurality of monitoring logs, which record the computer's operation data and the like. For example, the operation data may include the computer's CPU usage, memory occupancy, network speed, disk read-write speed, and so on. The operation data may further include discrete items such as login failures, system crashes, server simulation failures, and system Agent states.
Optionally, the log anomaly detection model is determined by: and inputting historical monitoring data into an anomaly detection algorithm for training to obtain the log anomaly detection model.
Optionally, the anomaly detection algorithm is any one of the Python Outlier Detection toolkit (PyOD), a Local Outlier Factor detection algorithm (LOF), a Histogram-Based Outlier Score algorithm (HBOS), an Isolation Forest algorithm (iForest), and a Clustering-Based Local Outlier Factor detection algorithm (CBLOF).
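As a self-contained sketch of one of these algorithms, a one-dimensional histogram-based outlier score (HBOS) can be computed as below. A real deployment would use a library implementation such as those in PyOD; this simplified version only illustrates the idea that values falling in sparsely populated histogram bins score as more anomalous.

```python
import math

def hbos_scores(values, n_bins=10):
    """Minimal one-dimensional histogram-based outlier score.

    Builds an equal-width histogram and scores each value by the
    negative log density of its bin; rarer values score higher."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0          # guard against zero width
    counts = [0] * n_bins
    for v in values:
        counts[min(int((v - lo) / width), n_bins - 1)] += 1
    total = len(values)

    def score(v):
        idx = min(max(int((v - lo) / width), 0), n_bins - 1)
        density = counts[idx] / total
        return -math.log(density) if density else float("inf")

    return [score(v) for v in values]

# CPU-usage samples with one spike: the spike gets the highest score
scores = hbos_scores([20, 22, 21, 23, 19, 20, 22, 21, 99])
```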
Illustratively, an anomaly may be the CPU usage or memory usage exceeding a threshold; for example, a CPU usage of 99%, meaning CPU utilization is too high.
In this embodiment, each anomaly and the anomaly set associated with it are stored in the anomaly-alarm frequent pattern set.
Step 203 may comprise: searching the pre-stored anomaly-alarm frequent pattern set for the target anomaly set in which the target anomaly is located; and determining the associated anomalies of the target anomaly according to the target anomaly set.
In one embodiment, the log anomaly detection model first detects that a target anomaly U1 exists in the computer. Next, anomaly U1 is looked up in the anomaly-alarm frequent pattern set, and the set containing it is found, namely the anomaly set {anomaly M1, anomaly M2, anomaly M3}. Optionally, all anomalies in this set (M1, M2, and M3) may be taken as associated anomalies.
In another embodiment, the log anomaly detection model first detects that a target anomaly U2 may exist in the computer. Next, anomaly U2 is looked up in the anomaly-alarm frequent pattern set, and the set containing it is found, namely the anomaly set {anomaly M4, anomaly M5, anomaly M6}. Optionally, only those anomalies in the set whose ranked distance to anomaly U2 is less than a specified value may be taken as associated anomalies. Illustratively, each anomaly set may be ordered by relevance from high to low, two adjacent anomalies having the greatest probability of occurring together.
In another embodiment, the anomaly-alarm frequent pattern set may include a plurality of anomaly sets, each containing a cause anomaly and its associated anomalies. The information each anomaly set expresses is: if the cause anomaly occurs, the associated anomalies may follow, e.g., {anomaly Ua: (anomaly Ub, anomaly Uc)}. In this embodiment, step 203 may comprise: searching the pre-stored anomaly-alarm frequent pattern set for the target anomaly set in which the target anomaly is the cause anomaly, and taking all associated anomalies in that target anomaly set as the associated anomalies of the target anomaly.
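Using the hypothetical anomaly names from these examples, the lookup of step 203 amounts to a mapping from a cause anomaly to its stored anomaly set:

```python
# Hypothetical anomaly-alarm frequent pattern set: cause -> associated anomalies
frequent_patterns = {
    "U1": ["M1", "M2", "M3"],
    "Ua": ["Ub", "Uc"],
}

def associated_anomalies(target, patterns):
    """Return the anomaly set whose cause is the target; empty if absent."""
    return patterns.get(target, [])

assoc = associated_anomalies("U1", frequent_patterns)  # ["M1", "M2", "M3"]
```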
Step 204: outputting an alarm according to the target anomaly and the associated anomalies.
In one embodiment, step 204 may comprise: outputting an event alarm corresponding to the target anomaly, and outputting a probability alarm indicating that the associated anomalies may occur.
Illustratively, each anomaly set in the anomaly-alarm frequent pattern set may include a cause anomaly, the associated anomalies, and the probability of each associated anomaly occurring. For example, {anomaly Ua: (anomaly Ub, 0.7; anomaly Uc, 0.6)} indicates that if anomaly Ua occurs, the probability of anomaly Ub occurring is 0.7 and the probability of anomaly Uc occurring is 0.6.
Illustratively, if the target anomaly detected in step 202 is Ua, and anomaly Ua is the CPU usage exceeding a threshold, step 204 may be implemented as: outputting an alarm message that the CPU usage exceeds the threshold, and outputting a prompt message that anomaly Ub will occur with probability 0.7 and anomaly Uc with probability 0.6.
In another embodiment, step 204 may comprise: outputting, for the future time node corresponding to the specified time period, an event alarm indicating that the target anomaly and the associated anomalies may occur.
Continuing the example above, step 204 may also be implemented as: outputting an alarm message that the CPU usage exceeds the threshold, together with alarm messages that anomaly Ub and anomaly Uc may occur.
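The two output variants of step 204 might look as follows; the message strings and the (anomaly, probability) pair format are illustrative assumptions, not specified by the patent:

```python
def probability_alarms(target, associated):
    """Variant 1: an event alarm for the target plus one probability
    alarm per associated anomaly, given as (anomaly, probability) pairs."""
    msgs = [f"event alarm: {target}"]
    msgs += [f"probability alarm: {name} may occur (p={p})" for name, p in associated]
    return msgs

def combined_event_alarm(target, associated):
    """Variant 2: one event alarm naming the target and associated anomalies."""
    names = [target] + [name for name, _ in associated]
    return "event alarm: possible anomalies " + ", ".join(names)

msgs = probability_alarms("CPU usage over threshold", [("Ub", 0.7), ("Uc", 0.6)])
```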
In this embodiment, as shown in fig. 3, the log anomaly monitoring method may further include the following steps.
Step 205: acquiring anomaly association information. For example, the anomaly association information may be association information uploaded from a designated user terminal, such as: a login failure is associated with a system crash; a login failure is associated with a network anomaly; and so on.
Step 206: updating the anomaly-alarm frequent pattern set according to the anomaly association information.
Illustratively, the anomaly association information may be formed into a new anomaly set and added to the anomaly-alarm frequent pattern set.
Alternatively, an anomaly in the anomaly association information may be added to an existing anomaly set in the anomaly-alarm frequent pattern set that already contains one of its anomalies.
In this embodiment, as shown in fig. 4, the log anomaly monitoring method may further include:
and step 207, acquiring the updated online log according to a set time rule.
Illustratively, the online log may include anomalies that occurred at various points in time.
For example, the above-mentioned set time law may be that an online log is obtained every preset period. Illustratively, the preset period may be a one-day period, a one-week period, a two-day period, and the like.
Step 208: updating the anomaly-alarm frequent pattern set according to the updated online log.
Optionally, the online log may be input into the association rule mining model as training data for update training, yielding anomalies that are correlated with one another; the anomaly-alarm frequent pattern set is then updated with these correlated anomalies.
Illustratively, the association rule mining model may use the FP-Growth algorithm.
Illustratively, the association rule mining model performs association analysis on the collected online logs.
Optionally, the anomaly sets mined by the association rule mining model may include correlated anomalies together with the confidence of each correlation. For example, an anomaly set may be {anomaly Uq: (anomaly Uw, anomaly Ue), 0.6}, indicating that if anomaly Uq occurs, anomalies Uw and Ue may subsequently occur with probability 0.6.
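The patent uses FP-Growth for this mining step. As a much-simplified, self-contained stand-in that mines only frequent anomaly pairs (real FP-Growth handles arbitrary itemsets), the support and confidence computation can be sketched as:

```python
from collections import Counter
from itertools import combinations

def mine_rules(transactions, min_support=2, min_conf=0.5):
    """Find anomaly pairs that co-occur at least min_support times and
    emit (cause, effect, confidence) rules meeting min_conf."""
    item_counts = Counter(i for t in transactions for i in set(t))
    pair_counts = Counter(
        pair for t in transactions for pair in combinations(sorted(set(t)), 2)
    )
    rules = []
    for (a, b), n in pair_counts.items():
        if n < min_support:
            continue
        for cause, effect in ((a, b), (b, a)):
            conf = n / item_counts[cause]          # P(effect | cause)
            if conf >= min_conf:
                rules.append((cause, effect, conf))
    return rules

# Online-log windows, each listing the anomalies observed together
logs = [
    ["login_failure", "system_down"],
    ["login_failure", "system_down", "net_error"],
    ["login_failure", "net_error"],
]
rules = mine_rules(logs)  # e.g. system_down -> login_failure with confidence 1.0
```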
In this embodiment, the self-learning feedback process of steps 205-208 automatically updates the anomaly-alarm frequent pattern set, making anomaly detection more accurate.
According to the log anomaly monitoring method of this embodiment, by combining detection with the log anomaly detection model and the anomaly-alarm frequent pattern set, an early-warning alarm is output in advance together with the associated alarms related to it, so maintenance personnel learn of the computer's condition earlier and the computer's safety is improved.
Further, the anomaly-alarm frequent pattern set can be updated through self-learning, making the detected anomalies and the associated anomalies more accurate.
Example three
Based on the same inventive concept, an embodiment of the present application further provides a log anomaly monitoring apparatus corresponding to the log anomaly monitoring method. Since the apparatus solves the problem on a principle similar to that of the method embodiment above, its implementation can refer to the description of the method embodiment, and repeated details are omitted.
Please refer to fig. 5, which is a schematic diagram of the functional modules of a log anomaly monitoring apparatus according to an embodiment of the present application. Each module of the apparatus is configured to execute a step of the foregoing method embodiment. The log anomaly monitoring apparatus includes an acquisition module 301, a detection module 302, a determining module 303, and an output module 304, wherein:
the acquisition module 301 is configured to acquire monitoring log data of a specified time period;
the detection module 302 is configured to input the monitoring log data of the specified time period into a log anomaly detection model for detection, so as to determine whether an anomaly exists in the specified time period;
the determining module 303 is configured to determine, according to the target anomaly, the associated anomalies of the target anomaly from a pre-stored anomaly-alarm frequent pattern set if the target anomaly exists within the specified time period, where each anomaly and the anomaly set associated with it are stored in the anomaly-alarm frequent pattern set;
and the output module 304 is configured to output an alarm according to the target anomaly and the associated anomalies.
In one possible implementation, the output module 304 is configured to:
outputting an event alarm corresponding to the target abnormity;
and outputting a probability alarm which can generate the association abnormity.
In one possible implementation, the output module 304 is configured to:
and outputting an event alarm which can possibly generate the target abnormity and the associated abnormity at a future time node corresponding to the specified time period.
In a possible implementation, the determining module 303 is configured to:
searching the pre-stored anomaly alarm frequent pattern set for a target anomaly set in which the target anomaly is located; and
determining, according to the target anomaly set, the associated anomaly associated with the target anomaly.
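The two-step lookup above can be sketched as follows, assuming the frequent pattern set is stored as a collection of co-occurring anomaly sets (the representation and the anomaly names are illustrative, not specified by the application):

```python
from typing import FrozenSet, Optional, Set


def associated_anomalies(target: str, frequent_pattern_set: Set[FrozenSet[str]]) -> Set[str]:
    # Step 1: search the pre-stored set for the target anomaly set
    # containing the target anomaly (preferring the largest match).
    best: Optional[FrozenSet[str]] = None
    for anomaly_set in frequent_pattern_set:
        if target in anomaly_set and (best is None or len(anomaly_set) > len(best)):
            best = anomaly_set
    # Step 2: the remaining members of that set are the associated anomalies.
    return set(best - {target}) if best is not None else set()


patterns = {frozenset({"cpu_high", "gc_pause"}), frozenset({"disk_full"})}
print(associated_anomalies("cpu_high", patterns))  # → {'gc_pause'}
```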
In a possible implementation, the log anomaly monitoring apparatus in this embodiment may further include:
a training module, configured to input historical monitoring data into an anomaly detection algorithm for training, so as to obtain the log anomaly detection model, wherein the anomaly detection algorithm is any one of an extensible Python outlier-detection toolkit algorithm, a local outlier factor detection algorithm, a histogram-based outlier score algorithm, and an isolation forest algorithm.
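The application names several candidate detectors (an extensible Python outlier-detection toolkit, local outlier factor, histogram-based outlier score, isolation forest) without fixing one. As a deliberately simplified stand-in, the sketch below trains a mean/standard-deviation threshold detector on per-window error counts; the feature choice and the 3-sigma rule are assumptions for illustration, not the claimed algorithms.

```python
import statistics
from typing import Callable, List


def train_detector(historical_counts: List[float], k: float = 3.0) -> Callable[[float], bool]:
    """Fit on historical monitoring data; return an is-anomalous predicate."""
    mu = statistics.fmean(historical_counts)
    sigma = statistics.pstdev(historical_counts) or 1.0  # guard against zero spread
    return lambda x: abs(x - mu) > k * sigma  # flag windows far from the norm


# Historical error counts per time window cluster around 5.
is_anomalous = train_detector([4, 5, 5, 6, 5, 4, 6, 5, 5, 5])
print(is_anomalous(40))  # → True  (an extreme window)
print(is_anomalous(5))   # → False (a typical window)
```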
In a possible implementation, the log anomaly monitoring apparatus in this embodiment may further include:
a first acquisition module, configured to acquire anomaly association information; and
a first updating module, configured to update the anomaly alarm frequent pattern set according to the anomaly association information.
In a possible implementation, the log anomaly monitoring apparatus in this embodiment may further include:
a second acquisition module, configured to acquire an updated online log according to a set time rule; and
a second updating module, configured to update the anomaly alarm frequent pattern set according to the updated online log.
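One plausible way to refresh the anomaly alarm frequent pattern set from newly pulled online logs is to treat each log window as a transaction of alarm names and keep the alarm pairs whose co-occurrence count reaches a support threshold. This is a single-pass pairwise approximation of frequent-pattern mining; the window contents, alarm names, and threshold are assumptions, and a production system might use full Apriori or FP-Growth instead.

```python
from collections import Counter
from itertools import combinations
from typing import FrozenSet, List, Set


def update_pattern_set(windows: List[Set[str]], min_support: int = 2) -> Set[FrozenSet[str]]:
    pair_counts: Counter = Counter()
    for alarms in windows:                            # one transaction per window
        for pair in combinations(sorted(alarms), 2):  # every alarm pair in the window
            pair_counts[frozenset(pair)] += 1
    # Keep only pairs seen in at least `min_support` windows.
    return {pair for pair, n in pair_counts.items() if n >= min_support}


windows = [{"cpu_high", "gc_pause"}, {"cpu_high", "gc_pause"}, {"disk_full"}]
print(update_pattern_set(windows))
```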
In addition, an embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of the log anomaly monitoring method in the foregoing method embodiment.
The computer program product of the log anomaly monitoring method provided in the embodiment of the present application includes a computer-readable storage medium storing program code, and the instructions included in the program code may be used to execute the steps of the log anomaly monitoring method in the foregoing method embodiment; for details, refer to the foregoing method embodiment, which are not repeated here.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the part thereof contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk. It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A log anomaly monitoring method, characterized by comprising the following steps:
acquiring monitoring log data of a specified time period;
inputting the monitoring log data of the specified time period into a log anomaly detection model for detection, so as to determine whether an anomaly exists within the specified time period;
if a target anomaly exists within the specified time period, determining, according to the target anomaly, an associated anomaly associated with the target anomaly from a pre-stored anomaly alarm frequent pattern set, wherein each anomaly and the anomaly set associated with each anomaly are stored in the anomaly alarm frequent pattern set; and
outputting an alarm according to the target anomaly and the associated anomaly.
2. The method of claim 1, wherein the step of outputting an alarm according to the target anomaly and the associated anomaly comprises:
outputting an event alarm corresponding to the target anomaly; and
outputting a probability alarm indicating that the associated anomaly is likely to occur.
3. The method of claim 1, wherein the step of outputting an alarm according to the target anomaly and the associated anomaly comprises:
outputting an event alarm indicating that the target anomaly and the associated anomaly may occur at a future time node corresponding to the specified time period.
4. The method of claim 1, wherein the step of determining, according to the target anomaly, an associated anomaly associated with the target anomaly from a pre-stored anomaly alarm frequent pattern set comprises:
searching the pre-stored anomaly alarm frequent pattern set for a target anomaly set in which the target anomaly is located; and
determining, according to the target anomaly set, the associated anomaly associated with the target anomaly.
5. The method of any one of claims 1-4, wherein the log anomaly detection model is determined by:
inputting historical monitoring data into an anomaly detection algorithm for training, so as to obtain the log anomaly detection model, wherein the anomaly detection algorithm is any one of an extensible Python outlier-detection toolkit algorithm, a local outlier factor detection algorithm, a histogram-based outlier score algorithm, and an isolation forest algorithm.
6. The method of any one of claims 1-4, further comprising:
acquiring anomaly association information; and
updating the anomaly alarm frequent pattern set according to the anomaly association information.
7. The method of any one of claims 1-4, further comprising:
acquiring an updated online log according to a set time rule; and
updating the anomaly alarm frequent pattern set according to the updated online log.
8. A log anomaly monitoring apparatus, characterized by comprising:
an acquisition module, configured to acquire monitoring log data of a specified time period;
a detection module, configured to input the monitoring log data of the specified time period into a log anomaly detection model for detection, so as to determine whether an anomaly exists within the specified time period;
a determining module, configured to, if a target anomaly exists within the specified time period, determine, according to the target anomaly, an associated anomaly associated with the target anomaly from a pre-stored anomaly alarm frequent pattern set, wherein each anomaly and the anomaly set associated with each anomaly are stored in the anomaly alarm frequent pattern set; and
an output module, configured to output an alarm according to the target anomaly and the associated anomaly.
9. An electronic device, comprising: a processor and a memory storing machine-readable instructions executable by the processor, wherein when the electronic device runs, the machine-readable instructions are executed by the processor to perform the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010520925.4A CN111666198A (en) | 2020-06-10 | 2020-06-10 | Log abnormity monitoring method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111666198A true CN111666198A (en) | 2020-09-15 |
Family
ID=72386522
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010520925.4A Pending CN111666198A (en) | 2020-06-10 | 2020-06-10 | Log abnormity monitoring method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111666198A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104935601A (en) * | 2015-06-19 | 2015-09-23 | 北京奇虎科技有限公司 | Cloud-based method, device and system for analyzing website log safety |
CN108683562A (en) * | 2018-05-18 | 2018-10-19 | 深圳壹账通智能科技有限公司 | Abnormality detection localization method, device, computer equipment and storage medium |
CN109617745A (en) * | 2019-01-11 | 2019-04-12 | 云智慧(北京)科技有限公司 | Alarm prediction method, device, system and storage medium |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114301768A (en) * | 2020-09-23 | 2022-04-08 | ***通信集团广东有限公司 | Anomaly detection method and device for Network Function Virtualization (NFV) equipment |
CN112712113A (en) * | 2020-12-29 | 2021-04-27 | 广州品唯软件有限公司 | Alarm method and device based on indexes and computer system |
CN112712113B (en) * | 2020-12-29 | 2024-04-09 | 广州品唯软件有限公司 | Alarm method, device and computer system based on index |
CN114816950B (en) * | 2021-01-21 | 2024-03-22 | 腾讯科技(深圳)有限公司 | Data processing method and device and electronic equipment |
CN114816950A (en) * | 2021-01-21 | 2022-07-29 | 腾讯科技(深圳)有限公司 | Data monitoring method and device and electronic equipment |
CN113890821A (en) * | 2021-09-24 | 2022-01-04 | 绿盟科技集团股份有限公司 | Log association method and device and electronic equipment |
CN113890821B (en) * | 2021-09-24 | 2023-11-17 | 绿盟科技集团股份有限公司 | Log association method and device and electronic equipment |
CN115865611A (en) * | 2021-09-24 | 2023-03-28 | ***通信集团湖南有限公司 | Fault processing method and device of network equipment and electronic equipment |
CN114039837B (en) * | 2021-11-05 | 2023-10-31 | 奇安信科技集团股份有限公司 | Alarm data processing method, device, system, equipment and storage medium |
CN114039837A (en) * | 2021-11-05 | 2022-02-11 | 奇安信科技集团股份有限公司 | Alarm data processing method, device, system, equipment and storage medium |
CN114650240A (en) * | 2022-03-10 | 2022-06-21 | 网宿科技股份有限公司 | Method and device for detecting abnormity of service data |
CN115348150B (en) * | 2022-08-11 | 2023-10-24 | 睿云奇智(重庆)科技有限公司 | Abnormality alarming method and device |
CN115348150A (en) * | 2022-08-11 | 2022-11-15 | 睿云奇智(重庆)科技有限公司 | Abnormal alarming method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111666198A (en) | Log abnormity monitoring method and device and electronic equipment | |
US11009862B2 (en) | System and method for monitoring manufacturing | |
US10809704B2 (en) | Process performance issues and alarm notification using data analytics | |
CN106104496B (en) | The abnormality detection not being subjected to supervision for arbitrary sequence | |
US10444121B2 (en) | Fault detection using event-based predictive models | |
JP2018045403A (en) | Abnormality detection system and abnormality detection method | |
JP2017072882A (en) | Anomaly evaluation program, anomaly evaluation method, and information processing device | |
CA2930623A1 (en) | Method and system for aggregating and ranking of security event-based data | |
US10599501B2 (en) | Information processing device, information processing method, and recording medium | |
WO2019239542A1 (en) | Abnormality sensing apparatus, abnormality sensing method, and abnormality sensing program | |
JP2018010608A (en) | Methods and systems for context based operator assistance for control systems | |
JP6531079B2 (en) | System and method for smart alert | |
Hall et al. | The state of machine learning methodology in software fault prediction | |
CN112818066A (en) | Time sequence data anomaly detection method and device, electronic equipment and storage medium | |
JP2020052714A5 (en) | ||
Golmakani | Optimal age-based inspection scheme for condition-based maintenance using A* search algorithm | |
JP6866930B2 (en) | Production equipment monitoring equipment, production equipment monitoring method and production equipment monitoring program | |
CN111651340B (en) | Alarm data rule mining method and device and electronic equipment | |
US11747035B2 (en) | Pipeline for continuous improvement of an HVAC health monitoring system combining rules and anomaly detection | |
JP6582527B2 (en) | Alarm prediction device, alarm prediction method and program | |
US20200241517A1 (en) | Anomaly detection for predictive maintenance and deriving outcomes and workflows based on data quality | |
US10360249B2 (en) | System and method for creation and detection of process fingerprints for monitoring in a process plant | |
JP2007164346A (en) | Decision tree changing method, abnormality determination method, and program | |
US11188064B1 (en) | Process flow abnormality detection system and method | |
Moore et al. | Process visualization in medical device manufacture: an adaptation of short run SPC techniques |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200915 |