CN111738467A - Running state abnormity detection method, device and equipment - Google Patents


Info

Publication number
CN111738467A
CN111738467A
Authority
CN
China
Prior art keywords
sub; abnormal; network; security equipment; performance data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010867425.8A
Other languages
Chinese (zh)
Inventor
王滨
张峰
王星
周少鹏
陈思
陈达
陈逸恺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202010867425.8A priority Critical patent/CN111738467A/en
Publication of CN111738467A publication Critical patent/CN111738467A/en
Pending legal-status Critical Current

Classifications

    • G06Q 10/20 (Physics; Computing; ICT adapted for administrative, commercial, financial, managerial or supervisory purposes; Administration; Management): Administration of product repair or maintenance
    • G06N 3/045 (Physics; Computing; Computing arrangements based on specific computational models; biological models; neural networks; architecture): Combinations of networks
    • G06N 3/08 (Physics; Computing; Computing arrangements based on specific computational models; biological models; neural networks): Learning methods
    • G06Q 50/26 (Physics; Computing; ICT adapted for implementation of business processes of specific business sectors; Services): Government or public services


Abstract

The application provides a method, an apparatus, and a device for detecting an abnormal operating state. The method comprises: acquiring operating performance data of a security device at multiple acquisition moments, and acquiring process information and a process identifier of at least one target process running on the security device; determining data characteristics of the security device from the operating performance data at the multiple acquisition moments; processing the data characteristics with a trained target neural network to obtain the operating state of the security device; if the operating state indicates that the security device is abnormal, determining, for each target process running on the security device, whether the target process is an abnormal process based on its process information; and if so, sending the process identifier of the abnormal process to the security device so that the security device can block the abnormal process according to that identifier. Through this technical solution, abnormal behavior of the security device is discovered in time, the difficulty and workload of manual inspection are reduced, and the security of the security device is improved.

Description

Running state abnormity detection method, device and equipment
Technical Field
The present application relates to the field of information security, and in particular, to a method, an apparatus, and a device for detecting an abnormal operating state.
Background
With the large-scale deployment of security devices, how to efficiently perform security management and protection on them is an urgent problem. An important part of such protection is detecting whether the operating state of a security device is abnormal and managing the device based on that state. One anomaly detection approach extracts characteristic data from the security device and computes a reliability score from it. If the reliability is greater than a threshold, the operating state of the device is judged abnormal, and an abnormality alarm is raised.
In this approach, a threshold must be configured, and the accuracy of the detection result depends on the accuracy of that threshold. If the threshold is relatively large, the detection result may indicate a normal operating state even when the device is actually abnormal; if the threshold is small, the result may indicate an abnormal state even when the device is normal. In summary, the existing approach is prone to erroneous detection results, and the accuracy of the detection result is low.
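As a toy illustration of the threshold-sensitivity problem described above (the function name and the numbers are hypothetical, not from the patent), the same reliability score can yield opposite verdicts under two plausible thresholds:

```python
def detect_abnormal(reliability: float, threshold: float) -> bool:
    # Flags the device as abnormal when the reliability score exceeds
    # the configured threshold, mirroring the rule described above.
    return reliability > threshold

# The same score produces opposite detection results under two thresholds.
score = 0.6
print(detect_abnormal(score, 0.5))  # True  -> reported abnormal
print(detect_abnormal(score, 0.7))  # False -> reported normal
```

This is exactly the failure mode the patent's neural-network approach is meant to avoid: no manually configured threshold is needed.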
Disclosure of Invention
The application provides a running state abnormity detection method, which comprises the following steps:
the method comprises the steps of obtaining operation performance data of security equipment at a plurality of collection moments, and obtaining process information and process identification of at least one target process in operation of the security equipment;
determining data characteristics of the security equipment according to the operation performance data at the plurality of acquisition moments, wherein the data characteristics are used for representing the operation state change of the security equipment;
processing the data characteristics through a trained target neural network to obtain the running state of the security equipment; the running state is that the security equipment is normal or abnormal;
if the running state is that the security equipment is abnormal, determining whether the target process is an abnormal process or not based on the process information of the target process aiming at each target process running by the security equipment;
if so, sending the process identification of the abnormal process to the security equipment, so that the security equipment blocks the abnormal process according to the process identification of the abnormal process.
The application provides a running state abnormity detection method, which comprises the following steps:
acquiring operation performance data of the security equipment at a plurality of acquisition moments;
selecting at least one running target process from all running processes of the security equipment based on the resource occupation condition of each running process of the security equipment;
the running performance data at the multiple collection moments, the process information and the process identification of the at least one target process are sent to a management device, so that the management device determines the running state of the security protection device according to the running performance data at the multiple collection moments, when the running state is abnormal, whether the target process is an abnormal process or not is determined according to the process information of the target process for each target process, and when the target process is an abnormal process, the process identification of the abnormal process is sent;
receiving a process identifier of the abnormal process sent by the management equipment;
and blocking the abnormal process according to the process identification of the abnormal process.
The application provides an abnormal detection device of running state, the device includes:
the security equipment monitoring system comprises an acquisition module, a monitoring module and a monitoring module, wherein the acquisition module is used for acquiring operation performance data of security equipment at a plurality of acquisition moments and acquiring process information and process identification of at least one target process in operation of the security equipment;
the determining module is used for determining the data characteristics of the security equipment according to the operation performance data at the plurality of acquisition moments, wherein the data characteristics are used for representing the operation state change of the security equipment;
the processing module is used for processing the data characteristics through the trained target neural network to obtain the running state of the security equipment; the running state is that the security equipment is normal or abnormal;
the determining module is further configured to determine, for each target process in which the security device is running, whether the target process is an abnormal process based on process information of the target process, if the running state is that the security device is abnormal;
and the sending module is used for sending the process identifier of the abnormal process to the security equipment if the target process is the abnormal process, so that the security equipment can block the abnormal process according to the process identifier of the abnormal process.
The application provides a management device, including: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
the method comprises the steps of obtaining operation performance data of security equipment at a plurality of collection moments, and obtaining process information and process identification of at least one target process in operation of the security equipment;
determining data characteristics of the security equipment according to the operation performance data at the plurality of acquisition moments, wherein the data characteristics are used for representing the operation state change of the security equipment;
processing the data characteristics through a trained target neural network to obtain the running state of the security equipment; the running state is that the security equipment is normal or abnormal;
if the running state is that the security equipment is abnormal, determining whether the target process is an abnormal process or not based on the process information of the target process aiming at each target process running by the security equipment;
if so, sending the process identification of the abnormal process to the security equipment, so that the security equipment blocks the abnormal process according to the process identification of the abnormal process.
According to the technical solution above, the operation performance data reflect the security condition of the security device, and the data characteristics determined from the performance data at multiple acquisition moments represent changes in the device's operating state: if the operating state changes smoothly and resource consumption is low, the device is in a normal state; if the operating state changes sharply or resource consumption stays high, the device is in an abnormal state. Based on this, the target neural network is built by learning how the operating state of a security device changes under malicious attack; the device's operation performance data are then processed by this network to detect its operating state. This helps administrators discover abnormal behavior of security devices in time, reduces the difficulty and workload of manual inspection, and improves device security. The operation performance data are simple to collect, and the detection process is fast. No threshold needs to be configured manually, which avoids erroneous detection results and low detection accuracy. When the security device is abnormal, the abnormal process can be detected and blocked, protecting the security of the device.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings from these drawings.
FIG. 1 is a flow chart of a method for detecting an operating condition anomaly in one embodiment of the present application;
FIGS. 2A-2C are schematic views of a line image in one embodiment of the present application;
FIG. 3 is a schematic diagram of a neural network in one embodiment of the present application;
FIG. 4 is a flow chart of a method for operating condition anomaly detection in one embodiment of the present application;
fig. 5 is a configuration diagram of an operation state abnormality detection device in an embodiment of the present application;
fig. 6 is a block diagram of a management device in an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein is meant to encompass any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present application to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Depending on the context, moreover, the word "if" as used herein may be interpreted as "when", "upon", or "in response to a determination".
Before the technical solutions of the present application are introduced, concepts related to the embodiments of the present application are introduced.
Machine learning: machine learning is a way of implementing artificial intelligence. It studies how a computer simulates or implements human learning behaviors to acquire new knowledge or skills, and reorganizes existing knowledge structures to continuously improve performance. Deep learning, a subclass of machine learning, models a specific real-world problem with a mathematical model in order to solve similar problems in that field. The neural network is one implementation of deep learning; for convenience of description, a neural network is taken as the example here, and other subclasses of machine learning are similar in structure and function.
A neural network: the neural network may include, but is not limited to, a convolutional neural network (CNN), a recurrent neural network (RNN), a fully-connected network, and the like. The structural units of a neural network may include, but are not limited to, a convolutional layer (Conv), a pooling layer (Pool), an excitation layer, a fully-connected layer (FC), and the like.
In practical application, one or more convolution layers, one or more pooling layers, one or more excitation layers, and one or more fully-connected layers may be combined to construct a neural network according to different requirements.
In the convolutional layer, the input data features are enhanced by performing a convolution operation on them with a convolution kernel, which may be an m × n matrix. Convolving the input data features of the convolutional layer with the kernel yields the layer's output data features; the convolution operation is, in effect, a filtering process.
In the pooling layer, operations such as taking the maximum, minimum, or average value are performed on the input data features (such as the output of the convolutional layer), so that the input features are sub-sampled by exploiting local correlation. This reduces the amount of processing while preserving feature invariance; the pooling operation is, in effect, a down-sampling process.
In the excitation layer, the input data features can be mapped using an activation function (e.g., a nonlinear function), thereby introducing a nonlinear factor such that the neural network enhances expressive power through a combination of nonlinearities.
The activation function may include, but is not limited to, a ReLU (Rectified Linear Unit) function that is used to set features less than 0 to 0, while features greater than 0 remain unchanged.
The fully-connected layer performs fully-connected processing on all data features input to it, obtaining a feature vector that may include a plurality of data features.
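The four building blocks above can be sketched in a few lines of NumPy. This is a minimal illustration of the layer operations as described, not the patent's actual network; the input size, kernel size, and weights are arbitrary toy values:

```python
import numpy as np

def conv2d(x, k):
    """'Valid' 2-D convolution of feature map x with kernel k (a filtering pass)."""
    h, w = x.shape
    m, n = k.shape
    out = np.empty((h - m + 1, w - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + m, j:j + n] * k)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling: sub-samples by taking local maxima."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]          # crop to a multiple of the window
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

def relu(x):
    """Excitation: set features below 0 to 0, keep features above 0 unchanged."""
    return np.maximum(x, 0)

# conv -> pool -> excitation -> fully-connected, on arbitrary toy data
x = np.random.randn(8, 8)                        # toy input feature map
k = np.random.randn(3, 3)                        # toy 3x3 convolution kernel
feat = relu(max_pool(conv2d(x, k)))              # 8x8 -> 6x6 -> 3x3
vec = feat.reshape(-1)                           # flatten into a feature vector
fc_w = np.random.randn(vec.size, 2)              # toy fully-connected weights
logits = vec @ fc_w                              # 2-class output (normal / abnormal)
```

In practice such layers are combined and trained with a deep-learning framework; the loop-based convolution here is only meant to make the filtering, down-sampling, and flattening steps explicit.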
Security device: the security device may include, but is not limited to, devices such as an IPC (network camera); the type of the security device is not limited, and an IPC is taken as the example herein.
Operating performance data: the operating performance data is information generated while the security device is running that can express changes in the device's operating state, including but not limited to at least one of the following: CPU (central processing unit) usage, memory usage, network bandwidth usage, and the like.
With the large-scale deployment of security devices, how to efficiently perform security management and protection on them is an urgent problem. An important part of such protection is detecting whether the operating state of a security device is abnormal. Considering that deep learning has been successfully applied in many fields, and drawing on the excellent performance of neural networks in image-processing tasks, this embodiment provides a deep-learning-based method for detecting operating-state anomalies of security devices.
The method for detecting an abnormal operating state provided in the embodiment of the present application may be divided into two stages, namely, a training stage and a detection stage, and the training stage and the detection stage are described below with reference to specific embodiments.
In the training phase, an initial neural network is trained to obtain the trained target neural network; for example, each neural network parameter in the initial network (such as the convolutional layer parameters (i.e., convolution kernel parameters), pooling layer parameters, excitation layer parameters, fully-connected layer parameters, etc.) is trained to obtain the target neural network. By training the neural network parameters of the initial network, the mapping between inputs and outputs can be fitted.
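"Fitting the mapping between inputs and outputs" by adjusting parameters can be shown on a toy problem. This generic gradient-descent sketch is not the patent's training procedure; the linear model, learning rate, and data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # toy input features
true_w = np.array([1.0, -2.0, 0.5])    # the mapping we want the parameters to fit
y = X @ true_w                         # toy target outputs

w = np.zeros(3)                        # trainable parameters, initialised at zero
for _ in range(500):                   # plain gradient descent on squared error
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= 0.1 * grad

print(np.allclose(w, true_w, atol=1e-3))  # True: the fitted mapping matches
```

A real target neural network would use many more parameters and a framework's optimizer, but the principle is the same: iteratively adjust parameters so the network's outputs match the labels.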
Referring to fig. 1, a schematic flow chart of the operation state anomaly detection method is shown in the training stage.
Step 101: the security device acquires normal sample performance data of itself at multiple acquisition moments, where the normal sample performance data is the sample performance data of the security device when no malicious program is running.
In one possible embodiment, the normal sample performance data may include, but is not limited to, at least one of: the method comprises the steps of obtaining sample performance data of the security equipment when a planning task and a management task are not operated, obtaining sample performance data of the security equipment when the planning task is operated, obtaining sample performance data of the security equipment when the management task is operated, and obtaining sample performance data of the security equipment when the planning task and the management task are operated.
In a possible implementation, the operating state of a security device usually changes smoothly and its resource consumption stays low, so the causes of operating-state changes are usually the following. Case 1: the security device is executing a planning task (i.e., a periodic task, such as a timed direction adjustment or the timed start of a scanning task). Case 2: the security device is executing a management task issued by the management device (i.e., a random task, such as instructing a specified security device to turn, to search, or to start a scanning task). Case 3: the security device is infected with a malicious program and executes an abnormal task triggered by it; malicious programs may include mining viruses, worms, botnet programs, and the like. When the device executes an abnormal task triggered by a mining virus, the CPU usage is high; for a worm, the memory usage is high; and for a botnet program, the network bandwidth usage is high. Cases 1 and 2 are normal behaviors of the security device, while case 3 is abnormal behavior, and the goal of operating-state anomaly detection is to detect case 3.
In view of the above findings, in this embodiment, in the training phase, the planning task and the management task may not be run on the security device, and the sample performance data of the security device when the planning task and the management task are not run is obtained, and the obtained sample performance data is used as the normal sample performance data a 1. The planning task can be operated on the security equipment, but the management task is not operated on the security equipment, the sample performance data of the security equipment during the planning task operation is obtained, and the obtained sample performance data is used as the normal sample performance data a 2. The management task can be operated on the security equipment, but the planning task is not operated on the security equipment, the sample performance data of the security equipment during operation of the management task is obtained, and the obtained sample performance data is used as the normal sample performance data a 3. The planning task and the management task can be operated on the security equipment, the sample performance data of the security equipment during the operation of the planning task and the management task can be obtained, and the obtained sample performance data can be used as the normal sample performance data a 4.
In summary, on the basis that the security device does not run the malicious program, the normal sample performance data of the security device may be obtained, where the normal sample performance data includes at least one of normal sample performance data a1, normal sample performance data a2, normal sample performance data a3, and normal sample performance data a 4.
Normal sample performance data includes, but is not limited to, at least one of: CPU utilization, memory utilization, and network bandwidth utilization. For example, in the operation process of the security device, the planning task and the management task are not operated in the security device, the CPU usage rate, the memory usage rate, the network bandwidth usage rate, and the like are obtained, and these data are used as the normal sample performance data a 1. And (3) running a planning task on the security equipment but not running a management task on the security equipment, acquiring the CPU utilization rate, the memory utilization rate, the network bandwidth utilization rate and the like, and taking the data as normal sample performance data a 2. And (3) running the management task on the security equipment but not running the planning task on the security equipment, acquiring the CPU utilization rate, the memory utilization rate, the network bandwidth utilization rate and the like, and taking the data as normal sample performance data a 3. And (3) running a planning task and a management task on the security equipment, acquiring the CPU utilization rate, the memory utilization rate, the network bandwidth utilization rate and the like, and taking the data as normal sample performance data a 4.
Illustratively, on the basis that the security device does not run a malicious program, N acquisition periods can be defined, each lasting T seconds. For example, among the N acquisition periods, neither the planning task nor the management task is run during the 1st to 10th periods; the planning task (but not the management task) is run during the 11th to 20th periods; the management task is run during the 21st to 30th periods; both the planning task and the management task are run during the 31st to 40th periods; neither is run during the 41st to 50th periods; and so on until the N acquisition periods end.
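Assuming the example schedule cycles through the four task scenarios (neither, planning only, management only, both) in ten-period blocks, the scenario for a given period index can be sketched as follows (the function and names are illustrative, not from the patent):

```python
# Hypothetical 1-based period-index -> task-scenario mapping for a repeating
# 40-period schedule of ten-period blocks, as in the example above.
SCENARIOS = ("neither", "planning only", "management only", "both")

def scenario(period: int) -> str:
    phase = ((period - 1) // 10) % 4
    return SCENARIOS[phase]

print(scenario(1))   # -> neither:         periods 1-10 run no tasks
print(scenario(15))  # -> planning only:   periods 11-20
print(scenario(25))  # -> management only: periods 21-30
print(scenario(35))  # -> both:            periods 31-40
print(scenario(41))  # -> neither:         the schedule repeats
```

Interleaving the four scenarios like this ensures the normal training samples a1 to a4 all appear in the collected data.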
For each acquisition period, the security device collects the CPU usage, memory usage, and network bandwidth usage once per second during the period (T seconds), obtaining normal sample performance data at T+1 acquisition times, such as {time 0, CPU usage 0, memory usage 0, network bandwidth usage 0}, {time 1, CPU usage 1, memory usage 1, network bandwidth usage 1}, …, {time T, CPU usage T, memory usage T, network bandwidth usage T}. Time 1 denotes the 1st second of the acquisition period, CPU usage 1 the CPU usage in the 1st second, memory usage 1 the memory usage in the 1st second, network bandwidth usage 1 the network bandwidth usage in the 1st second, and so on.
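The per-cycle sampling described above might be sketched as follows. Here `read_metrics` is a hypothetical callback returning the three usage figures for a given second; the patent does not specify how the metrics are read:

```python
def collect_cycle(read_metrics, T: int):
    """Collect one record per second over a T-second acquisition period,
    yielding T + 1 samples at times 0, 1, ..., T."""
    samples = []
    for t in range(T + 1):
        cpu, mem, bw = read_metrics(t)  # (CPU usage, memory usage, bandwidth usage)
        samples.append({"time": t, "cpu": cpu, "mem": mem, "bw": bw})
    return samples

# Stubbed metrics source, for illustration only.
cycle = collect_cycle(lambda t: (0.2, 0.3, 0.1), T=60)
print(len(cycle))        # 61 samples, i.e. T + 1
print(cycle[0]["time"])  # 0
```

On a real device the callback would query the operating system (e.g., via /proc or a platform API) rather than return constants.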
In summary, for each of the N acquisition periods, normal sample performance data at T+1 acquisition times can be collected. The number N of acquisition periods may be set arbitrarily, e.g., 50000, and is not limited herein. The label value of the normal sample performance data may be a first value (e.g., 0), indicating that the data is a positive sample, i.e., that the security device is normal.
Step 102: the security device acquires abnormal sample performance data of itself at multiple acquisition moments, where the abnormal sample performance data is the sample performance data of the security device when it runs a malicious program.
In one possible embodiment, the abnormal sample performance data may include, but is not limited to, at least one of: the method comprises the steps of obtaining sample performance data of the security equipment when a planning task and a management task are not operated, obtaining sample performance data of the security equipment when the planning task is operated, obtaining sample performance data of the security equipment when the management task is operated, and obtaining sample performance data of the security equipment when the planning task and the management task are operated.
In a possible implementation manner, in the training phase, malicious programs (such as mining viruses, worms, botnet programs, and the like) can also be run on the security device, and abnormal tasks triggered by the malicious programs are executed. On this basis, the planning task and the management task are not operated in the security equipment, the sample performance data of the security equipment when the planning task and the management task are not operated is obtained, and the obtained sample performance data is used as the abnormal sample performance data b 1. The planning task can be operated on the security equipment but the management task is not operated on the security equipment, the sample performance data of the security equipment during the planning task operation is obtained, and the obtained sample performance data is used as the abnormal sample performance data b 2. The management task can be operated on the security equipment, but the planning task is not operated on the security equipment, the sample performance data of the security equipment during operation of the management task is obtained, and the obtained sample performance data is used as the abnormal sample performance data b 3. The planning task and the management task can be operated on the security equipment, the sample performance data of the security equipment during the operation of the planning task and the management task can be obtained, and the obtained sample performance data can be used as the abnormal sample performance data b 4. It should be noted that each of the above abnormal sample performance data is sample performance data obtained when the security device runs a malicious program.
In summary, while the security device runs the malicious program, the abnormal sample performance data of the security device may be obtained, where the abnormal sample performance data includes at least one of the abnormal sample performance data b1, b2, b3, and b4.
The abnormal sample performance data includes, but is not limited to, at least one of: CPU usage, memory usage, and network bandwidth usage. For example, while the security device is running, a malicious program is run on it; on this basis, neither the planning task nor the management task is run on the security device, the CPU usage, memory usage, network bandwidth usage, and the like are obtained, and these data are used as abnormal sample performance data b1. Then the planning task is run on the security device while the management task is not, the CPU usage, memory usage, network bandwidth usage, and the like are obtained, and these data are used as abnormal sample performance data b2, and so on.
Illustratively, while the security device runs the malicious program, M acquisition periods may be divided, each lasting T seconds. For example, for the 1st to 10th of the M acquisition periods, neither the planning task nor the management task is run on the security device; for the 11th to 20th, the planning task is run; for the 21st to 30th, the management task is run; for the 31st to 40th, both the planning task and the management task are run; for the 41st to 50th, neither task is run again; and so on until the M acquisition periods are finished.
For each acquisition period, the security device may acquire the CPU usage, memory usage, and network bandwidth usage for each second of the period (T seconds), obtaining abnormal sample performance data at T+1 acquisition times, such as {time 0, CPU usage 0, memory usage 0, network bandwidth usage 0}, {time 1, CPU usage 1, memory usage 1, network bandwidth usage 1}, …, {time T, CPU usage T, memory usage T, network bandwidth usage T}.
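The per-period collection described above can be sketched as follows. This is only an illustrative Python simulation, not part of the described method; the function and field names (collect_period_samples, cpu_usage, and the like) are assumptions, and the usage values are randomly generated stand-ins for real device readings.

```python
import random

def collect_period_samples(t_seconds):
    """Collect one sample per second over a T-second acquisition period,
    plus the sample at time 0, yielding T+1 records as in the text.
    Values here are randomly simulated stand-ins for real readings."""
    samples = []
    for t in range(t_seconds + 1):  # acquisition times 0..T inclusive
        samples.append({
            "time": t,
            "cpu_usage": random.uniform(0.0, 100.0),
            "mem_usage": random.uniform(0.0, 100.0),
            "bw_usage": random.uniform(0.0, 100.0),
        })
    return samples

# One acquisition period of T = 60 seconds yields 61 records.
period = collect_period_samples(60)
```

Repeating this for each of the M acquisition periods yields the full set of abnormal samples, each tagged with the second value.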
In summary, for each of the M acquisition periods, the abnormal sample performance data at T+1 acquisition times may be acquired. The number M of acquisition periods may be set arbitrarily (e.g., 50000) and is not limited here. The tag value of the abnormal sample performance data may be a second value (e.g., 1), where the second value indicates that the abnormal sample performance data is a negative sample, i.e., that the security device is abnormal.
Step 103, the security device sends the normal sample performance data at the multiple collection times and the abnormal sample performance data at the multiple collection times to the management device, so that the management device trains the initial neural network on these data to obtain a trained target neural network; the specific training process is described in the subsequent steps.
Step 104, the management device obtains the normal sample performance data of the security device at multiple collection times and the abnormal sample performance data of the security device at multiple collection times.
Step 105, the management device determines the normal sample characteristics of the security device according to the normal sample performance data at the multiple collection times, and determines the abnormal sample characteristics according to the abnormal sample performance data at the multiple collection times. In one possible implementation, step 105 may be implemented as follows:
In a first mode, for each of a plurality of acquisition periods (e.g., N acquisition periods), the period includes normal sample performance data at a plurality (e.g., T+1) of acquisition times. On this basis, a line image may be generated from the normal sample performance data at these acquisition times, with the abscissa being the acquisition time and the ordinate being the normal sample performance data at that time. Then, the line image is converted into a grayscale image of a specified size, and the normal sample features of the security device are determined from the grayscale image; for example, the grayscale image may be used as a normal sample feature of the security device. In summary, each acquisition period corresponds to one normal sample feature, and N acquisition periods correspond to N normal sample features; since the processing is the same for each, one normal sample feature is taken as an example below.
For each of a plurality of acquisition periods (e.g., M acquisition periods), the period includes abnormal sample performance data at a plurality (e.g., T+1) of acquisition times. On this basis, a line image may be generated from the abnormal sample performance data at these acquisition times, with the abscissa being the acquisition time and the ordinate being the abnormal sample performance data at that time. Then, the line image is converted into a grayscale image of a specified size, and the abnormal sample features of the security device are determined from the grayscale image; for example, the grayscale image may be used as an abnormal sample feature of the security device. In summary, each acquisition period corresponds to one abnormal sample feature, and M acquisition periods correspond to M abnormal sample features; since the processing is the same for each, one abnormal sample feature is taken as an example below.
Illustratively, if the normal sample performance data includes the CPU usage, the normal sample features include a CPU usage normal sub-feature corresponding to the CPU usage. If the normal sample performance data includes the memory usage, the normal sample features include a memory usage normal sub-feature corresponding to the memory usage. If the normal sample performance data includes the network bandwidth usage, the normal sample features include a network bandwidth usage normal sub-feature corresponding to the network bandwidth usage. The following describes, with reference to a specific scenario, how to acquire the CPU usage normal sub-feature, the memory usage normal sub-feature, and the network bandwidth usage normal sub-feature.
For the acquisition of the CPU usage normal sub-feature, the normal sample performance data is the CPU usage at T+1 acquisition times, and a line image may be generated from the CPU usage at these acquisition times. In the line image, the abscissa is the acquisition time and the ordinate is the CPU usage at that time; for example, the ordinate corresponding to acquisition time 0 is CPU usage 0, the ordinate corresponding to acquisition time 1 is CPU usage 1, and so on. See fig. 2A for an example of the line image.
As can be seen from fig. 2A, in the line image, for each column (one column per acquisition time), only one position has a value, namely the position of the CPU usage (the abscissa is the acquisition time, the ordinate is the CPU usage at that time), whose pixel value may be, for example, 1 or 255; all other positions in the column have no value, and their pixel values may be set to 0.
Then, the line image may be converted into a grayscale image of a specified size (e.g., 64 × 64); the conversion method is not limited. After the grayscale image is obtained, it may be used as the CPU usage normal sub-feature of the security device. Converting the line image into a grayscale image reduces the noise caused by fluctuations of the line image and meets the input data format requirements of the neural network.
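Since the conversion method is not limited by the text, the following is only one minimal sketch: it rasterizes the sampled points of the usage curve at high resolution (no interpolation between points, for brevity) and then block-averages down to 64 × 64, which both smooths fluctuation noise and produces a fixed-size grayscale input.

```python
import numpy as np

def series_to_grayscale(usages, size=64, hires=256):
    """Rasterize a usage series (values in [0, 100]) as points of a line
    image at hires x hires, then block-average down to size x size to
    obtain a grayscale image. An illustrative sketch only."""
    t_max = len(usages) - 1
    img = np.zeros((hires, hires), dtype=np.float32)
    for t, u in enumerate(usages):
        col = round(t * (hires - 1) / t_max)                 # abscissa: acquisition time
        row = (hires - 1) - round(u * (hires - 1) / 100.0)   # ordinate: usage value
        img[row, col] = 255.0
    block = hires // size
    # Average each block x block tile to down-sample to size x size.
    return img.reshape(size, block, size, block).mean(axis=(1, 3))

# 61 CPU usage samples (T = 60) on a smooth curve around 50%.
gray = series_to_grayscale([50.0 + 10.0 * np.sin(t / 5.0) for t in range(61)])
```

The resulting 64 × 64 array can then serve as the CPU usage normal sub-feature.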
For the acquisition of the memory usage normal sub-feature, the normal sample performance data is the memory usage at T+1 acquisition times, and a line image may be generated from the memory usage at these acquisition times. In the line image, the abscissa is the acquisition time and the ordinate is the memory usage at that time; for example, the ordinate corresponding to acquisition time 0 is memory usage 0, the ordinate corresponding to acquisition time 1 is memory usage 1, and so on. See fig. 2B for an example of the line image. Then, the line image may be converted into a grayscale image of a specified size (e.g., 64 × 64), which may be used as the memory usage normal sub-feature of the security device.
For the acquisition of the network bandwidth usage normal sub-feature, the normal sample performance data is the network bandwidth usage at T+1 acquisition times, and a line image may be generated from the network bandwidth usage at these acquisition times, with the abscissa being the acquisition time and the ordinate being the network bandwidth usage at that time. See fig. 2C for an example of the line image. Then, the line image is converted into a grayscale image of a specified size (e.g., 64 × 64), and the grayscale image is used as the network bandwidth usage normal sub-feature of the security device.
If the abnormal sample performance data includes the CPU usage, the abnormal sample features include a CPU usage abnormal sub-feature corresponding to the CPU usage. If the abnormal sample performance data includes the memory usage, the abnormal sample features include a memory usage abnormal sub-feature corresponding to the memory usage. If the abnormal sample performance data includes the network bandwidth usage, the abnormal sample features include a network bandwidth usage abnormal sub-feature corresponding to the network bandwidth usage. The CPU usage abnormal sub-feature, memory usage abnormal sub-feature, and network bandwidth usage abnormal sub-feature are acquired in a manner similar to their normal counterparts and are not described here again.
In a second mode, for each of the N acquisition periods, the period includes normal sample performance data at a plurality (e.g., T+1) of acquisition times. On this basis, a line image may be generated from the normal sample performance data at these acquisition times, with the abscissa being the acquisition time and the ordinate being the normal sample performance data at that time. Then, the line image is converted into a two-dimensional matrix of a specified scale, and the normal sample features of the security device are determined from the two-dimensional matrix; for example, the two-dimensional matrix may be used as a normal sample feature of the security device.
For each of the M acquisition periods, the period includes abnormal sample performance data at a plurality (e.g., T+1) of acquisition times. On this basis, a line image may be generated from the abnormal sample performance data at these acquisition times, with the abscissa being the acquisition time and the ordinate being the abnormal sample performance data at that time. Then, the line image is converted into a two-dimensional matrix of a specified scale, and the abnormal sample features of the security device are determined from the two-dimensional matrix; for example, the two-dimensional matrix may be used as an abnormal sample feature of the security device.
When the normal sample performance data includes the CPU usage, the normal sample characteristics include CPU usage normal sub-characteristics corresponding to the CPU usage. When the normal sample performance data includes the memory usage rate, the normal sample characteristics include memory usage rate normal sub-characteristics corresponding to the memory usage rate. When the normal sample performance data includes the network bandwidth usage rate, the normal sample characteristics include a network bandwidth usage rate normal sub-characteristic corresponding to the network bandwidth usage rate. When the abnormal sample performance data includes the CPU usage, the abnormal sample characteristics include CPU usage abnormality sub-characteristics corresponding to the CPU usage. When the abnormal sample performance data includes the memory usage rate, the abnormal sample characteristics include memory usage rate abnormal sub-characteristics corresponding to the memory usage rate. When the abnormal sample performance data comprises the network bandwidth utilization rate, the abnormal sample characteristics comprise network bandwidth utilization rate abnormal sub-characteristics corresponding to the network bandwidth utilization rate.
For the acquisition of the CPU usage normal sub-feature, a line image of the CPU usage is first obtained, as shown in fig. 2A; it is obtained as in the first mode and is not repeated here. Then, the line image is converted into a two-dimensional matrix (also referred to as a tensor) of a specified scale, for example a 64 × 64 two-dimensional matrix in which blank positions (positions not on the CPU usage line) are 0 and non-blank positions (positions on the CPU usage line) are 1. This conversion method is not limited, and other methods may be used to convert the line image into the two-dimensional matrix. After the two-dimensional matrix is obtained, it may be used as the CPU usage normal sub-feature of the security device.
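The 0/1 matrix conversion of the second mode can be sketched as follows; as above, this is an illustrative sketch only (points on the line are marked without interpolation), not the patent's fixed method.

```python
import numpy as np

def series_to_matrix(usages, size=64):
    """Convert a usage series (values in [0, 100]) into a size x size
    two-dimensional 0/1 matrix: positions on the line are 1, blank
    positions are 0, mirroring the conversion described above."""
    t_max = len(usages) - 1
    mat = np.zeros((size, size), dtype=np.int8)
    for t, u in enumerate(usages):
        col = round(t * (size - 1) / t_max)                 # abscissa: acquisition time
        row = (size - 1) - round(u * (size - 1) / 100.0)    # ordinate: usage value
        mat[row, col] = 1
    return mat

# 61 CPU usage samples (T = 60) rising linearly from 0% to 60%.
mat = series_to_matrix([float(t) for t in range(61)])
```

The resulting two-dimensional matrix can then serve directly as a sub-feature input to the neural network.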
The obtaining mode of the memory usage rate normal sub-feature, the obtaining mode of the network bandwidth usage rate normal sub-feature, the obtaining mode of the CPU usage rate abnormal sub-feature, the obtaining mode of the memory usage rate abnormal sub-feature and the obtaining mode of the network bandwidth usage rate abnormal sub-feature are similar to the obtaining mode of the CPU usage rate normal sub-feature, and are not repeated here.
Step 106, the management device trains the initial neural network based on the normal sample features and the abnormal sample features to obtain a trained target neural network. The target neural network can be used to identify the operation state of the security device based on data features that represent changes in its operation state; the specific identification process is described in the following embodiments. Illustratively, the operation state of the security device may be normal or abnormal.
The initial neural network may include a first sub-network, a second sub-network, a third sub-network, a fully connected sub-network, and a classification layer sub-network. The normal sample features may include a CPU usage normal sub-feature, a memory usage normal sub-feature, and a network bandwidth usage normal sub-feature, and the abnormal sample features may include a CPU usage abnormal sub-feature, a memory usage abnormal sub-feature, and a network bandwidth usage abnormal sub-feature.
Based on the above, the CPU usage normal sub-feature and the CPU usage abnormal sub-feature may be input to the first sub-network, the memory usage normal and abnormal sub-features to the second sub-network, and the network bandwidth usage normal and abnormal sub-features to the third sub-network. The fully connected sub-network splices the feature vectors output by the first, second, and third sub-networks, determines a target feature vector based on the spliced feature vector, and outputs the target feature vector to the classification layer sub-network.
The classification layer sub-network then determines a feature value corresponding to the target feature vector. If it is determined based on the feature value that the initial neural network has not converged, each network parameter of the initial neural network is adjusted and training continues with the adjusted network. If it is determined based on the feature value that the initial neural network has converged, the converged initial neural network is taken as the trained target neural network.
For example, an initial neural network (i.e., an untrained neural network) may be constructed in advance. The initial neural network may adopt a T-CNN (Triple Convolutional Neural Network) model, or other types of models; this is not limited. For convenience of description, the following embodiments take the T-CNN model as an example, whose input data may consist of three parts.
Referring to fig. 3, which is a schematic diagram of an initial neural network, the initial neural network may include, but is not limited to, a first sub-network, a second sub-network, a third sub-network, a fully-connected sub-network, and a classification layer sub-network.
The first sub-network may be composed of 3 convolutional layers, 3 pooling layers, and 2 fully connected layers, with the ReLU activation function applied within the convolutional layers; in practical applications, the ReLU activation may also be separated from the convolutional layers as a standalone activation layer. Of course, fig. 3 is only an example: the first sub-network is not limited to this and may be composed of any number of convolutional, pooling, and fully connected layers, and the connection relationships among the network layers may also be configured arbitrarily.
The input data of the first sub-network is the CPU usage normal sub-feature (tag value: the first value) and the CPU usage abnormal sub-feature (tag value: the second value); for example, some or all of the N CPU usage normal sub-features and some or all of the M CPU usage abnormal sub-features are input to the first sub-network. The first sub-network processes these sub-features (the processing method is not limited) to obtain a feature vector, denoted feature vector 1, which is output to the fully connected sub-network.
When the first sub-network processes the CPU usage normal sub-feature and the CPU usage abnormal sub-feature, the processing may pass through convolutional layer C1_1, pooling layer P1_1, convolutional layer C1_2, pooling layer P1_2, convolutional layer C1_3, pooling layer P1_3, fully connected layer F1_1, and fully connected layer F1_2 in that order.
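As a rough single-channel sketch of the layer order C1_1 through F1_2 on a 64 × 64 input: kernel sizes, channel counts, and layer widths are not specified in the text and are chosen here arbitrarily, with random untrained weights, purely to illustrate the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D convolution followed by ReLU (the in-layer activation)."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return np.maximum(out, 0.0)

def maxpool2(x):
    """2x2 max pooling, dropping a trailing odd row/column if present."""
    h, w = (x.shape[0] // 2) * 2, (x.shape[1] // 2) * 2
    x = x[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Arbitrary untrained parameters for the sketch.
kernels = [rng.standard_normal((3, 3)) * 0.1 for _ in range(3)]
w1 = rng.standard_normal((16, 36)) * 0.1   # fully connected layer F1_1
w2 = rng.standard_normal((8, 16)) * 0.1    # fully connected layer F1_2

def sub_network(img):
    """C1_1 -> P1_1 -> C1_2 -> P1_2 -> C1_3 -> P1_3 -> F1_1 -> F1_2."""
    x = img
    for k in kernels:                  # three conv + pool stages
        x = maxpool2(conv2d(x, k))
    x = x.reshape(-1)                  # flatten (6 x 6 -> 36) for the FC layers
    x = np.maximum(w1 @ x, 0.0)
    return w2 @ x                      # "feature vector 1"

vec1 = sub_network(rng.random((64, 64)))
```

The second and third sub-networks follow the same pattern on their own inputs.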
For the second sub-network, the second sub-network may be composed of 3 convolutional layers, 3 pooling layers and 2 fully-connected layers, and the second sub-network is not limited to this, and may be composed of any number of convolutional layers, any number of pooling layers and any number of fully-connected layers, and the connection relationship of each network layer may also be configured arbitrarily.
The input data of the second sub-network can be memory usage rate normal sub-features (the tag value is a first value) and memory usage rate abnormal sub-features (the tag value is a second value), the second sub-network processes the memory usage rate normal sub-features and the memory usage rate abnormal sub-features to obtain a feature vector to be output, the feature vector is recorded as a feature vector 2, and the feature vector 2 is output to the fully-connected sub-network.
When the second sub-network processes the normal memory usage sub-feature and the abnormal memory usage sub-feature, the processing may be performed by the convolutional layer C2_1, the pooling layer P2_1, the convolutional layer C2_2, the pooling layer P2_2, the convolutional layer C2_3, the pooling layer P2_3, the fully-connected layer F2_1, and the fully-connected layer F2_2 in sequence.
For the third sub-network, the third sub-network may be composed of 3 convolutional layers, 3 pooling layers and 2 fully-connected layers, and the third sub-network is not limited to this, and may be composed of any number of convolutional layers, any number of pooling layers and any number of fully-connected layers, and the connection relationship of each network layer may also be configured arbitrarily.
The input data of the third sub-network can be network bandwidth utilization rate normal sub-features (the label value is a first value) and network bandwidth utilization rate abnormal sub-features (the label value is a second value), the third sub-network processes the network bandwidth utilization rate normal sub-features and the network bandwidth utilization rate abnormal sub-features to obtain a feature vector to be output, the feature vector is marked as a feature vector 3, and the feature vector 3 is output to the fully-connected sub-network.
When the third sub-network processes the sub-feature of normal network bandwidth utilization and the sub-feature of abnormal network bandwidth utilization, the processing may be performed by convolutional layer C3_1, pooling layer P3_1, convolutional layer C3_2, pooling layer P3_2, convolutional layer C3_3, pooling layer P3_3, fully-connected layer F3_1, and fully-connected layer F3_2 in sequence.
The input data of the fully connected sub-network may be feature vector 1 output by the first sub-network, feature vector 2 output by the second sub-network, and feature vector 3 output by the third sub-network. The fully connected sub-network may splice feature vectors 1, 2, and 3 and process the spliced feature vector to obtain the target feature vector; the processing method is not limited. After the target feature vector is obtained, the fully connected sub-network may output it to the classification layer sub-network.
The input data of the classification layer sub-network may be the target feature vector, and the classification layer sub-network may process it to obtain a corresponding feature value. For example, the classification layer sub-network may include a sigmoid function: the target feature vector is input into the sigmoid function, which outputs the feature value corresponding to the target feature vector.
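The splicing and sigmoid classification steps can be sketched as follows; the weight shapes and the single intermediate fully connected layer are illustrative assumptions, not details fixed by the text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify(vec1, vec2, vec3, w_fc, w_out):
    """Splice the three sub-network feature vectors, map them to the
    target feature vector through a fully connected layer, then squash
    to a (0, 1) feature value with the sigmoid function."""
    spliced = np.concatenate([vec1, vec2, vec3])   # fully connected sub-network input
    target = np.maximum(w_fc @ spliced, 0.0)       # target feature vector
    return float(sigmoid(w_out @ target))          # classification layer output

rng = np.random.default_rng(1)
w_fc = rng.standard_normal((4, 24)) * 0.1
w_out = rng.standard_normal(4) * 0.1
score = classify(rng.random(8), rng.random(8), rng.random(8), w_fc, w_out)
```

The feature value can then be compared against the tag values (first value / second value) to drive the convergence check below.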
If it is determined based on the feature value that the initial neural network has not converged, each network parameter of the initial neural network is adjusted, for example, the network parameters in the first sub-network, the second sub-network, the third sub-network, the fully connected sub-network, and the classification layer sub-network (the adjustment method is not limited), yielding an adjusted initial neural network, and training continues with the adjusted network. That is, the CPU usage normal and abnormal sub-features are input to the adjusted first sub-network, the memory usage normal and abnormal sub-features to the adjusted second sub-network, the network bandwidth usage normal and abnormal sub-features to the adjusted third sub-network, and so on.
If it is determined based on the feature value that the initial neural network has converged, the converged initial neural network is taken as the trained target neural network. The target neural network includes, but is not limited to, a first sub-network, a second sub-network, a third sub-network, a fully connected sub-network, and a classification layer sub-network.
For the classification layer sub-network of the target neural network, it can reflect the mapping relationship between feature vectors and feature values, where a feature value may be a first value or a second value: the first value indicates that the operation state of the security device is normal, and the second value indicates that it is abnormal.
In summary, the classification layer sub-network of the target neural network can reflect the mapping relationship between the feature vectors and the feature values, so that the target neural network can identify the operation state of the security equipment.
For example, a loss function may be constructed in advance; it is related to the feature value corresponding to the target feature vector, and the choice of loss function is not limited. The loss value of the loss function is determined based on the feature value corresponding to the target feature vector, and whether the initial neural network has converged is determined based on the loss value; details are not repeated here.
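Since the text does not fix the loss function, one common choice consistent with a sigmoid output and 0/1 tag values is binary cross-entropy, sketched here as an assumption rather than the patent's stated method.

```python
import math

def bce_loss(predicted, label):
    """Binary cross-entropy between the classification layer's feature
    value (in (0, 1)) and the sample's tag value (0 = normal positive
    sample, 1 = abnormal negative sample). Clamped for stability."""
    eps = 1e-12
    p = min(max(predicted, eps), 1.0 - eps)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

# A near-correct prediction yields a small loss; a wrong one, a large loss.
good = bce_loss(0.95, 1)
bad = bce_loss(0.05, 1)
```

Convergence can then be declared when the loss value falls below a threshold or stops decreasing.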
For example, after the target neural network is trained, the target neural network may be stored, so that the operation state of the security device may be determined based on the target neural network in the subsequent detection process.
For example, since the resources consumed in the training process of the initial neural network are relatively large, the management device may train the initial neural network based on hardware resources (e.g., GPU (Graphics Processing Unit) resources, etc.) to obtain a trained target neural network.
In summary, the trained target neural network can be obtained, and in the detection stage, the operation performance data of the security equipment can be processed based on the target neural network, so as to obtain the operation state of the security equipment.
For the detection stage, refer to fig. 4, which is a schematic flow chart of the operation state anomaly detection method.
Step 401, the security device obtains its own operation performance data at a plurality of collection times.
In one possible embodiment, the operational performance data includes, but is not limited to, at least one of: CPU utilization, memory utilization, and network bandwidth utilization. For example, in the operation process of the security device, the CPU utilization rate, the memory utilization rate, the network bandwidth utilization rate, and the like are obtained, and these data are used as the operation performance data.
Illustratively, the duration of the acquisition period is T seconds, and the security device acquires the CPU usage rate, the memory usage rate, and the network bandwidth usage rate of each second within the acquisition period (T seconds), so as to obtain the operation performance data of multiple acquisition times of the acquisition period, such as { time 0, CPU usage rate 0, memory usage rate 0, network bandwidth usage rate 0}, { time 1, CPU usage rate 1, memory usage rate 1, network bandwidth usage rate 1}, …, { time T, CPU usage rate T, memory usage rate T, network bandwidth usage rate T }.
Step 402, the security device obtains the process information and process identifier of at least one target process it is running. For example, based on the resource occupation of each running process, the security device selects at least one target process from all of its running processes.
Illustratively, based on the CPU resource occupation of each running process of the security device, the P1 processes occupying the most CPU resources are selected as target processes, and their process information and process identifiers are acquired. Based on the memory resource occupation of each running process, the P2 processes occupying the most memory resources are selected as target processes, and their process information and process identifiers are acquired. Based on the network bandwidth resource occupation of each running process, the P3 processes occupying the most network bandwidth resources are selected as target processes, and their process information and process identifiers are acquired.
For example, the value of P1, the value of P2, and the value of P3 may be configured according to experience, without limitation, the value of P1 may be the same as or different from the value of P2, the value of P1 may be the same as or different from the value of P3, and the value of P2 may be the same as or different from the value of P3.
In summary, the security device may select at least one target process from all the processes in operation of the security device, and obtain process information and a process identifier of each target process.
Illustratively, since the same process may appear among both the P1 and P2 processes, among both the P1 and P3 processes, or among both the P2 and P3 processes, the number of target processes is less than or equal to (P1 + P2 + P3).
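The selection and de-duplication of target processes can be sketched as follows; the data layout of `procs` (a mapping from process id to its CPU, memory, and bandwidth occupation) is an assumption for illustration.

```python
def select_target_processes(procs, p1, p2, p3):
    """Pick the p1/p2/p3 processes occupying the most CPU, memory, and
    network bandwidth respectively, then de-duplicate, so the number of
    target processes is at most p1 + p2 + p3."""
    by_cpu = sorted(procs, key=lambda pid: procs[pid][0], reverse=True)[:p1]
    by_mem = sorted(procs, key=lambda pid: procs[pid][1], reverse=True)[:p2]
    by_bw = sorted(procs, key=lambda pid: procs[pid][2], reverse=True)[:p3]
    return sorted(set(by_cpu) | set(by_mem) | set(by_bw))

# pid -> (cpu %, memory %, bandwidth %); illustrative values.
procs = {101: (90, 10, 5), 102: (20, 80, 10), 103: (15, 12, 70), 104: (5, 5, 5)}
targets = select_target_processes(procs, 2, 2, 2)
```

Here process 102 ranks in both the memory and bandwidth lists, so the de-duplicated result has fewer than P1 + P2 + P3 entries.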
Step 403, the security device sends the operation performance data at the multiple collection times, together with the process information and process identifier of the at least one target process, to the management device, so that the management device determines the operation state of the security device from the operation performance data, and, when the operation state is abnormal, determines for each target process, based on its process information, whether it is an abnormal process, sending the process identifier of any process determined to be abnormal.
Step 404, the management device obtains the operation performance data of the security device at a plurality of collection times, and obtains the process information and the process identifier of at least one target process in which the security device is operating.
Step 405, the management device determines data characteristics of the security device according to the operation performance data at the multiple collection times, where the data characteristics are used to indicate changes in the operation state of the security device.
Illustratively, based on the operation performance data at the plurality of acquisition moments, the data characteristics of the security device can be determined. The data characteristics are used for representing the operation state change of the security device, and the operation state change can reflect whether the security device is in a normal state or an abnormal state. For example, when the operation state changes smoothly and the resource consumption is low, the operation state change can reflect that the security device is in a normal state; when the operation state changes drastically, or the resource consumption remains continuously high, the operation state change can reflect that the security device is in an abnormal state. To sum up, because the data characteristics can represent the operation state change of the security device, and the operation state change can reflect whether the security device is in a normal state or an abnormal state, the operation state of the security device can be determined based on the data characteristics; the specific determination mode refers to the following embodiments and is not repeated here.
In one possible implementation, for step 405, the following may be implemented:
A broken line image is generated according to the operation performance data at the plurality of acquisition moments; the abscissa of the broken line image may be the acquisition moment, and the ordinate may be the operation performance data at that acquisition moment. Then, the broken line image is converted into a grayscale image of a specified size, and the data characteristics of the security device are determined according to the grayscale image; for example, the grayscale image can be used as the data characteristics of the security device.
For example, if the operation performance data includes the CPU usage, the data characteristics may include a CPU usage sub-characteristic corresponding to the CPU usage. If the operation performance data includes the memory usage, the data characteristics may include a memory usage sub-characteristic corresponding to the memory usage. If the operation performance data includes the network bandwidth usage, the data characteristics may include a network bandwidth usage sub-characteristic corresponding to the network bandwidth usage.
Regarding the acquisition of the CPU usage sub-characteristic: assuming the operation performance data is the CPU usage at T+1 acquisition moments, a broken line image can be generated according to the CPU usages at these acquisition moments; in the broken line image, the abscissa is the acquisition moment and the ordinate is the CPU usage at that acquisition moment. Then, the broken line image can be converted into a grayscale image of a specified size (e.g., 64 × 64), and the grayscale image can be used as the CPU usage sub-characteristic of the security device; this is not limited.
The obtaining mode of the memory utilization rate sub-feature and the obtaining mode of the network bandwidth utilization rate sub-feature are similar to the obtaining mode of the CPU utilization rate sub-feature, and are not repeated here.
Alternatively, a broken line image is generated according to the operation performance data at the plurality of acquisition moments; the abscissa of the broken line image may be the acquisition moment, and the ordinate may be the operation performance data at that acquisition moment. Then, the broken line image is converted into a two-dimensional matrix of a specified scale, and the data characteristics of the security device are determined according to the two-dimensional matrix; for example, the two-dimensional matrix can be used as the data characteristics of the security device.
For example, if the operation performance data includes the CPU usage, the data characteristics may include a CPU usage sub-characteristic corresponding to the CPU usage. If the operation performance data includes the memory usage, the data characteristics may include a memory usage sub-characteristic corresponding to the memory usage. If the operation performance data includes the network bandwidth usage, the data characteristics may include a network bandwidth usage sub-characteristic corresponding to the network bandwidth usage.
Regarding the acquisition of the CPU usage sub-characteristic: assuming the operation performance data is the CPU usage at T+1 acquisition moments, the broken line image can be generated according to the CPU usages at these acquisition moments. Then, the broken line image is converted into a two-dimensional matrix (which may also be referred to as a tensor) of a specified scale; for example, the broken line image is converted into a 64 × 64 two-dimensional matrix in which blank positions are 0 (that is, positions not corresponding to the CPU usage are 0) and non-blank positions are 1 (that is, positions corresponding to the CPU usage are 1). This conversion method is not limited, and other methods may be used to convert the broken line image into a two-dimensional matrix. After the two-dimensional matrix is obtained, it can be used as the CPU usage sub-characteristic of the security device.
The obtaining mode of the memory utilization rate sub-feature and the obtaining mode of the network bandwidth utilization rate sub-feature are similar to the obtaining mode of the CPU utilization rate sub-feature, and are not repeated here.
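The matrix conversion above can be sketched as follows. This is a simplified illustration under stated assumptions: it rasterises only the sampled (moment, usage) points onto a 64 × 64 grid, omitting interpolation along the line segments between points; the grid size and 0–100% value range are the examples from the text.

```python
# Sketch of converting a CPU-usage series into the 64x64 binary matrix
# described above: cells corresponding to the broken line become 1 and
# blank cells stay 0. Interpolation between sample points is omitted.

def series_to_matrix(values, size=64, v_max=100.0):
    """Rasterise (moment, value) points of a broken line onto a size x size grid."""
    grid = [[0] * size for _ in range(size)]
    n = len(values)
    for i, v in enumerate(values):
        col = min(size - 1, i * (size - 1) // max(1, n - 1))
        row = min(size - 1, int((size - 1) * (1 - v / v_max)))  # high usage -> top rows
        grid[row][col] = 1
    return grid

cpu = [10.0, 35.0, 80.0, 55.0, 20.0]  # CPU usage at T+1 = 5 acquisition moments
m = series_to_matrix(cpu)
print(sum(sum(row) for row in m))  # 5 non-blank cells, one per sample
```

The resulting matrix can then play the role of the CPU usage sub-characteristic fed to the neural network; a production implementation could instead render an actual line chart and threshold it.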
And 406, the management device processes the data characteristics through the trained target neural network to obtain the running state of the security device, wherein the running state is normal or abnormal.
In one possible implementation, the target neural network may include a first sub-network, a second sub-network, a third sub-network, a fully connected sub-network, and a classification layer sub-network, and the data characteristics may include a CPU usage sub-characteristic, a memory usage sub-characteristic, and a network bandwidth usage sub-characteristic. Based on the CPU utilization rate sub-characteristics, the CPU utilization rate sub-characteristics can be input into the first sub-network, and the first feature vector corresponding to the CPU utilization rate sub-characteristics can be output to the fully-connected sub-network by the first sub-network. And inputting the memory utilization rate sub-features into a second sub-network, and outputting a second feature vector corresponding to the memory utilization rate sub-features from the second sub-network to the fully-connected sub-network. And inputting the network bandwidth utilization rate sub-feature into a third sub-network, and outputting a third feature vector corresponding to the network bandwidth utilization rate sub-feature to the fully-connected sub-network by the third sub-network. And splicing the first feature vector, the second feature vector and the third feature vector through the fully-connected sub-network, determining a target feature vector based on the spliced feature vector, and outputting the target feature vector to the classification layer sub-network. And determining a characteristic value corresponding to the target characteristic vector through the classification layer sub-network. And determining the running state of the security equipment according to the characteristic value.
Referring to fig. 3, which is a schematic structural diagram of the target neural network, the first sub-network may process the CPU utilization sub-features to obtain a first feature vector, and output the first feature vector to the fully-connected sub-network. The second sub-network can process the memory usage sub-feature to obtain a second feature vector, and output the second feature vector to the fully-connected sub-network. The third sub-network can process the network bandwidth utilization rate sub-characteristics to obtain a third characteristic vector, and the third characteristic vector is output to the fully-connected sub-network.
And the fully-connected sub-network splices the first feature vector output by the first sub-network, the second feature vector output by the second sub-network and the third feature vector output by the third sub-network, determines a target feature vector based on the spliced feature vectors and outputs the target feature vector to the classification layer sub-network.
The classification layer sub-network reflects the mapping relationship between feature vectors and feature values, so it can determine the feature value corresponding to the target feature vector and determine the operation state of the security device according to that feature value. The feature value may be a first value or a second value: the first value indicates that the operation state of the security device is normal, and the second value indicates that the operation state of the security device is abnormal.
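The data flow through the target neural network of fig. 3 can be sketched structurally as follows. This is not the patented network: a real implementation would use trained layers in a deep-learning framework, whereas the tiny hand-rolled "sub-networks", the averaging, and the 0.5 threshold here are purely illustrative assumptions used to show how the three feature vectors are spliced and classified.

```python
# Structural sketch of fig. 3: three parallel sub-networks, a fully
# connected sub-network that splices their output vectors, and a
# classification layer mapping the target feature vector to a value
# (1 = first value / normal, 2 = second value / abnormal).

def sub_network(feature_matrix):
    """Stand-in sub-network: reduce a 2-D sub-characteristic to a feature vector."""
    return [sum(row) / len(row) for row in feature_matrix]  # per-row mean

def fully_connected(v1, v2, v3):
    """Splice the three feature vectors into the target feature vector."""
    return v1 + v2 + v3

def classification_layer(target_vector, threshold=0.5):
    """Map the target feature vector to the first value (1) or second value (2)."""
    score = sum(target_vector) / len(target_vector)
    return 1 if score < threshold else 2

cpu_f = [[0.1, 0.2], [0.1, 0.1]]   # CPU usage sub-characteristic (toy 2x2)
mem_f = [[0.9, 0.8], [0.9, 0.9]]   # memory usage sub-characteristic
net_f = [[0.7, 0.8], [0.9, 0.6]]   # bandwidth usage sub-characteristic
state = classification_layer(fully_connected(
    sub_network(cpu_f), sub_network(mem_f), sub_network(net_f)))
print("abnormal" if state == 2 else "normal")  # abnormal
```

With sustained high memory and bandwidth sub-characteristics the spliced target feature vector scores above the toy threshold, so the classification layer returns the second value, i.e., the security device is abnormal.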
Step 407, if the running state is that the security device is abnormal, for each target process in which the security device is running, the management device determines whether the target process is an abnormal process based on the process information of the target process. If so, step 408 may be performed.
For example, for each target process in which the security device is running, the management device may determine whether the target process is an abnormal process based on the process information of the target process. If so, step 408 is performed for the abnormal process. If none of the target processes is an abnormal process, the processing flow ends.
In a possible implementation manner, the management device may pre-configure a white list, where the white list is used to record process information of a normal process, and based on this, for each target process of the security device, if the process information of the target process does not belong to the white list, it is determined that the target process is an abnormal process, and if the process information of the target process belongs to the white list, it is determined that the target process is not an abnormal process.
In another possible implementation manner, the management device may pre-configure a blacklist, where the blacklist is used to record process information of an abnormal process, and based on this, for each target process of the security device, if the process information of the target process belongs to the blacklist, it is determined that the target process is an abnormal process, and if the process information of the target process does not belong to the blacklist, it is determined that the target process is not an abnormal process.
In another possible implementation manner, the management device may pre-configure a white list and a black list, where the white list is used to record process information of a normal process, and the black list is used to record process information of an abnormal process, and based on this, for each target process of the security device, if the process information of the target process belongs to the white list, it is determined that the target process is not an abnormal process, and if the process information of the target process belongs to the black list, it is determined that the target process is an abnormal process. Of course, the above-described manners are only examples, and the implementation manner is not limited as long as whether the target process is an abnormal process can be determined.
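The white-list/black-list decision described in the three implementations above can be sketched as follows. This is a minimal illustration combining the third (both-lists) variant with a default for unlisted processes; the list contents, the `file_path` field, and the fall-back of treating unknown processes as abnormal are hypothetical choices, not mandated by the text.

```python
# Sketch of the abnormal-process decision using a pre-configured white
# list and black list. Paths and list contents are illustrative only.

WHITELIST = {"/usr/bin/sshd", "/usr/sbin/ntpd"}  # process info of normal processes
BLACKLIST = {"/tmp/.miner"}                      # process info of abnormal processes

def is_abnormal(process_info, whitelist=WHITELIST, blacklist=BLACKLIST):
    """Return True if the target process is judged abnormal."""
    path = process_info["file_path"]
    if path in blacklist:   # black-list hit -> abnormal process
        return True
    if path in whitelist:   # white-list hit -> not an abnormal process
        return False
    return True             # assumed policy: unlisted process treated as abnormal

print(is_abnormal({"file_path": "/usr/bin/sshd"}))  # False
print(is_abnormal({"file_path": "/tmp/.miner"}))    # True
```

The same shape of check applies whichever process-information field is matched — process command line, process file path, or network connection object information.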
In the above embodiment, the process information (Info_threshold) includes, but is not limited to, at least one of the following: a process command line, a process file path, and network connection object information. The network connection object information may include, but is not limited to, at least one of the following: a connection IP address, a connection port, and a connection protocol. For example, when the white list records the process command lines of normal processes, if the process command line of the target process belongs to the process command lines in the white list, it is determined that the target process is not an abnormal process; if the process command line of the target process does not belong to the process command lines in the white list, it is determined that the target process is an abnormal process. Alternatively, when the black list records the process command lines of abnormal processes, if the process command line of the target process belongs to the process command lines in the black list, it is determined that the target process is an abnormal process; if the process command line of the target process does not belong to the process command lines in the black list, it is determined that the target process is not an abnormal process.
Illustratively, when it is determined that a certain target process is an abnormal process, an alarm log may be generated; the alarm log may include information of the security device and information of the target process, without limitation.
And step 408, the management equipment sends the process identifier of the abnormal process to the security equipment.
And step 409, the security equipment receives the process identifier of the abnormal process sent by the management equipment.
And step 410, the security equipment blocks the abnormal process according to the process identification of the abnormal process.
For example, the security device may send process information and process identifiers of a plurality of target processes to the management device, and after determining an abnormal process from the target processes, the management device may send the process identifier of the abnormal process to the security device, for example, send a blocking instruction to the security device, where the blocking instruction carries the process identifier of the abnormal process. And after receiving the blocking instruction, the security protection device acquires the process identification of the abnormal process from the blocking instruction and blocks the abnormal process corresponding to the process identification.
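The final blocking step can be sketched as follows on a POSIX device. Only the kill step is shown — parsing the blocking instruction is omitted — and terminating via SIGKILL is an assumption; a real security device might instead suspend or sandbox the abnormal process. The disposable child process stands in for the abnormal process whose identifier the management device reported.

```python
# Hedged sketch of blocking an abnormal process by its process identifier
# on a POSIX system. SIGKILL is an illustrative choice of blocking action.
import os
import signal
import subprocess
import sys

def block_process(pid):
    """Block (terminate) the process with the given process identifier."""
    os.kill(pid, signal.SIGKILL)

# Spawn a disposable child standing in for the reported abnormal process.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
block_process(child.pid)
child.wait()
print(child.returncode != 0)  # True: the child did not exit normally
```

After blocking, the process no longer consumes the device's CPU, memory, or network bandwidth resources, which is the effect described in the following paragraph.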
Illustratively, by blocking the abnormal process corresponding to the process identifier, a process that occupies a large amount of resources on the security device can be stopped, so that the resources of the security device are saved and the security of the device is protected.
According to the technical scheme, the operation performance data can express the security condition of the security device, and the data characteristics of the security device can be determined based on the operation performance data at the plurality of acquisition moments. The data characteristics are used for representing the operation state change of the security device: if the operation state changes smoothly and the resource consumption is low, the security device is in a normal state; if the operation state changes drastically or the resource consumption remains continuously high, the security device is in an abnormal state. Based on these characteristics, the target neural network is constructed by learning the change rule of the operation state of the security device when it is maliciously attacked; the operation performance data of the security device is then processed by the target neural network to detect the operation state of the security device. This helps administrators of the security device discover abnormal behavior in time, reduces the difficulty and workload of manual inspection, and improves the security of the security device. The acquisition of the operation performance data is simple, and the detection of the operation state of the security device is fast. The method does not require manually configured thresholds, thereby avoiding problems such as wrong detection results and low accuracy of the detection results. When the security device is abnormal, the abnormal process can be detected and blocked, protecting the security of the security device.
In one possible embodiment, the initial neural network or the target neural network may include a fully connected sub-network and a classification layer sub-network, and on this basis, may include at least one of a first sub-network, a second sub-network, and a third sub-network. For example, only the first sub-network is included, and at this time, the run performance data includes CPU usage, the normal sample performance data includes CPU usage, and the abnormal sample performance data includes CPU usage. For another example, the first sub-network and the second sub-network are included simultaneously, in this case, the operation performance data includes a CPU usage rate and a memory usage rate, the normal sample performance data includes a CPU usage rate and a memory usage rate, and the abnormal sample performance data includes a CPU usage rate and a memory usage rate. For another example, the second sub-network and the third sub-network are included simultaneously, at this time, the operation performance data includes a memory usage rate and a network bandwidth usage rate, the normal sample performance data includes a memory usage rate and a network bandwidth usage rate, and the abnormal sample performance data includes a memory usage rate and a network bandwidth usage rate. Of course, the above is merely an example, and no limitation is made thereto.
In a possible implementation manner, the training phase and the detection phase may be implemented in the same device, or implemented in different devices, for example, the management device implements the training phase and the detection phase, in the training phase, the management device trains the initial neural network to obtain a trained target neural network, and in the detection phase, the management device detects the operation state of the security device based on the target neural network. For another example, the training device implements a training phase, the management device implements a detection phase, and in the training phase, the training device trains the initial neural network to obtain a trained target neural network, and deploys the target neural network to the management device. In the detection stage, the management equipment detects the running state of the security equipment based on the target neural network.
Based on the same application concept as the method, an embodiment of the present application further provides an operating state abnormality detection apparatus, as shown in fig. 5, which is a structural diagram of the apparatus, and includes:
the acquisition module 51 is configured to acquire operation performance data of the security device at multiple acquisition moments, and acquire process information and a process identifier of at least one target process in which the security device is operating; the determining module 52 is configured to determine data characteristics of the security device according to the operation performance data at the multiple acquisition moments, where the data characteristics are used to indicate a change in an operation state of the security device; the processing module 53 is configured to process the data features through the trained target neural network to obtain an operation state of the security device; the running state is that the security equipment is normal or abnormal; the determining module 52 is further configured to determine, for each target process in which the security device is running, whether the target process is an abnormal process based on the process information of the target process, if the running state is that the security device is abnormal; a sending module 54, configured to send the process identifier of the abnormal process to the security device if the target process is the abnormal process, so that the security device blocks the abnormal process according to the process identifier of the abnormal process.
The determining module 52 is specifically configured to, when determining the data characteristics of the security device according to the operation performance data at the multiple collection times: generating a broken line image according to the operation performance data at a plurality of acquisition moments; the abscissa of the broken line image is an acquisition time, and the ordinate of the broken line image is running performance data of the acquisition time; converting the line image into a gray image with a specified size, and determining the data characteristics of the security equipment according to the gray image; or converting the line image into a two-dimensional matrix of a specified scale, and determining the data characteristics of the security equipment according to the two-dimensional matrix.
The determining module 52 is specifically configured to, when determining whether the target process is an abnormal process based on the process information of the target process: if the process information does not belong to a white list, determining that the target process is an abnormal process, and if the process information belongs to the white list, determining that the target process is not the abnormal process; or if the process information belongs to a blacklist, determining that the target process is an abnormal process, and if the process information does not belong to the blacklist, determining that the target process is not the abnormal process; wherein the process information includes at least one of the following information: process command line, process file path, network connection object information.
The obtaining module 51 is further configured to: acquiring normal sample performance data of the security equipment at a plurality of acquisition moments and abnormal sample performance data of the security equipment at a plurality of acquisition moments; the normal sample performance data is sample performance data when the security equipment does not run a malicious program, and the abnormal sample performance data is sample performance data when the security equipment runs the malicious program; determining normal sample characteristics of the security equipment according to the normal sample performance data at a plurality of acquisition moments; determining abnormal sample characteristics of the security equipment according to the abnormal sample performance data at a plurality of acquisition moments; training an initial neural network based on the normal sample characteristics and the abnormal sample characteristics to obtain a trained target neural network; the target neural network is used for identifying the operation state of the security equipment based on the data characteristics capable of representing the operation state change of the security equipment.
The embodiment of the present application further provides an operation state anomaly detection device, including: the acquisition module is used for acquiring the operation performance data of the security equipment at a plurality of acquisition moments; selecting at least one running target process from all running processes of the security equipment based on the resource occupation condition of each running process of the security equipment; the sending module is used for sending the running performance data at the multiple collection moments, the process information and the process identification of the at least one target process to the management equipment so that the management equipment can determine the running state of the security equipment according to the running performance data at the multiple collection moments, and when the running state is abnormal, the sending module determines whether the target process is an abnormal process or not according to the process information of the target process and sends the process identification of the abnormal process when the target process is the abnormal process; a receiving module, configured to receive a process identifier of the abnormal process sent by the management device; and the processing module is used for carrying out blocking processing on the abnormal process according to the process identification of the abnormal process.
The acquisition module is further configured to: acquiring normal sample performance data of the security equipment at a plurality of acquisition moments, wherein the normal sample performance data is sample performance data of the security equipment when a malicious program is not operated; acquiring abnormal sample performance data of the security equipment at a plurality of acquisition moments, wherein the abnormal sample performance data is sample performance data of the security equipment when a malicious program runs; the sending module is further configured to: and sending the normal sample performance data at the multiple collection moments and the abnormal sample performance data at the multiple collection moments to the management equipment, so that the management equipment trains the initial neural network according to the normal sample performance data at the multiple collection moments and the abnormal sample performance data at the multiple collection moments to obtain the target neural network.
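The assembly of training data described above can be sketched as follows. This is an illustrative stand-in: the feature vectors represent the normal/abnormal sample characteristics derived from the sample performance data, and the 0/1 labels are an assumed encoding of "no malicious program running" versus "malicious program running".

```python
# Sketch of building the training set for the initial neural network:
# normal-sample features labelled 0, abnormal-sample features labelled 1.
# Feature values and labels are illustrative assumptions.

def build_training_set(normal_features, abnormal_features):
    """Pair each sample feature with its label (0 = normal, 1 = abnormal)."""
    return [(f, 0) for f in normal_features] + [(f, 1) for f in abnormal_features]

normal = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.2]]  # collected without malicious programs
abnormal = [[0.9, 0.8, 0.9]]                 # collected while running malicious programs
train = build_training_set(normal, abnormal)
print(len(train), sum(label for _, label in train))  # 3 1
```

The initial neural network would then be trained on such labelled pairs until it can map data characteristics to the normal/abnormal feature values, yielding the trained target neural network.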
Based on the same application concept as the method, in the embodiment of the present application, a management device is further provided, and a schematic diagram of a hardware architecture of the management device may be shown in fig. 6, where the schematic diagram includes: a processor 61 and a machine-readable storage medium 62, the machine-readable storage medium 62 storing machine-executable instructions executable by the processor 61; the processor 61 is configured to execute machine-executable instructions to implement the methods disclosed in the above examples of the present application. For example, the processor 61 is configured to execute machine-executable instructions to perform the following steps:
the method comprises the steps of obtaining operation performance data of security equipment at a plurality of collection moments, and obtaining process information and process identification of at least one target process in operation of the security equipment;
determining data characteristics of the security equipment according to the operation performance data at the plurality of acquisition moments, wherein the data characteristics are used for representing the operation state change of the security equipment;
processing the data characteristics through a trained target neural network to obtain the running state of the security equipment; the running state is that the security equipment is normal or abnormal;
if the running state is that the security equipment is abnormal, determining whether the target process is an abnormal process or not based on the process information of the target process aiming at each target process running by the security equipment;
if so, sending the process identification of the abnormal process to the security equipment, so that the security equipment blocks the abnormal process according to the process identification of the abnormal process.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where several computer instructions are stored on the machine-readable storage medium, and when the computer instructions are executed by a processor, the method disclosed in the above example of the present application can be implemented.
The machine-readable storage medium may be, for example, any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk or a DVD), a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. An operation state abnormality detection method, characterized by comprising:
acquiring operation performance data of security equipment at a plurality of acquisition moments, and acquiring process information and a process identification of at least one target process running on the security equipment;
determining data characteristics of the security equipment according to the operation performance data at the plurality of acquisition moments, wherein the data characteristics are used for representing the operation state change of the security equipment;
processing the data characteristics through a trained target neural network to obtain the running state of the security equipment, wherein the running state indicates that the security equipment is normal or abnormal;
if the running state indicates that the security equipment is abnormal, determining, for each target process running on the security equipment, whether the target process is an abnormal process based on the process information of the target process;
if so, sending the process identification of the abnormal process to the security equipment, so that the security equipment blocks the abnormal process according to the process identification of the abnormal process.
2. The method of claim 1, wherein the determining the data characteristics of the security equipment according to the operation performance data at the plurality of acquisition moments comprises:
generating a line image according to the operation performance data at the plurality of acquisition moments, wherein the abscissa of the line image is the acquisition time and the ordinate is the operation performance data at that acquisition time;
converting the line image into a gray image of a specified size and determining the data characteristics of the security equipment according to the gray image; or converting the line image into a two-dimensional matrix of a specified scale and determining the data characteristics of the security equipment according to the two-dimensional matrix.
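The conversion described in claim 2 can be sketched as follows. A hypothetical `series_to_matrix` helper rasterizes a collected performance series into a fixed-size two-dimensional matrix in which each column is a collection moment and each row a value bucket, approximating the line image. The matrix size, the assumed 0-100 percent value range, and the linear resampling are illustrative choices, not part of the claim.

```python
import numpy as np

def series_to_matrix(values, size=28):
    """Rasterize a performance-data series (values assumed in [0, 100])
    into a size x size binary matrix approximating the line image:
    columns are collection moments, rows are value buckets (row 0 at top)."""
    values = np.asarray(values, dtype=float)
    mat = np.zeros((size, size), dtype=np.uint8)
    # resample the series to exactly `size` columns
    cols = np.interp(np.linspace(0, len(values) - 1, size),
                     np.arange(len(values)), values)
    # map each value to a row bucket; higher values map to higher rows
    rows = np.clip(((100.0 - cols) / 100.0 * (size - 1)).astype(int),
                   0, size - 1)
    mat[rows, np.arange(size)] = 1  # one mark per column, tracing the line
    return mat

m = series_to_matrix([10, 50, 90, 30], size=8)
```

The resulting matrix (or an equivalent gray image) can be fed directly to a convolutional sub-network, which is presumably why the claim converts the series to an image-like representation at all.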
3. The method of claim 1, wherein the determining whether the target process is an abnormal process based on the process information of the target process comprises:
if the process information does not belong to a white list, determining that the target process is an abnormal process, and if the process information belongs to the white list, determining that the target process is not an abnormal process; or,
if the process information belongs to a blacklist, determining that the target process is an abnormal process, and if the process information does not belong to the blacklist, determining that the target process is not an abnormal process;
wherein the process information comprises at least one of the following:
a process command line, a process file path, and network connection object information.
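A minimal sketch of the white list / blacklist judgment in claim 3, assuming process information is represented as a (command line, file path, network peer) tuple; the function name and the tuple layout are illustrative, not taken from the patent:

```python
def is_abnormal_process(proc_info, whitelist=None, blacklist=None):
    """Judge a process by its info tuple.
    Whitelist mode: anything not listed is abnormal.
    Blacklist mode: only listed entries are abnormal."""
    if whitelist is not None:
        return proc_info not in whitelist
    if blacklist is not None:
        return proc_info in blacklist
    raise ValueError("either a whitelist or a blacklist is required")

whitelist = {("sshd", "/usr/sbin/sshd", "10.0.0.1:22")}
suspect = ("miner", "/tmp/.x", "203.0.113.9:443")
```

Whitelist mode is the stricter of the two alternatives: it rejects anything unknown, which suits a fixed-function security device whose legitimate process set rarely changes.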
4. The method of claim 1,
wherein the target neural network comprises a first sub-network, a second sub-network, a third sub-network, a fully-connected sub-network and a classification layer sub-network, the data characteristics comprise a CPU utilization rate sub-feature, a memory usage rate sub-feature and a network bandwidth utilization rate sub-feature, and the processing the data characteristics through the trained target neural network to obtain the running state of the security equipment comprises:
inputting the CPU utilization rate sub-feature into the first sub-network, and outputting a first feature vector corresponding to the CPU utilization rate sub-feature to the fully-connected sub-network by the first sub-network;
inputting the memory usage rate sub-feature into the second sub-network, and outputting a second feature vector corresponding to the memory usage rate sub-feature to the fully-connected sub-network by the second sub-network;
inputting the network bandwidth utilization rate sub-feature into the third sub-network, and outputting a third feature vector corresponding to the network bandwidth utilization rate sub-feature to the fully-connected sub-network by the third sub-network;
splicing the first feature vector, the second feature vector and the third feature vector through the fully-connected sub-network, determining a target feature vector based on the spliced feature vectors, and outputting the target feature vector to the classification layer sub-network;
determining a feature value corresponding to the target feature vector through the classification layer sub-network;
and determining the running state of the security equipment according to the characteristic value.
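The forward pass of claim 4 can be sketched in plain NumPy: three single-layer sub-networks, feature-vector splicing in a fully-connected sub-network, and a sigmoid classification layer producing the feature value. The layer sizes, the single-dense-layer branches, and the randomly initialized weights are placeholders for trained parameters; the claim does not specify the sub-network architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
F, H = 64, 16  # per-branch input feature length and branch output width

def branch(x, W, b):
    """One sub-network, sketched as a single dense layer with ReLU."""
    return np.maximum(W @ x + b, 0.0)

# randomly initialised weights stand in for trained parameters
params = {name: (rng.normal(size=(H, F)) * 0.1, np.zeros(H))
          for name in ("cpu", "mem", "net")}
W_fc, b_fc = rng.normal(size=(1, 3 * H)) * 0.1, np.zeros(1)

def detect(cpu_feat, mem_feat, net_feat):
    """Three sub-networks, splicing in the fully-connected sub-network,
    then a sigmoid classification layer yielding the feature value."""
    vecs = [branch(x, *params[n])
            for n, x in (("cpu", cpu_feat), ("mem", mem_feat), ("net", net_feat))]
    target = np.concatenate(vecs)  # spliced target feature vector
    feature_value = 1.0 / (1.0 + np.exp(-(W_fc @ target + b_fc)))
    return "abnormal" if feature_value[0] > 0.5 else "normal"

state = detect(rng.normal(size=F), rng.normal(size=F), rng.normal(size=F))
```

Splitting CPU, memory and bandwidth into separate branches lets each branch learn shape features of its own signal before the joint decision, rather than forcing one network to disentangle three interleaved series.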
5. The method of claim 1, wherein before processing the data characteristics through the trained target neural network to obtain the running state of the security equipment, the method further comprises:
acquiring normal sample performance data of the security equipment at a plurality of acquisition moments and abnormal sample performance data of the security equipment at a plurality of acquisition moments; the normal sample performance data is sample performance data when the security equipment does not run a malicious program, and the abnormal sample performance data is sample performance data when the security equipment runs the malicious program;
determining normal sample characteristics of the security equipment according to the normal sample performance data at a plurality of acquisition moments;
determining abnormal sample characteristics of the security equipment according to the abnormal sample performance data at a plurality of acquisition moments;
training an initial neural network based on the normal sample characteristics and the abnormal sample characteristics to obtain a trained target neural network; the target neural network is used for identifying the operation state of the security equipment based on the data characteristics capable of representing the operation state change of the security equipment.
6. The method of claim 5,
wherein the initial neural network comprises a first sub-network, a second sub-network, a third sub-network, a fully-connected sub-network and a classification layer sub-network, the normal sample characteristics comprise a CPU usage rate normal sub-feature, a memory usage rate normal sub-feature and a network bandwidth usage rate normal sub-feature, and the abnormal sample characteristics comprise a CPU usage rate abnormal sub-feature, a memory usage rate abnormal sub-feature and a network bandwidth usage rate abnormal sub-feature;
the training of the initial neural network based on the normal sample features and the abnormal sample features to obtain a trained target neural network comprises:
inputting the CPU utilization rate normal sub-feature and the CPU utilization rate abnormal sub-feature into the first sub-network, inputting the memory utilization rate normal sub-feature and the memory utilization rate abnormal sub-feature into the second sub-network, and inputting the network bandwidth utilization rate normal sub-feature and the network bandwidth utilization rate abnormal sub-feature into the third sub-network;
splicing the feature vector output by the first sub-network, the feature vector output by the second sub-network and the feature vector output by the third sub-network through the fully-connected sub-network, determining a target feature vector based on the spliced feature vectors, and outputting the target feature vector to the classification layer sub-network;
determining a feature value corresponding to the target feature vector through the classification layer sub-network;
if the initial neural network is determined not to be converged based on the characteristic value, adjusting each network parameter of the initial neural network, and retraining based on the adjusted initial neural network;
and if the initial neural network is determined to be converged based on the characteristic values, determining the converged initial neural network as a trained target neural network.
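The training loop of claim 6, reduced to a toy sketch: a single-layer classifier is fitted on synthetic normal and abnormal sample features, the network parameters are adjusted while the network has not converged, and training stops once the loss change falls below a threshold. The synthetic data, learning rate, and convergence criterion are illustrative assumptions standing in for the patent's unspecified training setup.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 6  # length of the spliced feature vector in this toy sketch

# synthetic normal/abnormal sample features (abnormal shifted upward)
normal = rng.normal(0.0, 1.0, size=(50, dim))
abnormal = rng.normal(2.0, 1.0, size=(50, dim))
X = np.vstack([normal, abnormal])
y = np.array([0] * 50 + [1] * 50, dtype=float)

w, b, lr = np.zeros(dim), 0.0, 0.1
prev_loss = np.inf
for step in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # classification-layer value
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    if abs(prev_loss - loss) < 1e-6:        # convergence check of claim 6
        break
    prev_loss = loss
    grad = p - y                            # adjust the network parameters
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

acc = np.mean((p > 0.5) == (y == 1))
```

On this well-separated toy data the loop converges to near-perfect training accuracy; the structure (forward pass, convergence test, parameter adjustment, retraining) mirrors the claim, not any specific optimizer.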
7. An operation state abnormality detection method, characterized by comprising:
acquiring operation performance data of the security equipment at a plurality of acquisition moments;
selecting at least one running target process from all running processes of the security equipment based on the resource occupation condition of each running process of the security equipment;
sending the operation performance data at the plurality of collection moments and the process information and process identification of the at least one target process to a management device, so that the management device determines the running state of the security equipment according to the operation performance data at the plurality of collection moments, determines, when the running state is abnormal, for each target process whether the target process is an abnormal process according to the process information of the target process, and sends the process identification of the abnormal process when the target process is an abnormal process;
receiving a process identifier of the abnormal process sent by the management equipment;
and blocking the abnormal process according to the process identification of the abnormal process.
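The device-side flow of claim 7 can be sketched as follows: target processes are selected by resource occupation, and an abnormal process reported back by the management device is blocked by its process identification. The process-table layout and the cpu+memory ranking rule are illustrative assumptions; a real security device would query its operating system's process list and terminate the process instead of flipping a flag.

```python
def pick_target_processes(processes, top_n=3):
    """Select the running processes occupying the most resources
    (here: cpu percent + memory percent) as target processes to report."""
    ranked = sorted(processes,
                    key=lambda p: p["cpu_percent"] + p["mem_percent"],
                    reverse=True)
    return ranked[:top_n]

def block_process(pid, process_table):
    """Block an abnormal process by its process identification
    (a stand-in for actually terminating it on the device)."""
    process_table[pid]["blocked"] = True

procs = {
    1: {"pid": 1, "cpu_percent": 1.0, "mem_percent": 2.0, "blocked": False},
    2: {"pid": 2, "cpu_percent": 90.0, "mem_percent": 5.0, "blocked": False},
    3: {"pid": 3, "cpu_percent": 3.0, "mem_percent": 1.0, "blocked": False},
}
targets = pick_target_processes(list(procs.values()), top_n=1)
block_process(targets[0]["pid"], procs)
```

Reporting only the top resource consumers keeps the uplink traffic small while still covering the processes most likely to be responsible for an anomalous CPU, memory, or bandwidth curve.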
8. The method of claim 7, further comprising:
acquiring normal sample performance data of the security equipment at a plurality of acquisition moments, wherein the normal sample performance data is sample performance data of the security equipment when a malicious program is not operated;
acquiring abnormal sample performance data of the security equipment at a plurality of acquisition moments, wherein the abnormal sample performance data is sample performance data of the security equipment when a malicious program runs;
and sending the normal sample performance data at the multiple collection moments and the abnormal sample performance data at the multiple collection moments to the management equipment, so that the management equipment trains the initial neural network according to the normal sample performance data at the multiple collection moments and the abnormal sample performance data at the multiple collection moments to obtain the target neural network.
9. An operation state abnormality detection apparatus, characterized by comprising:
an acquisition module, configured to acquire operation performance data of security equipment at a plurality of acquisition moments, and to acquire process information and a process identification of at least one target process running on the security equipment;
a determination module, configured to determine data characteristics of the security equipment according to the operation performance data at the plurality of acquisition moments, wherein the data characteristics are used for representing the operation state change of the security equipment;
a processing module, configured to process the data characteristics through a trained target neural network to obtain the running state of the security equipment, wherein the running state indicates that the security equipment is normal or abnormal;
the determination module being further configured to determine, for each target process running on the security equipment, whether the target process is an abnormal process based on the process information of the target process, if the running state indicates that the security equipment is abnormal; and
a sending module, configured to send, if the target process is an abnormal process, the process identification of the abnormal process to the security equipment, so that the security equipment blocks the abnormal process according to the process identification of the abnormal process.
10. A management device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the steps of:
acquiring operation performance data of security equipment at a plurality of acquisition moments, and acquiring process information and a process identification of at least one target process running on the security equipment;
determining data characteristics of the security equipment according to the operation performance data at the plurality of acquisition moments, wherein the data characteristics are used for representing the operation state change of the security equipment;
processing the data characteristics through a trained target neural network to obtain the running state of the security equipment, wherein the running state indicates that the security equipment is normal or abnormal;
if the running state indicates that the security equipment is abnormal, determining, for each target process running on the security equipment, whether the target process is an abnormal process based on the process information of the target process;
if so, sending the process identification of the abnormal process to the security equipment, so that the security equipment blocks the abnormal process according to the process identification of the abnormal process.
CN202010867425.8A 2020-08-25 2020-08-25 Running state abnormity detection method, device and equipment Pending CN111738467A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010867425.8A CN111738467A (en) 2020-08-25 2020-08-25 Running state abnormity detection method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010867425.8A CN111738467A (en) 2020-08-25 2020-08-25 Running state abnormity detection method, device and equipment

Publications (1)

Publication Number Publication Date
CN111738467A 2020-10-02

Family

ID=72658813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010867425.8A Pending CN111738467A (en) 2020-08-25 2020-08-25 Running state abnormity detection method, device and equipment

Country Status (1)

Country Link
CN (1) CN111738467A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111935180A * 2020-09-24 2020-11-13 杭州海康威视数字技术股份有限公司 Active defense method, device and system for security equipment
CN113079151A * 2021-03-26 2021-07-06 深信服科技股份有限公司 Exception handling method and device, electronic equipment and readable storage medium
CN114612887A * 2021-09-01 2022-06-10 腾讯科技(深圳)有限公司 Bill abnormity detection method, device, equipment and computer readable storage medium
CN114612887B * 2021-09-01 2023-01-10 腾讯科技(深圳)有限公司 Bill abnormity detection method, device, equipment and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108965340A (en) * 2018-09-25 2018-12-07 网御安全技术(深圳)有限公司 A kind of industrial control system intrusion detection method and system
CN109802955A (en) * 2018-12-29 2019-05-24 360企业安全技术(珠海)有限公司 Authority control method and device, storage medium, computer equipment
CN110378698A (en) * 2019-07-24 2019-10-25 中国工商银行股份有限公司 Transaction risk recognition methods, device and computer system
CN111383128A (en) * 2020-03-09 2020-07-07 中国电力科学研究院有限公司 Method and system for monitoring running state of power grid embedded terminal equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
姚锡凡 (Yao Xifan): "Manufacturing Internet of Things Technology" (《制造物联网技术》), 31 December 2018 *


Similar Documents

Publication Publication Date Title
CN111738467A (en) Running state abnormity detection method, device and equipment
CN109858239B (en) Dynamic and static combined detection method for CPU vulnerability attack program in container
CN111931179B (en) Cloud malicious program detection system and method based on deep learning
CN111586071B (en) Encryption attack detection method and device based on recurrent neural network model
JP2021060987A (en) Method of data-efficient threat detection in computer network
KR102088509B1 (en) Method and apparatus for detection of anomaly on computer system
CN113206842A (en) Distributed safety state reconstruction method based on double-layer dynamic switching observer
EP3812937A1 (en) System and method for protection and detection of adversarial attacks against a classifier
Alabadi et al. Anomaly detection for cyber-security based on convolution neural network: A survey
CN112631611A (en) Intelligent Pompe deception contract identification method and device
CN112131907A (en) Method and device for training classification model
CN108563951B (en) Virus detection method and device
CN113360912A (en) Malicious software detection method, device, equipment and storage medium
CN109743286A (en) A kind of IP type mark method and apparatus based on figure convolutional neural networks
US20220191113A1 (en) Method and apparatus for monitoring abnormal iot device
CN113919497A (en) Attack and defense method based on feature manipulation for continuous learning ability system
CN107239698A (en) A kind of anti-debug method and apparatus based on signal transacting mechanism
CN114090406A (en) Electric power Internet of things equipment behavior safety detection method, system, equipment and storage medium
Lagraa et al. Real-time attack detection on robot cameras: A self-driving car application
CN112099882A (en) Service processing method, device and equipment
CN114024761B (en) Network threat data detection method and device, storage medium and electronic equipment
Hashemi et al. Runtime monitoring for out-of-distribution detection in object detection neural networks
Pranav et al. Detection of botnets in IoT networks using graph theory and machine learning
CN117574371A (en) Malicious code detection system for entropy sensitive calling feature of edge computing platform
CN112764764A (en) Scene model deployment method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20201002)