WO2023179461A1 - A method for handling suspected attack behavior and related apparatus - Google Patents

A method for handling suspected attack behavior and related apparatus

Info

Publication number
WO2023179461A1
Authority
WO
WIPO (PCT)
Prior art keywords
blocking
suspected attack
security
attack behavior
historical
Prior art date
Application number
PCT/CN2023/082044
Other languages
English (en)
French (fr)
Inventor
王仲宇
李肖波
吴朱亮
高云鹏
谢于明
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2023179461A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00: Network architectures or network communication protocols for network security
    • H04L 63/14: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L 63/1441: Countermeasures against malicious traffic
    • H04L 63/1408: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L 63/1416: Event detection, e.g. attack signature detection
    • H04L 63/20: Network architectures or network communication protocols for network security for managing network security; network security policies in general
    • H04L 63/205: Network architectures or network communication protocols for network security for managing network security, involving negotiation or determination of the one or more network security mechanisms to be used, e.g. by negotiation between the client and the server or between peers or by selection according to the capabilities of the entities involved
    • H04L 69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/40: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass, for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
    • H04L 9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/40: Network security protocols

Definitions

  • the present application relates to the field of network security technology, and in particular to a method and related devices for handling suspected attack behavior.
  • the firewall can identify external network attacks based on blacklists and information in suspected attack packets. For example, when the Internet Protocol (IP) address in a suspected attack packet is in the blacklist, the firewall blocks the suspected attack.
  • the attacker can change the IP address that initiated the network attack and re-launch the network attack through an IP address outside the blacklist, resulting in a low blocking rate of network attacks.
  • This application provides a method and related devices for handling suspected attack behaviors, which can improve the blocking rate of network attacks.
  • this application provides a method for handling suspected attack behavior, which can be applied to analyzers.
  • the analyzer receives the alarm from the first security device.
  • the first security device refers to a device with security protection located on the packet transmission path, which may be a firewall or a security gateway.
  • the alarm contains the category of the target's suspected attack behavior, which can be identified by name or number.
  • the analyzer obtains the characteristics of historical suspected attack behaviors in the same category as the target's suspected attack behavior based on the category of the target's suspected attack behavior.
  • the characteristics include at least one of the probability value of historical suspected attack behavior being determined as an attack behavior and a feature set.
  • the feature set includes distribution characteristics of historical suspected attack behaviors in time or distribution characteristics of IP addresses that initiated historical suspected attack behaviors.
  • the analyzer generates a first blocking plan based on the characteristics.
  • the first blocking solution is used to block suspected attack behaviors of the same category as the target's suspected attack behaviors.
  • the analyzer sends a first blocking plan to the first security device, so that the first security device executes the first blocking plan.
  • the analyzer obtains the characteristics of these historical suspected attack behaviors by analyzing historical suspected attack behaviors of the same category as the target suspected attack behavior. Therefore, even if the attacker subsequently changes some information when launching the same category of attack behaviors (for example, the IP address that initiated the attack), this solution can still block these attacks based on the characteristics of historical suspected attacks of the same category, improving the blocking rate of similar suspected attacks.
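  • A minimal Python sketch of this overall flow (receive alarm, look up same-category history, derive characteristics, generate and send a blocking plan) is shown below for illustration only; all class, field, and function names are assumptions, not taken from the application.

```python
# Minimal sketch of the analyzer-side flow described above; all names are assumptions.
from dataclasses import dataclass

@dataclass
class Alarm:
    device_id: str   # identifier of the first security device
    category: str    # category (type ID) of the target suspected attack behavior

def derive_characteristics(history):
    """Characteristics of same-category history: probability of being an attack, plus a feature set."""
    if not history:
        return {"probability": 0.0, "active_days": 0}
    attacks = sum(1 for record in history if record["handled_as_attack"])
    return {
        "probability": attacks / len(history),
        "active_days": len({record["day"] for record in history}),
    }

def handle_alarm(alarm, history_store, send_plan, prob_threshold=0.8, min_days=5):
    history = history_store.get(alarm.category, [])
    c = derive_characteristics(history)
    if c["probability"] >= prob_threshold and c["active_days"] >= min_days:
        plan = {"device": alarm.device_id, "block_category": alarm.category}
        send_plan(alarm.device_id, plan)   # the first security device executes the blocking plan

# Example usage with toy data
history_store = {"RDP_bruteforce": [{"handled_as_attack": True, "day": d} for d in range(6)]}
handle_alarm(Alarm("fw-1", "RDP_bruteforce"), history_store, lambda dev, plan: print(dev, plan))
```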
  • the historical suspected attack behavior is detected by the first security device.
  • the first blocking solution generated based on the characteristics is more targeted for the first security device and can improve the blocking rate of network attack behaviors.
  • the analyzer receives the input first security blocking policy, and when the characteristics satisfy the first security blocking policy, the first blocking plan is generated.
  • the first security blocking policy can be understood as a security blocking policy input by the user, and it can be adjusted according to actual needs. For example, in some special periods it is necessary to block external attacks as much as possible and there is a greater tolerance for false blocking, so the first security blocking policy is adjusted to block more suspected attacks; in normal periods, in order to avoid affecting normal business, the tolerance for false blocking is low, and the first security blocking policy can be adjusted to block fewer suspected attacks.
  • the analyzer receives the first security blocking policy input by the user, and when the characteristics satisfy the first security blocking policy, generates the first blocking plan so that the blocking plan is more in line with the user's needs.
  • the first security blocking policy includes a score threshold
  • the analyzer obtains the score of the feature according to the feature and the target scoring rule of the feature. When the score of the feature is greater than the score threshold, it is determined that the feature satisfies the first security blocking policy. When the number of features is multiple, the obtained score may be the total score obtained by weighting the respective scores of the multiple features.
  • the analyzer receives an input processing opinion, and the processing opinion is used to determine the probability value of a historical suspected attack behavior being determined as an attack behavior.
  • the processing opinion may be blocking or only warning. For example, if there are a total of 100 suspected historical attacks, and 80 of them were blocked, then the probability that the historical suspected attack behavior is determined to be an attack is 80%.
  • historical suspected attack behaviors are detected by multiple security devices, and the feature set also includes distribution characteristics of historical suspected attack behaviors on multiple security devices.
  • Historical suspected attack behaviors are detected by multiple security devices, so that the generation of the first blocking solution can refer to the characteristics of historical suspected attack behaviors detected by more security devices.
  • the feature set also includes the distribution characteristics of historical suspected attack behaviors on multiple security devices, so that the generation of the first blocking solution can refer to more types of features. Therefore, this application can make the first blocking solution more accurate and comprehensive.
  • the analyzer receives the input second security blocking policy of the second security device, and when the characteristics satisfy the second security blocking policy, generates a second blocking plan and sends the second blocking plan to the second security device, so that the second security device executes the second blocking plan.
  • the second blocking solution is used to block suspected attack behaviors of the same category as the target's suspected attack behaviors. It can be seen that this application can use features to obtain blocking solutions that adapt to different blocking strategies, so that different security devices can complete blocking of attack behaviors based on blocking solutions that adapt to their own blocking strategies.
  • sending the first blocking plan to the first security device includes: sending the first blocking plan to the controller of the first security device, so that the controller of the first security device sends the first blocking plan to the first security device.
  • this application provides a device for handling suspected attack behavior, including a transceiver unit, an acquisition unit, and a generation unit.
  • the transceiver unit is configured to receive an alarm from the first security device, where the alarm contains the category of the target's suspected attack behavior.
  • the acquisition unit is configured to acquire, according to the category of the target's suspected attack behavior, the characteristics of historical suspected attack behaviors of the same category as the target's suspected attack behavior.
  • the characteristics include at least one of the probability value of the historical suspected attack behaviors being determined as attack behaviors and a feature set.
  • the feature set includes the distribution characteristics of the historical suspected attack behaviors in time, or the distribution characteristics of the IP addresses that initiated the historical suspected attack behaviors.
  • the generation unit is used to generate a first blocking plan based on the characteristics, and the first blocking plan is used to block suspected attack behaviors of the same category as the target suspected attack behavior.
  • the transceiver unit is also configured to send the first blocking plan to the first security device, so that the first security device executes the first blocking plan.
  • the historical suspected attack behavior is detected by the first security device.
  • the obtaining unit is further configured to receive an input first security blocking policy; and the generating unit is configured to generate a first blocking plan when the characteristics satisfy the first security blocking policy.
  • the first security blocking policy includes a score threshold
  • the acquisition unit is configured to obtain the score of the feature according to the feature and the target scoring rule of the feature, and when the score of the feature is greater than the score threshold, determine that the feature satisfies the first security blocking policy.
  • the acquisition unit is further configured to receive an input processing opinion for the target's suspected attack behavior, and the processing opinion is used to determine the probability value of the historical suspected attack behavior being determined as an attack behavior.
  • historical suspected attack behaviors are detected by multiple security devices, and the feature set also includes distribution characteristics of historical suspected attack behaviors on multiple security devices.
  • the obtaining unit is further configured to receive the input second security blocking policy of the second security device; and the generating unit is further configured to generate a second blocking plan when the characteristics satisfy the second security blocking policy.
  • the second blocking plan is used to block suspected attack behaviors of the same category as the target suspected attack behavior; the transceiver unit is also used to send the second blocking plan to the second security device, so that the second security device executes the second blocking plan.
  • the transceiver unit is configured to send the first blocking plan to the controller of the first security device, so that the controller of the first security device sends the first blocking plan to the first security device.
  • this application provides a computer device.
  • the computer device includes a memory and a processor.
  • a processor configured to execute computer programs or instructions stored in the memory, so that the computer device performs any method of the first aspect.
  • the present application provides a computer-readable storage medium.
  • the computer-readable storage medium has program instructions. When the program instructions are executed directly or indirectly, any method in the first aspect is implemented.
  • the present application provides a chip system.
  • the chip system includes at least one processor.
  • the processor is configured to execute computer programs or instructions stored in the memory.
  • when the computer program or instructions are executed by the at least one processor, any method of the first aspect is implemented.
  • the present application provides a computer program product, which includes instructions that, when executed on a computer, cause the computer to perform any of the methods of the first aspect.
  • Figure 1 is a schematic diagram of the network architecture provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of a first embodiment of a method for handling suspected attack behavior according to the embodiment of the present application
  • Figure 3 is a schematic diagram of a second embodiment of a method for handling suspected attack behavior according to the embodiment of the present application
  • Figure 4 is a schematic diagram of a third embodiment of a method for handling suspected attack behavior according to the embodiment of the present application.
  • Figure 5 is a schematic diagram of a fourth embodiment of a method for handling suspected attack behavior according to the embodiment of the present application.
  • Figure 6 is a schematic diagram of a device for handling suspected attack behavior according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • the embodiments of the present application provide a method and related devices for handling suspected attack behaviors. This method can improve the blocking rate of network attack behaviors.
  • At least one of a, b, or c can represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c can be single or multiple.
  • the embodiments of this application can be applied to the network architecture shown in Figure 1.
  • the network architecture includes a data center network, a campus network, and a branch network.
  • the branch network is a type of campus network and corresponds to the branches of the campus.
  • Security devices for maintaining network security are set up at the boundaries of these three networks, and the security devices of these three networks are connected to the analyzer for communication.
  • security devices include firewalls, security gateways, etc.
  • Analyzers are devices with computing capabilities, such as personal computers, servers, server clusters, virtual machines, cloud services, etc. Cloud services are, for example, public cloud, private cloud, or hybrid cloud.
  • the security device is used for the detection of suspected network attacks and local real-time defense. After detecting a suspected network attack, it sends an alarm to the analyzer.
  • the detection types include intrusion prevention detection, zombie worm detection, malicious file detection, etc. It should be noted that some suspected network attacks will be directly blocked at the security device.
  • the analyzer is used to analyze the alarms sent by the security device, and notify the security device based on the analysis results to block suspected network attacks by malicious attackers, and at the same time provide security emergency services.
  • When a blacklist is used to block network attacks, after a network attack fails, the attacker can change the IP address that initiated the network attack and then re-launch the same type of network attack through an IP address outside the blacklist, resulting in a low blocking rate of network attacks.
  • the embodiment of this application provides a method for handling suspected attack behavior.
  • This method can be applied to the analyzer in Figure 1.
  • the analyzer obtains the characteristics of these historical suspected attack behaviors by analyzing historical suspected attack behaviors of the same category as the target suspected attack behavior. Even if the attacker subsequently changes some information when launching attack behaviors of the same category (for example, the IP address that initiated the attack), this solution can still block these attacks based on the characteristics of historical suspected attacks of the same category, improving the blocking rate of similar suspected attacks.
  • the analyzer can also generate a blocking plan based on the user's security blocking policy, so that the blocking plan is more in line with the user's needs.
  • the analyzer can also obtain the characteristics of historical suspected attack behaviors based on the security service experts' handling opinions (or handling measures) for suspected attack behaviors, so as to obtain more accurate characteristics of historical suspected attack behaviors and further improve the accuracy of the blocking solution.
  • After detecting the target suspected attack behavior, the security device sends an alarm of the target suspected attack behavior. For alarms of suspected attack behavior, security service experts can recommend handling measures.
  • the analyzer analyzes and formulates the blocking plan based on the disposal measures and the user's security blocking strategy. After obtaining the blocking plan, the analyzer can send the blocking plan to the security device through the security device controller. After receiving the blocking plan, the security device can block suspected attack behaviors of the same category as the target suspected attack behavior according to the blocking plan.
  • this application provides an embodiment of a method for handling suspected attack behavior.
  • This embodiment includes steps 101 to 104.
  • Step 101 When detecting suspected attack behavior of the target, the first security device sends an alarm to the analyzer.
  • the first security device refers to a device with security protection located on the packet transmission path, which may be a firewall or a security gateway.
  • This alarm contains information about the target's suspected attack behavior, such as the category of the target's suspected attack behavior.
  • the alarm may also include other information, for example, at least one of the following: the attacker's IP address and port number, the attacked IP address and port number, the zone where the attacker is located (for example, a trusted (trust) zone or an untrusted (untrust) zone), the zone where the attacked is located (trust zone or untrust zone), the identification of the first security device that detected the target suspected attack behavior, the time when the target suspected attack behavior occurred, the protocol type of the attack packet, and the first security device's handling action for the attack (for example, blocking or only alerting).
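  • As an illustration only, the alarm fields listed above could be represented along the following lines; every field name and value in this sketch is hypothetical and chosen for readability, not taken from the application.

```python
# Hypothetical representation of the alarm fields listed above (all names are assumptions).
from dataclasses import dataclass

@dataclass
class SuspectedAttackAlarm:
    category: str            # type identifier of the target suspected attack behavior
    attacker_ip: str         # attacker IP address
    attacker_port: int
    victim_ip: str           # attacked IP address
    victim_port: int
    attacker_zone: str       # e.g. "trust" or "untrust"
    victim_zone: str
    device_id: str           # first security device that detected the behavior
    timestamp: str           # time when the behavior occurred
    protocol: str            # e.g. "RDP", "SSH", "FTP"
    handling_action: str     # e.g. "block" or "alert_only"

alarm = SuspectedAttackAlarm("RDP local account brute force cracking attempt",
                             "203.0.113.7", 50022, "10.0.0.5", 3389,
                             "untrust", "trust", "fw-1", "2023-03-16T02:14:00Z",
                             "RDP", "alert_only")
```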
  • the attacker refers to the device that initiates the target's suspected attack behavior
  • the attacked refers to the device to be attacked by the target's suspected attack behavior.
  • the trusted zone can be the area within the local area network where the attacked is located, and the untrusted zone can be the area outside that local area network.
  • Each suspected attack behavior has a type identifier (identifier, ID).
  • The identifier is, for example, a name or a code.
  • the category of the target's suspected attack behavior can be represented by a type identifier.
  • the categories of suspected target attack behaviors can include remote desktop protocol (Remote Desktop Protocol, RDP) local account brute force cracking attempts, and suspected structured query language (Structured Query Language, SQL) injection attack attempts.
  • the message protocol type may be, for example, Secure Shell Protocol (Secure Shell Protocol, SSH), RDP, File Transfer Protocol (File Transfer Protocol, FTP), etc.
  • the first security device can first handle the target suspected attack behavior and then send an alarm to the analyzer so that the analyzer can analyze the target suspected attack behavior; therefore, the alarm can include the handling action of the first security device.
  • the analyzer receives an alarm from the first security device, where the alarm contains a category of the target's suspected attack behavior.
  • Step 102 The analyzer obtains the characteristics of the historical suspected attack behavior of the same category as the target suspected attack behavior based on the category of the target suspected attack behavior.
  • the characteristics include at least one of the probability value of the historical suspected attack behaviors being determined as attack behaviors and a feature set.
  • the feature set includes the distribution characteristics of historical suspected attack behaviors in time, or the distribution characteristics of the IP addresses that initiated historical suspected attack behaviors.
  • the historical suspected attack behavior of the same category as the target suspected attack behavior can be understood as the historical suspected attack behavior with the same type identifier as the target suspected attack behavior.
  • the probability value of a historical suspected attack behavior being determined as an attack behavior is the probability that a historical suspected attack behavior is determined as an attack behavior, which can be the ratio of the number of historical suspected attack behaviors that are determined as attack behaviors to the number of historical suspected attack behaviors.
  • the analyzer can mark the suspected attack behavior as an attack behavior and update the probability value of the historical suspected attack behavior being determined as an attack behavior accordingly.
  • the analyzer can also modify the marking results of historical suspected attack behaviors based on the security service experts' opinions on handling historical suspected attack behaviors.
  • the handling opinions are blocking or only warning.
  • Security service experts' handling opinions on historical suspected attacks include two situations. The first case is that after receiving historical suspected attack behaviors, security service experts give handling opinions; the second case is that after the security device has implemented handling measures for historical suspected attack behaviors, security service experts correct those handling measures, and the corrected handling opinions of the security service experts are recorded.
  • the analyzer first marks the suspected attack behavior as an attack behavior. However, if the security service expert's handling opinion for the suspected attack behavior is "alarm only", the analyzer will mark the suspected attack behavior as a non-attack behavior.
  • Based on the corrected marks, the analyzer updates the probability that a historical suspected attack behavior is determined to be an attack behavior. For example, if there are a total of 100 historical suspected attacks and security service experts have blocked 80 of them, then the probability that the historical suspected attack behavior is determined to be an attack is 80%.
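  • A minimal sketch of this probability update is given below, assuming each historical record carries a mark that may be corrected by a security service expert's handling opinion ("block" or "alert only"); the record layout and function names are assumptions.

```python
# Sketch: update the probability value from expert-corrected marks (record layout is an assumption).
def attack_probability(history):
    """Ratio of records finally marked as attacks to all same-category historical records."""
    if not history:
        return 0.0
    attacks = sum(1 for record in history if record["final_mark"] == "attack")
    return attacks / len(history)

def apply_expert_opinion(record, opinion):
    # "block" confirms the attack mark; "alert_only" corrects it to non-attack.
    record["final_mark"] = "attack" if opinion == "block" else "non_attack"
    return record

history = [apply_expert_opinion({"id": i}, "block" if i < 80 else "alert_only") for i in range(100)]
print(attack_probability(history))  # 0.8, i.e. 80% as in the example above
```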
  • the method provided by the embodiment of the present application may also include: the analyzer receives an input processing opinion, and the processing opinion is used to determine the probability value of the historical suspected attack behavior being determined as an attack behavior.
  • the temporal distribution characteristics of historical suspected attack behaviors describe the characteristics of historical suspected attack behaviors from the time dimension. They are used to indicate the persistence or contingency of historical attack behaviors. Specifically, they can include a variety of information.
  • the time distribution characteristics of historical suspected attack behaviors may include the number of suspected attack behaviors that occurred in each of multiple time periods in history; they may also include the ratio of the number of suspected attack behaviors that occurred in each of these time periods to the total number of historical suspected attacks.
  • the time period corresponding to the suspected attack behavior that occurred in history (hereinafter referred to as the preset time period) can be set according to requirements, such as the latest day or month.
  • the preset time period includes the above-mentioned multiple time periods.
  • for example, when the preset time period is the most recent month, the multiple time periods can be every day of that month.
  • when the preset time period is the most recent day, the multiple time periods can be every hour of that day.
  • the temporal distribution characteristics of historical suspected attack behaviors may also include the frequency of historical suspected attack behaviors in each time period; the frequency may be the ratio of the number of historical suspected attack behaviors to the number of days. It is understandable that if historical suspected attack behaviors only appear in a certain time period within the preset time period, or appear very frequently in a certain time period and only sporadically in other time periods, it means that the historical suspected attack behaviors are accidental. If the number of historical suspected attack behaviors is distributed relatively evenly across multiple time periods, it indicates that the historical suspected attack behaviors are persistent.
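  • As a sketch of the temporal distribution features described above (counts per time period, ratio of active days, per-period frequency), the following assumes days as the time periods; the function and field names are hypothetical.

```python
# Sketch: temporal distribution features over a preset time period (names/granularity are assumptions).
from collections import Counter

def temporal_features(event_days, preset_days):
    """event_days: day index (0..preset_days-1) of each historical suspected attack behavior."""
    per_day = Counter(event_days)                       # number of behaviors in each time period
    active_days = len(per_day)
    return {
        "count_per_period": dict(per_day),
        "active_day_ratio": active_days / preset_days,  # days with behaviors / total days
        "avg_per_active_day": len(event_days) / active_days if active_days else 0.0,
    }

# Evenly spread events suggest a persistent behavior; a single burst suggests an accidental one.
print(temporal_features([0, 1, 2, 3, 4, 5, 6], preset_days=30))
print(temporal_features([3, 3, 3, 3, 3, 3, 3], preset_days=30))
```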
  • the distribution characteristics of IP addresses that initiated historical suspected attack behaviors describe the characteristics of historical suspected attack behaviors from the perspective of the widespread distribution of attackers. They are used to indicate the widespread nature of attack sources, and can specifically include a variety of information.
  • the distribution characteristics of IP addresses that initiated historical suspected attack behaviors may include the number of IP addresses (which may also be called source IPs) that initiated historical suspected attack behaviors in the recent period.
  • the distribution characteristics of the IP addresses that initiated historical suspected attack behaviors may also include the distribution characteristics of the regions to which the IP addresses that initiated historical suspected attack behaviors belong, such as the average number of source IPs in each region. The greater the number of IPs that have initiated suspected historical attacks, the more widespread the attack source is.
  • a security device is usually responsible for a network (also called a site).
  • the features in the feature set can also be called single-site features.
  • the above-mentioned historical suspected attack behaviors can also be detected by multiple security devices. That is, multiple security devices responsible for multiple networks send their respective detected suspected attack behaviors to the analyzer, and the analyzer obtains the characteristics of historical suspected attack behaviors based on the suspected attack behaviors sent by multiple security devices from multiple networks. At this time, the features in the feature set can also be called global features.
  • multiple security devices can be located in adjacent areas. For example, multiple security devices are located in North China, or are located in South China, or are located in the same country.
  • the security equipment of the data center network, the security equipment of the campus network, and the security equipment of the branch network can correspond to one analyzer.
  • Multiple security devices include the security equipment of the data center network, the security equipment of the campus network, and the security equipment of the branch network.
  • The multiple security devices may also be a combination of some of these, for example the security devices of the campus network and the security devices of the branch network.
  • the global characteristics can be the same as the characteristics of a single site, or they can be different from the characteristics of a single site.
  • the following uses examples to illustrate the differences between global features and single-site features.
  • For single-site features, the distribution characteristics of IP addresses that initiated historical suspected attack behaviors can be the number of IP addresses that initiated historical suspected attack behaviors within a preset time period; for global features, these distribution characteristics can be the average number of IPs that initiated historical suspected attack behaviors on each security device within the preset time period, that is, the ratio of the total number of IPs that initiated historical suspected attack behaviors to the number of security devices.
  • the feature set may not only include the previously mentioned features, but also include the distribution characteristics of historical suspected attack behaviors on multiple security devices.
  • the distribution characteristics of historical suspected attack behaviors on multiple security devices describe the characteristics of historical suspected attack behaviors from the perspective of how widely they are distributed across security devices. They are used to indicate how widespread the historical suspected attack behaviors are among security devices and can specifically include a variety of information. For example, they may include the number of security devices on which historical suspected attack behaviors have been detected, or the ratio of that number to the total number of security devices.
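  • The source-IP and per-device distribution features described above could be computed along the following lines; the record layout and field names are assumptions, and the per-device average mirrors the global-feature example given earlier.

```python
# Sketch: source-IP spread and per-device spread of same-category history (record layout assumed).
def distribution_features(records, total_devices):
    """records: list of dicts with 'src_ip' and 'device_id' for one behavior category."""
    src_ips = {r["src_ip"] for r in records}
    devices = {r["device_id"] for r in records}
    return {
        "src_ip_count": len(src_ips),                          # single-site style count
        "avg_src_ip_per_device": len(src_ips) / total_devices, # global-feature style average
        "device_ratio": len(devices) / total_devices,          # devices that saw the behavior / all devices
    }

records = [{"src_ip": f"198.51.100.{i}", "device_id": f"fw-{i % 3}"} for i in range(12)]
print(distribution_features(records, total_devices=5))
```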
  • Step 103 The analyzer generates a first blocking plan based on the characteristics.
  • the first blocking plan is used to block suspected attack behaviors of the same category as the target suspected attack behavior.
  • the analyzer can directly generate the first blocking solution based on the characteristics. For example, taking the probability value of historical suspected attack behaviors being determined as attack behaviors and the temporal distribution characteristics of historical suspected attack behaviors as the characteristics: if the probability value is greater than a certain probability threshold, and the ratio of the number of days with historical suspected attack behaviors in the preset time period to the total number of days is within a certain range, the first blocking plan is generated.
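  • A minimal sketch of this direct generation rule follows, assuming the probability threshold and the acceptable range for the day ratio are configuration values; the names and numbers are hypothetical.

```python
# Sketch: directly generating the first blocking plan from the characteristics (thresholds assumed).
def should_generate_plan(probability, active_day_ratio,
                         prob_threshold=0.8, ratio_range=(0.3, 1.0)):
    """probability: chance that same-category history was determined to be attacks;
    active_day_ratio: days with such behaviors in the preset period / total days."""
    low, high = ratio_range
    return probability > prob_threshold and low <= active_day_ratio <= high

if should_generate_plan(probability=0.9, active_day_ratio=0.5):
    print("generate first blocking plan")
```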
  • the analyzer can also score the features first, and then generate the first blocking plan based on the obtained scores. This method will be introduced in detail below with reference to Figure 4.
  • the content of the first blocking plan can take various forms, for example as shown in the following table.
  • The blocking direction is optional. When the blocking direction is not specified, the security device can block all suspected attack behaviors that are of the same category as the suspected attack behavior to be blocked, regardless of direction.
  • the blocking direction may be determined based on the security zones where the attacker and the attacked are located in historical suspected attack behaviors. For example, if the attacker of historical suspected attack behaviors is usually located in the untrust zone and the attacked is usually located in the trust zone, the blocking direction can be to block suspected attack behaviors from the untrust zone to the trust zone; if the attacker of historical suspected attack behaviors is usually located in the trust zone and the attacked is usually also located in the trust zone, the blocking direction can be to block suspected attack behaviors from the trust zone to the trust zone. However, if the zones where the attacker and the attacked are located show no regularity, the first blocking plan does not need to specify the blocking direction.
  • the security device may execute the first blocking plan according to a default time.
  • the default time is one week after receiving the first blocking plan.
  • the first blocking solution may include a statute of limitations.
  • the statute of limitations may be determined based on the time when historical suspected attack behaviors occurred. For example, if suspected historical attacks only occurred in the past week, the statute of limitations is one month; if suspected historical attacks have occurred in the past few years, the statute of limitations can be one year.
  • an example of the contents of the first blocking solution may be: the unique identifier of the security device that performs the blocking action indicates the first security device, the category of suspected attack behavior to be blocked is an RDP local account brute force cracking attempt, and the blocking direction is to block suspected attack behavior from outside the LAN where the attacked is located (the untrusted zone) to within the LAN where the attacked is located (the trusted zone).
  • the content of the first blocking plan may also include the protocol type of the packet to be blocked, for example, the protocol type of the packet to be blocked is an RDP attack packet.
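  • Putting the fields above together, a blocking plan could be represented roughly as follows; every field name and value in this sketch is illustrative only and not taken from the application.

```python
# Illustrative only: one possible shape for the first blocking plan (all names are assumptions).
first_blocking_plan = {
    "executing_device": "fw-1",                                       # unique ID of the first security device
    "block_category": "RDP local account brute force cracking attempt",
    "block_direction": {"from_zone": "untrust", "to_zone": "trust"},  # optional
    "statute_of_limitations_days": 30,                                # e.g. derived from when history occurred
    "blocked_protocol": "RDP",                                        # optional protocol of packets to block
    "action": "block",
}
print(first_blocking_plan)
```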
  • Step 104 The analyzer sends the first blocking plan to the first security device, so that the first security device executes the first blocking plan.
  • the analyzer can directly send the first blocking plan to the first security device.
  • the analyzer may also indirectly send the first blocking plan to the first security device.
  • the analyzer sends a first blocking solution to the controller of the first security device, such that the controller of the first security device sends the first blocking solution to the first security device.
  • the analyzer first obtains the characteristics of historical suspected attack behaviors of the same category as the target suspected attack behavior based on the category of the target suspected attack behavior, then generates a first blocking plan based on the characteristics, and finally sends the first blocking plan to the first security device, so that the first security device executes the first blocking plan, thereby blocking suspected attack behaviors of the same category as the target suspected attack behavior.
  • Therefore, even if the attacker subsequently changes some information when launching attack behaviors of the same category (for example, the IP address that initiated the attack), this solution can still block these attacks based on the characteristics of historical suspected attacks of the same category, improving the blocking rate of similar suspected attacks.
  • Users can set corresponding blocking strategies for security devices, so that the analyzer can generate the first blocking scheme based on the user's blocking strategy and characteristics, making the first blocking scheme more in line with the user's needs. This will be introduced in detail below with reference to the embodiment of FIG. 4 .
  • this application provides another embodiment of a method for handling suspected attack behavior.
  • This embodiment includes:
  • Step 201 When detecting suspected attack behavior of the target, the first security device sends an alarm to the analyzer.
  • the analyzer receives an alarm from the first security device, where the alarm contains a category of the target's suspected attack behavior.
  • Step 202 The analyzer obtains the characteristics of historical suspected attack behaviors of the same category as the target suspected attack behavior based on the category of the target suspected attack behavior.
  • the characteristics include at least one of the probability value of the historical suspected attack behavior being determined as an attack behavior and a feature set.
  • the feature set includes the distribution characteristics of historical suspected attack behaviors in time or the distribution characteristics of the Internet protocol IP addresses that initiated historical suspected attack behaviors.
  • Steps 201 to 202 are similar to steps 101 to 102. For details, refer to the description of steps 101 to 102 in the embodiment shown in FIG. 3 . No further details will be given here.
  • Step 203 The analyzer receives the input first security blocking policy.
  • the user can input the first security blocking policy according to actual needs.
  • the first security blocking policy will differ depending on the needs. For example, in some special periods, it is necessary to block external attacks as much as possible and there is a greater tolerance for false blocking, so in order to meet user needs the first security blocking policy can be adjusted to block more network attacks; in normal periods, in order to avoid affecting normal business, the tolerance for false blocking is low, and the first security blocking policy can be adjusted to block fewer network attacks.
  • the content of the first security blocking strategy comes in many forms.
  • the first blocking plan is directly generated based on the characteristics, and the content of the first security blocking policy corresponds to the characteristics. For example, if the feature includes the ratio of the number of days with suspected attack behavior in the preset time period to the total number of days, the first security blocking policy may include a threshold for this ratio; the threshold is compared with the actual value of the feature to determine whether the feature meets the first security blocking policy.
  • the first blocking plan is generated according to the score of the feature, and the content of the first security blocking policy corresponds to the score.
  • the first security blocking policy may include a score threshold, as shown in the following table.
  • Step 204 The analyzer obtains the score of the feature according to the feature and the feature's target scoring rule.
  • the target scoring rules can include scoring rules for each feature.
  • the following uses global features as an example for explanation.
  • the temporal distribution characteristics of historical suspected attack behaviors include the ratio of the number of days in which historical suspected attack behaviors were reported to the total number of days in the most recent period.
  • the target scoring rule may include the scoring rules for the temporal distribution characteristics of historical suspected attack behaviors. The details are as follows: when the ratio is greater than 80%, the corresponding score is 100 points; when the ratio is greater than 60% and less than or equal to 80%, the corresponding score is 90 points; when the ratio is greater than 30% and less than or equal to 60%, the corresponding score is 80 points; when the ratio is greater than 15% and less than or equal to 30%, the corresponding score is 60 points; when the ratio is less than or equal to 15%, the corresponding score is 0 points.
  • the temporal distribution characteristics of historical suspected attack behaviors indicate the persistence of suspected attack behaviors, so the above target scoring rules can be considered as scoring features based on the persistence of suspected attack behaviors.
  • the larger the ratio, the more persistent the historical suspected attack behavior is.
  • the above ratios and scores are examples, and are actually set according to requirements. For example, the above ratios and scores are determined according to the user's requirements for blocking attacks.
  • the target scoring rule may include scoring rules for the distribution characteristics of the IP addresses that initiated historical suspected attack behaviors, for example: when the average quantity is greater than 30, the corresponding score is 100 points; when the average quantity is greater than 20 and less than or equal to 30, the corresponding score is 90 points; when the average quantity is greater than 10 and less than or equal to 20, the corresponding score is 80 points; when the average quantity is greater than 4 and less than or equal to 10, the corresponding score is 60 points; when the average quantity is less than or equal to 4, the corresponding score is 0 points.
  • the distribution characteristics of IP addresses that initiated historical suspected attack behaviors indicate the widespread distribution of attackers. Therefore, the above target scoring rules can be considered as scoring characteristics based on the widespread distribution of attackers.
  • the wider the distribution of attackers, the greater the necessity of blocking suspected attack behaviors of the same category, and the higher the corresponding score; conversely, the smaller the necessity of blocking suspected attack behaviors of the same category, the lower the corresponding score.
  • the above-mentioned quantities and scores are examples and can be set according to actual requirements. For example, the above-mentioned quantities are determined according to the distribution of the average quantity, and the above-mentioned scores are determined according to the user's requirements for blocking attacks.
  • the target scoring rules may include scoring rules for the probability value of a historical suspected attack behavior being determined as an attack behavior, for example: when the probability value is 100%, the corresponding score is 100 points; when the probability value is greater than or equal to 95% and less than 100%, the corresponding score is 60 points; and when the probability value is less than 95%, the corresponding score is 0 points. It can be understood that the above probability values and scores are examples and are actually set according to requirements; for example, they are determined according to the user's requirements for blocking attacks.
  • the target scoring rule can include scoring rules for the distribution characteristics of historical suspected attack behaviors on multiple security devices.
  • the distribution characteristics of historical suspected attack behaviors on multiple security devices indicate how widespread historical suspected attack behaviors are across security devices. Therefore, the above target scoring rules can be considered as scoring the feature based on how widespread historical suspected attack behaviors are across security devices.
  • the above values and scores are examples, and are actually set according to requirements; for example, they are determined according to the user's requirements for blocking attacks.
  • When the number of features is one, the score of the feature can be obtained according to the feature and the target scoring rule; when the number of features is multiple, the score of each feature can be obtained according to the features and the target scoring rules, and the analyzer can then obtain the total score based on the score of each feature and the weight of each feature (that is, a weighted sum of the scores of all the features).
  • the weight of each feature can be set based on experience. For example, the weight of the score of the distribution characteristics of historical suspected attack behaviors on multiple security devices is 0.2, the weight of the score of the probability value of historical suspected attack behaviors being determined as attack behaviors is 0.4, the weight of the score of the temporal distribution characteristics of historical suspected attack behaviors is 0.2, and the weight of the score of the distribution characteristics of the IP addresses that initiated historical suspected attack behaviors is 0.2.
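  • The tiered scoring rules and the weighted total described above can be sketched as follows; the tier boundaries, scores, and weights simply restate the examples in the text, the device-spread tier is a placeholder assumption (the text gives no tiers for it), and all function names are hypothetical.

```python
# Sketch of the example target scoring rules and the weighted total score (names are assumptions).
def score_day_ratio(ratio):              # persistence of the behavior over time
    tiers = [(0.80, 100), (0.60, 90), (0.30, 80), (0.15, 60)]
    return next((score for bound, score in tiers if ratio > bound), 0)

def score_avg_src_ips(avg):              # how widespread the attack sources are
    tiers = [(30, 100), (20, 90), (10, 80), (4, 60)]
    return next((score for bound, score in tiers if avg > bound), 0)

def score_probability(p):                # probability of being determined an attack
    if p >= 1.0:
        return 100
    return 60 if p >= 0.95 else 0

def total_score(features, weights):
    scores = {
        "device_spread": 100 if features["device_ratio"] > 0.5 else 0,   # placeholder tier, assumed
        "probability": score_probability(features["probability"]),
        "day_ratio": score_day_ratio(features["active_day_ratio"]),
        "src_ip_spread": score_avg_src_ips(features["avg_src_ip_per_device"]),
    }
    return sum(scores[k] * weights[k] for k in scores)

weights = {"device_spread": 0.2, "probability": 0.4, "day_ratio": 0.2, "src_ip_spread": 0.2}
features = {"device_ratio": 0.6, "probability": 1.0, "active_day_ratio": 0.7, "avg_src_ip_per_device": 25}
print(total_score(features, weights))  # compared against the score threshold in the blocking policy
```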
  • Step 205 If the score of the feature is greater than the score threshold, the analyzer determines that the feature satisfies the first security blocking policy.
  • Step 206 If the feature satisfies the first security blocking policy, the analyzer generates a first blocking plan.
  • Compared with step 103, the difference is that the first security blocking policy input by the user is used in step 206, that is, the user's needs are considered in the process of generating the first blocking plan; otherwise, the process of generating the first blocking plan in step 206 is similar to the process of generating the first blocking plan in step 103.
  • the content of the first blocking plan may include a blocking direction; correspondingly, the first security blocking policy may also include a blocking direction requirement.
  • the blocking direction in the generated first blocking plan may be consistent with the blocking direction requirement in the first security blocking policy.
  • For example, if the first security blocking policy requires blocking suspected attacks from the trust zone to the untrust zone, then the blocking direction in the first blocking plan is to block suspected attacks from the trust zone to the untrust zone.
  • the content of the first blocking plan may also include a statute of limitations, and the statute of limitations in the first blocking plan may be consistent with the time limit in the first security blocking policy.
  • Step 207 The analyzer sends the first blocking plan to the first security device, so that the first security device executes the first blocking plan.
  • Step 207 is similar to step 104, and may be understood specifically with reference to the relevant description of step 104 in the embodiment shown in FIG. 3 .
  • Step 208 The analyzer receives the input second security blocking policy of the second security device.
  • the second security device may be any security device different from the first security device, and the second security blocking policy may be the same as the first security blocking policy, or may be different from the first security blocking policy.
  • Step 209 If the characteristics satisfy the second security blocking policy, a second blocking plan is generated.
  • the second blocking plan is used to block suspected attack behaviors of the same category as the target suspected attack behavior.
  • Step 210 Send the second blocking plan to the second security device, so that the second security device executes the second blocking plan.
  • Comparing the second blocking plan with the first blocking plan: the categories of the suspected attack behaviors to be blocked and the handling actions are the same, while the blocking directions and statutes of limitations can be the same or different.
  • Based on the relevant description of step 206, it can be seen that the blocking direction and statute of limitations in the first blocking plan are consistent with the blocking direction requirement and time limit in the first security blocking policy; similarly, the blocking direction and statute of limitations in the second blocking plan are consistent with the blocking direction requirement and time limit in the second security blocking policy.
  • Therefore, when the blocking direction requirement in the first security blocking policy is consistent with that in the second security blocking policy, the blocking direction in the first blocking plan is the same as that in the second blocking plan; when the time limit in the first security blocking policy is consistent with that in the second security blocking policy, the statute of limitations in the first blocking plan is consistent with that in the second blocking plan.
  • steps 208 to 210 are similar to step 203, step 206, and step 207, and can be understood specifically with reference to the embodiment shown in FIG. 4 .
  • the features obtained in step 202 may be global features or single point features.
  • the analyzer may send the first blocking plan to the first security device through steps 203 to 207, so that the first security device blocks the suspected attack behavior.
  • the analyzer can also send a second blocking plan to the second security device through steps 208 to 210, so that the second security device blocks the suspected attack behavior.
  • the analyzer can also generate blocking solutions for security devices on different networks based on the single site characteristics obtained in step 202.
  • the first security device and the second security device are on similar or even identical networks, so the single-site features corresponding to the first security device can also be applied to the second security device. Therefore, the analyzer can also generate a second blocking plan for the second security device based on the single-site features corresponding to the first security device obtained in step 202.
  • the analyzer can generate a blocking solution suitable for the first security device based on the single-site features of the first security device, and generate a blocking solution suitable for the second security device based on the single-site features of the second security device.
  • the analyzer selects the characteristics of historical suspected attack behaviors of the same category as the target suspected attack behavior for analysis to determine the blocking plan.
  • the analyzer can obtain the opinions of security service experts on handling suspected attack behaviors to determine the above characteristics.
  • the feature can be a global feature, a single-site feature, or both a global feature and a single-site feature.
  • the global features and single-site features are scored respectively to obtain the credibility score of the global features and the credibility score of the single-site features.
  • the analyzer first determines whether the feature satisfies the security blocking policy through the credibility score of the global features. If the credibility score of the global features is greater than or equal to the score threshold in the security blocking policy, it is determined that the feature satisfies the security blocking policy, and the blocking plan is then issued.
  • If the credibility score of the global features does not reach the score threshold, the analyzer determines whether the feature satisfies the first security blocking policy based on the credibility score of the single-site features.
  • If the credibility score of the single-site features is greater than or equal to the score threshold in the first security blocking policy, it is determined that the feature satisfies the first security blocking policy, and the blocking plan is then issued.
  • global features include a larger number of historical suspected attack behavior features, so the credibility score of global features can better reflect the ability to block a certain type of attack than the credibility score of single-site features.
  • this embodiment first uses the credibility score of global features to determine whether a certain security device issues a blocking plan, which can ensure the accuracy of the judgment.
  • the credibility score of the single-site features cannot reflect the necessity for other security devices to block a certain category of historical suspected attack behavior, but it can reflect the necessity for this security device to block that category; therefore, when the credibility score of the global features is not sufficient for the analyzer to issue a blocking plan, determining whether to issue a blocking plan for this security device based on the credibility score of the single-site features can also ensure the accuracy of the judgment.
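  • A minimal sketch of this two-stage decision (global credibility score first, then the device's own single-site score) follows; it assumes both scores are computed with weighted rules of the kind shown earlier, and all names and thresholds are illustrative.

```python
# Sketch: decide whether to issue a blocking plan for one security device (names/thresholds assumed).
def should_issue_plan(global_score, single_site_score, score_threshold):
    # First judge with the global-feature credibility score, which covers more history.
    if global_score >= score_threshold:
        return True
    # Otherwise fall back to this device's own single-site credibility score.
    return single_site_score >= score_threshold

print(should_issue_plan(global_score=72, single_site_score=88, score_threshold=80))  # True
```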
  • this application also provides an embodiment of a device 300 for processing suspected attack behavior.
  • This embodiment includes a transceiver unit 301, an acquisition unit 302, and a generation unit 303.
  • the transceiver unit 301 is configured to receive an alarm from the first security device, where the alarm contains the category of the target's suspected attack behavior.
  • the acquisition unit 302 is configured to obtain the characteristics of historical suspected attack behaviors of the same category as the target suspected attack behavior according to the category of the target suspected attack behavior.
  • the characteristics include at least one of the probability value of the historical suspected attack behaviors being determined as attack behaviors and a feature set.
  • the feature set includes the distribution characteristics of historical suspected attack behaviors in time, or the distribution characteristics of Internet Protocol IP addresses that initiated historical suspected attack behaviors.
  • the generation unit 303 is configured to generate a first blocking plan based on the characteristics.
  • the first blocking plan is used to block suspected attack behaviors of the same category as the target suspected attack behavior.
  • the transceiver unit 301 is also configured to send the first blocking plan to the first security device, so that the first security device executes the first blocking plan.
  • historical suspected attack behaviors are detected by the first security device.
  • the obtaining unit 302 is also configured to receive the input first security blocking strategy; the generating unit 303 is configured to generate a first blocking plan when the characteristics satisfy the first security blocking strategy.
  • the first security blocking policy includes a score threshold.
  • the acquisition unit 302 is configured to obtain the score of the feature according to the feature and the target scoring rule of the feature, and when the score of the feature is greater than the score threshold, determine that the characteristics satisfy the first security blocking policy.
  • the acquisition unit 302 is also configured to receive an input processing opinion for the target's suspected attack behavior.
  • the processing opinion is used to determine the probability value of the historical suspected attack behavior being determined as an attack behavior.
  • historical suspected attack behaviors are detected by multiple security devices, and the feature set also includes the distribution characteristics of historical suspected attack behaviors on multiple security devices.
  • the acquisition unit 302 is configured to receive an input second security blocking policy of a second security device; the generation unit 303 is also configured to generate a second blocking plan when the characteristics satisfy the second security blocking policy.
  • the second blocking plan is used to block suspected attack behaviors of the same category as the target suspected attack behavior.
  • the transceiver unit 301 is also used to send the second blocking plan to the second security device, so that the second security device executes the second blocking plan.
  • the transceiver unit 301 is configured to send the first blocking plan to the controller of the first security device, so that the controller of the first security device sends the first blocking plan to the first security device.
  • Figure 7 is a schematic structural diagram of a computer device provided by an embodiment of the present application. As shown in Figure 7, the computer device 900 is equipped with the above-mentioned device for processing suspected attack behavior. The computer device 900 is implemented by a general bus architecture.
  • Computer device 900 includes at least one processor 901, a communication bus 902, a memory 903, and at least one communication interface 904.
  • the processor 901 may be a general-purpose central processing unit (CPU), a network processor (NP), a microprocessor, or one or more integrated circuits used to implement the solution of the present application, for example, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the above-mentioned PLD is a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a general array logic (GAL) or any combination thereof.
  • Communication bus 902 is used to transfer information between the above-mentioned components.
  • the communication bus 902 is divided into an address bus, a data bus, a control bus, etc.
  • only one thick line is used in the figure, but it does not mean that there is only one bus or one type of bus.
  • the memory 903 is a read-only memory (ROM) or other type of static storage device that can store static information and instructions.
  • memory 903 is random access memory (RAM) or other types of dynamic storage devices that can store information and instructions.
  • the memory 903 is an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compressed discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store the desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory 903 exists independently and is connected to the processor 901 through the communication bus 902.
  • the memory 903 and the processor 901 are integrated together.
  • Communication interface 904 uses any transceiver-like device for communicating with other devices or communication networks.
  • Communication interface 904 includes a wired communication interface.
  • the communication interface 904 also includes a wireless communication interface.
  • the wired communication interface is, for example, an Ethernet interface.
  • the Ethernet interface is an optical interface, an electrical interface, or a combination thereof.
  • the wireless communication interface is a wireless local area networks (WLAN) interface, a cellular network communication interface or a combination thereof, etc.
  • the communication interface 904 can be used to implement the functions of the acquisition unit in Figure 6.
  • the computer device 900 includes multiple processors, such as the processor 901 and the processor 905 shown in FIG. 7 .
  • each of these processors is a single-core processor (single-CPU) or a multi-core processor (multi-CPU).
  • processor 901 and processor 905 each include 2 cores: CPU0 and CPU1.
  • a processor here refers to one or more devices, circuits, and/or processing cores for processing data (such as computer program instructions).
  • the memory 903 is used to store the program code 99 for executing the solution of the present application, and the processor 901 executes the program code 99 stored in the memory 903. That is to say, the computer device 900 implements the above method embodiments through the processor 901 and the program code 99 in the memory 903 .
  • An embodiment of the present application also provides a chip including one or more processors. Some or all of the processors are configured to read and execute the computer program stored in the memory to perform the methods of the foregoing embodiments.
  • optionally, the chip includes a memory, and the memory is connected to the processor through circuits or wires. Further optionally, the chip also includes a communication interface, and the processor is connected to the communication interface.
  • the communication interface is used to receive data and/or information that needs to be processed.
  • the processor obtains the data and/or information from the communication interface, processes the data and/or information, and outputs the processing results through the communication interface.
  • the communication interface may be an input-output interface.
  • some of the one or more processors may implement part of the steps in the above method through dedicated hardware; for example, processing involving a neural network model may be performed by a dedicated neural network processor or a graphics processor.
  • the method provided by the embodiments of this application can be implemented by one chip, or can be implemented by multiple chips collaboratively.
  • Embodiments of the present application also provide a computer storage medium, which is used to store the computer software instructions used by the above-mentioned computer device, including the programs designed for the computer device.
  • the computer device may function as the device for handling suspected attack behavior in the corresponding embodiment of FIG. 6 .
  • Embodiments of the present application also provide a computer program product.
  • the computer program product includes computer software instructions.
  • the computer software instructions can be loaded by a processor to implement the processes in the methods shown in the foregoing embodiments.


Abstract

This application discloses a method for processing suspected attack behavior and a related apparatus, which are used to increase the blocking rate of network attack behavior. An analyzer receives an alarm from a first security device, where the alarm contains the category of a target suspected attack behavior. The analyzer obtains, according to the category of the target suspected attack behavior, characteristics of historical suspected attack behaviors of the same category as the target suspected attack behavior, generates a first blocking plan according to the characteristics, and sends the first blocking plan to the first security device, so that the first security device executes the first blocking plan. The first blocking plan is used to block suspected attack behaviors of the same category as the target suspected attack behavior. The characteristics include the probability value that the historical suspected attack behaviors are determined to be attack behaviors and at least one feature in a feature set, where the feature set includes the distribution characteristics of the historical suspected attack behaviors over time, or the distribution characteristics of the Internet Protocol (IP) addresses that initiated the historical suspected attack behaviors.

Description

一种处理疑似攻击行为的方法及相关装置
本申请要求于2022年03月25日提交中国专利局、申请号为CN202210302447.9、申请名称为“一种处理疑似攻击行为的方法及相关装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及网络安全技术领域,尤其涉及一种处理疑似攻击行为的方法及相关装置。
背景技术
近年来,网络安全问题日益突出,各类不法组织和不法分子怀着种种目的,使用各种手段进行网络攻击,致使网络安全事件层出不穷。企业单位为了保证网络的安全性,一般都会选择在网络出口部署防火墙来阻止外部攻击。
防火墙可以基于黑名单和疑似攻击报文中的信息识别外部的网路攻击行为。例如,当疑似攻击报文中的互联网协议(Internet Protocol,IP)地址在黑名单中时,防火墙阻断该疑似攻击。
然而在一次网络攻击行为失败后,攻击者可以改变发起网络攻击行为的IP地址,从而通过黑名单外的IP地址重新发起网络攻击行为,导致网络攻击行为的阻断率较低。
发明内容
本申请提供了一种处理疑似攻击行为的方法及相关装置,该方法能够提高网络攻击行为的阻断率。
第一方面,本申请提供了一种处理疑似攻击行为的方法,可以应用于分析器。分析器接收来自第一安全设备的告警。第一安全设备是指位于报文传输路径上具有安全防护的设备,具体可以是防火墙或安全网关。告警包含目标疑似攻击行为的类别,该类别可以通过名称或编号进行标识。分析器根据目标疑似攻击行为的类别获取与目标疑似攻击行为相同类别的历史疑似攻击行为的特征。特征包括历史疑似攻击行为被确定为攻击行为的概率值和特征集合中的至少一种,特征集合包括历史疑似攻击行为在时间上的分布特征或发起历史疑似攻击行为的IP地址的分布特征。分析器根据特征生成第一阻断方案。第一阻断方案用于阻断与目标疑似攻击行为相同类别的疑似攻击行为。分析器向第一安全设备发送第一阻断方案,以使得第一安全设备执行第一阻断方案。
本方案中,分析器通过分析与目标疑似攻击行为同类别的历史疑似攻击行为,获取了这些历史疑似攻击行为的特征,因此,即使攻击者后续在发起同类别攻击行为时改变了某些信息(例如,发起攻击行为的IP地址),本方案也依然能够基于同类别的历史疑似攻击行为的特征阻断这些攻击,提升了同类疑似攻击行为的阻断率。
在一种可能的实施方式中,历史疑似攻击行为是由第一安全设备检测到的。
由于历史疑似攻击行为是由第一安全设备检测到的,所以根据特征生成的第一阻断方案对于第一安全设备来说更具针对性,能够提高网络攻击行为的阻断率。
在一种可能的实施方式中,分析器接收输入的第一安全阻断策略,并在特征满足第一 安全阻断策略的情况下,生成第一阻断方案。该第一安全阻断策略可以理解为用户输入的安全阻断策略,第一安全阻断策略可以根据实际需求进行调整。例如,在一些特殊时期,需要尽可能阻断外部攻击,对误阻断有较大的容忍度,所以适当调整第一安全阻断策略,以阻断更多的疑似攻击;在其他正常时期,为了避免影响正常业务,所以对误阻断的容忍度较低,此时可以适当调整第一安全阻断策略,以阻断更少的疑似攻击。
分析器接收用户输入的第一安全阻断策略,并在特征满足第一安全阻断策略的情况下,生成第一阻断方案,使得该阻断方案更符合用户的需求。
在一种可能的实施方式中,第一安全阻断策略中包含分数阈值,分析器根据特征和特征的目标评分规则获取特征的分数,在特征的分数大于分数阈值的情况下,确定特征满足第一安全阻断策略。当特征的数量为多个时,获取到的特征的分数可以是对多个特征各自的分数进行加权平均得到的总分数。
在一种可能的实施方式中,分析器接收输入的处理意见,处理意见用于确定历史疑似攻击行为被确定为攻击行为的概率值。其中,该处理意见可以为阻断,也可以为仅告警。例如,历史疑似攻击行为共100次,其中80次历史疑似攻击行为的处理意见为阻断,那么历史疑似攻击行为被确定为攻击行为的概率值则为80%。
在一种可能的实施方式中,历史疑似攻击行为是由多个安全设备检测到的,特征集合还包括历史疑似攻击行为在多个安全设备上的分布特征。
历史疑似攻击行为是由多个安全设备检测到的,使得第一安全阻断方案的生成可以参考更多安全设备检测到的历史疑似攻击行为的特征。特征集合还包括历史疑似攻击行为在多个安全设备上的分布特征,使得第一安全阻断方案的生成可以参考更多类型的特征。因此,本申请能够使得第一安全阻断方案更加准确、全面。
在一种可能的实施方式中,分析器接收输入的第二安全设备的第二安全阻断策略,在特征满足第二安全阻断策略的情况下,生成第二阻断方案,向第二安全设备发送第二阻断方案,以使得第二安全设备执行第二阻断方案。第二阻断方案用于阻断与目标疑似攻击行为相同类别的疑似攻击行为。由此可见,本申请利用特征可以得到适配不同阻断策略的阻断方案,使得不同的安全设备能够根据适配自身阻断策略的阻断方案,完成对攻击行为的阻断。
在一种可能的实施方式中,向第一安全设备发送第一阻断方案包括:向第一安全设备的控制器发送第一阻断方案,以使得第一安全设备的控制器向第一安全设备发送第一阻断方案。
第二方面,本申请提供了一种处理疑似攻击行为的装置,包括收发单元、获取单元和生成单元。
收发单元,用于接收来自第一安全设备的告警,告警包含目标疑似攻击行为的类别。
获取单元,用于根据目标疑似攻击行为的类别获取与目标疑似攻击行为相同类别的历史疑似攻击行为的特征。特征包括历史疑似攻击行为被确定为攻击行为的概率值以及特征集合中的至少一种,特征集合包括历史疑似攻击行为在时间上的分布特征,或发起历史疑似攻击行为的IP地址的分布特征。
生成单元,用于根据特征生成第一阻断方案,第一阻断方案用于阻断与目标疑似攻击行为相同类别的疑似攻击行为。
收发单元,还用于向第一安全设备发送第一阻断方案,以使得第一安全设备执行第一阻断方案。
在一种可能的实施方式中,历史疑似攻击行为是由第一安全设备检测到的。
在一种可能的实施方式中,获取单元,还用于接收输入的第一安全阻断策略;生成单元,用于在特征满足第一安全阻断策略的情况下,生成第一阻断方案。
在一种可能的实施方式中,第一安全阻断策略中包含分数阈值,获取单元用于根据特征和特征的目标评分规则获取特征的分数,在特征的分数大于分数阈值的情况下,确定特征满足第一安全阻断策略。
在一种可能的实施方式中,获取单元还用于接收输入的对于目标疑似攻击行为的处理意见,处理意见用于确定历史疑似攻击行为被确定为攻击行为的概率值。
在一种可能的实施方式中,历史疑似攻击行为是由多个安全设备检测到的,特征集合还包括历史疑似攻击行为在多个安全设备上的分布特征。
在一种可能的实施方式中,获取单元还用于接收输入的第二安全设备的第二安全阻断策略;生成单元,还用于在特征满足第二安全阻断策略的情况下,生成第二阻断方案,第二阻断方案用于阻断与目标疑似攻击行为相同类别的疑似攻击行为;收发单元,还用于向第二安全设备发送第二阻断方案,以使得第二安全设备执行第二阻断方案。
在一种可能的实施方式中,收发单元用于向第一安全设备的控制器发送第一阻断方案,以使得第一安全设备的控制器向第一安全设备发送第一阻断方案。
其中,以上各单元的具体实现、相关说明以及技术效果请参考第一方面的相关描述。
第三方面,本申请提供了一种计算机设备,计算机设备包括:存储器和处理器。处理器,用于执行存储器中存储的计算机程序或指令,以使计算机设备执行如第一方面中任一项的方法。
第四方面,本申请提供了一种计算机可读存储介质,计算机可读存储介质具有程序指令,当程序指令被直接或者间接执行时,使得第一方面中任一的方法被实现。
第五方面，本申请提供了一种芯片系统，芯片系统包括至少一个处理器，处理器用于执行存储器中存储的计算机程序或指令，当计算机程序或指令在至少一个处理器中执行时，使得第一方面中任一项的方法被实现。
第六方面,本申请提供了一种计算机程序产品,包括指令,当指令在计算机上运行时,使得计算机执行第一方面中任一项的方法。
附图说明
图1为本申请实施例提供的网络架构的示意图;
图2为本申请实施例处理疑似攻击行为的方法的第一实施例示意图;
图3为本申请实施例处理疑似攻击行为的方法的第二实施例示意图;
图4为本申请实施例处理疑似攻击行为的方法的第三实施例示意图;
图5为本申请实施例处理疑似攻击行为的方法的第四实施例示意图;
图6为本申请实施例处理疑似攻击行为的装置的一个实施例示意图;
图7为本申请实施例提供的计算机设备的一种结构示意图。
具体实施方式
本申请实施例提供了一种处理疑似攻击行为的方法及相关装置,该方法能够提高网络攻击行为的阻断率。
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”以及相应术语标号等是用于区别类似的对象，而不必用于描述特定的顺序或先后次序。应该理解这样使用的术语在适当情况下可以互换，这仅仅是描述本申请的实施例中对相同属性的对象在描述时所采用的区分方式。此外，术语“包括”和“具有”以及他们的任何变形，意图在于覆盖不排他的包含，以便包含一系列单元的过程、方法、系统、产品或设备不必限于那些单元，而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它单元。
在本申请的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;本申请中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,在本申请的描述中,“至少一项”是指一项或者多项,“多项”是指两项或两项以上。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。
本申请实施例可以应用于图1所示的网络架构中。该网络架构包含数据中心网络、园区网络(Campus network)和分支网络(Branch network),其中,分支网络是园区网络的一种,对应园区的分支机构。这三个网络的边界处均设置有用于维护网络安全的安全设备,这三个网络的安全设备与分析器通信连接。其中,安全设备例如为防火墙、安全网关等。分析器为具有计算能力的设备,例如,个人计算机、服务器、服务器集群、虚拟机、云服务等。云服务例如为公有云、私有云或混合云等。
安全设备用于疑似网络攻击的检测和本地实时防御,并在检测到疑似网络攻击后,向分析器发送告警,其中,检测类型包括入侵防御检测、僵木蠕检测、恶意文件检测等。需要说明的是,部分疑似网络攻击在安全设备处会直接被阻断。
分析器用于分析安全设备发送的告警,并根据分析结果通知安全设备阻断恶意攻击者的疑似网络攻击,同时提供安全应急服务。
若通过黑名单来阻断网络攻击,则在一次网络攻击行为失败后,攻击者可以改变发起网络攻击行为的IP,从而通过黑名单外的IP地址重新发起同类的网络攻击行为,导致网络攻击行为的阻断率较低。
为此，本申请实施例提供了一种处理疑似攻击行为的方法，该方法可以应用于图1中的分析器。在该方法中，分析器通过分析与目标疑似攻击行为同类别的历史疑似攻击行为，获取了这些历史疑似攻击行为的特征，即使攻击者后续在发起同类别攻击行为时改变了某些信息（例如，发起攻击行为的IP地址），本方案也依然能够基于同类别的历史疑似攻击行为的特征阻断这些攻击，提升了同类疑似攻击行为的阻断率。
可选地,分析器还可以基于用户的安全阻断策略生成阻断方案,以使得阻断方案更符合用户的需求。
可选地，分析器还可以基于安全服务专家对疑似攻击行为的处理意见（或处置措施）获取历史疑似攻击行为的特征，以获得更准确的历史疑似攻击行为的特征，进一步提升阻断方案的准确性。
具体地,如图2所示,在检测到目标疑似攻击行为后,安全设备发送目标疑似攻击行为的告警。对于疑似攻击行为的告警,安全服务专家可以推荐处理处置措施。分析器根据该处置措施以及用户的安全阻断策略进行阻断方案的分析与制定。在得到阻断方案后,分析器可以通过安全设备控制器将阻断方案发送到安全设备。安全设备在收到阻断方案后,可以依照阻断方案对目标疑似攻击行为同类别的疑似攻击行为进行阻断。
下面对本申请实施例提供的方法进行具体介绍。
如图3所示,本申请提供了一种处理疑似攻击行为的方法的一个实施例,该实施例包括步骤101至步骤104。
步骤101,第一安全设备在检测到目标疑似攻击行为的情况下,向分析器发送告警。
第一安全设备是指位于报文传输路径上具有安全防护的设备,具体可以是防火墙或安全网关。
该告警包含目标疑似攻击行为的信息,例如目标疑似攻击行为的类别。告警还可以包括其他信息,例如还可以包含以下信息中的至少一种:攻击者的IP地址和端口号、被攻击者的IP地址和端口号、攻击者所在区域(例如,信任(trust)区域或非信任(untrust)区域)、被攻击者所在区域(trust区域或untrust区域)、检测到目标疑似攻击行为的第一安全设备的标识、目标疑似攻击行为发生的时间、攻击报文的协议类型以及第一安全设备对该攻击行为的处置动作(例如阻断或仅告警)。
其中,攻击者是指发起目标疑似攻击行为的设备,被攻击者是指目标疑似攻击行为所要攻击的设备,信任区域可以是被攻击者所在的局域网内的区域,非信任区域可以是被攻击者所在的局域网外的区域。
每个疑似攻击行为具有一个类型标识(identifier,ID)。具有相同的类型标识的攻击行为是同一类别的疑似攻击行为,类型标识例如为名称、编码。相应地,目标疑似攻击行为的类别可以采用类型标识表示。以名称为例,目标疑似攻击行为的类别可以为如远程桌面协议(Remote Desktop Protocol,RDP)本地账号暴力破解尝试、疑似结构化查询语言(Structured Query Language,SQL)注入攻击尝试。
报文协议类型可以例如为安全外壳协议(Secure Shell Protocol,SSH)、RDP、文件传输协议(File Transfer Protocol,FTP)等。
需要说明的是，第一安全设备可以先对目标疑似攻击行为进行处置，然后向分析器发送告警，使得分析器对目标疑似攻击行为进行分析，所以告警中可以包含第一安全设备的处置动作。
相应地,分析器接收来自第一安全设备的告警,告警包含目标疑似攻击行为的类别。
步骤102,分析器根据目标疑似攻击行为的类别获取与目标疑似攻击行为相同类别的历史疑似攻击行为的特征,特征包括历史疑似攻击行为被确定为攻击行为的概率值以及特征集合中的至少一种,特征集合包括历史疑似攻击行为在时间上的分布特征,或发起历史疑似攻击行为的IP地址的分布特征。
基于步骤101中类型标识的说明可知,与目标疑似攻击行为相同类别的历史疑似攻击行为,可以理解为与目标疑似攻击行为相同类型标识的历史疑似攻击行为。
历史疑似攻击行为被确定为攻击行为的概率值为历史疑似攻击行为被确定为攻击行为的概率,其可以是被确定为攻击行为的历史疑似攻击行为的数量与历史疑似攻击行为的数量的比值。在每确定为一个疑似攻击行为生成阻断方案时,分析器可以标记该疑似攻击行为为攻击行为,并据此更新历史疑似攻击行为被确定为攻击行为的概率值。
可选地,分析器还可以根据安全服务专家对历史疑似攻击行为的处理意见修正对历史疑似攻击行为的标记结果。其中,处理意见为阻断或仅告警。安全服务专家给出对历史疑似攻击行为的处理意见包括两种情况。第一种情况是,在接收到历史疑似攻击行为后,安全服务专家就给出了处理意见;第二种情况是,安全设备已经对历史疑似攻击行为执行了处理措施之后,安全服务专家对该处理措施进行了纠正,此时会记录安全服务专家纠正后的处理意见。例如,分析器先将疑似攻击行为标记为攻击行为,但若收到安全服务专家针对该疑似攻击行为的处理意见为“仅告警”,则分析器将该疑似攻击行为标记为非攻击行为。分析器基于修正后的标记更新历史疑似攻击行为被确定为攻击行为的概率。例如,历史疑似攻击行为共100次,安全服务专家对其中80次历史疑似攻击行为的处理意见为阻断,那么历史疑似攻击行为被确定为攻击行为的概率值则为80%。
基于上述说明可知,本申请实施例提供的方法还可以包括:分析器接收输入的处理意见,处理意见用于确定历史疑似攻击行为被确定为攻击行为的概率值。
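为便于理解上述概率值的统计方式，下面给出一段示意性的Python代码草图：其中的函数名、字段名（如 is_attack、expert_opinion）均为本文为举例而假设的，并非本申请的实际实现。

```python
def attack_probability(history):
    """计算历史疑似攻击行为被确定为攻击行为的概率值。

    history: 同类别历史疑似攻击行为的记录列表，每条记录包含
    分析器的初始标记 is_attack（布尔值），以及可选的专家处理意见
    expert_opinion（"阻断" 或 "仅告警"）。
    """
    if not history:
        return 0.0
    confirmed = 0
    for record in history:
        label = record.get("is_attack", False)
        opinion = record.get("expert_opinion")
        # 若存在安全服务专家的处理意见，则以专家意见修正分析器的标记结果
        if opinion == "阻断":
            label = True
        elif opinion == "仅告警":
            label = False
        confirmed += 1 if label else 0
    return confirmed / len(history)

# 例如：100 次历史疑似攻击行为中，80 次的处理意见为阻断，则概率值为 0.8
```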
历史疑似攻击行为在时间上的分布特征是从时间的维度上描述历史疑似攻击行为的特征,用于指示历史攻击行为的持续性或偶然性,具体可以包括多种信息。例如,历史疑似攻击行为在时间上的分布特征可以包括,历史上多个时间段出现的疑似攻击行为的数量;历史疑似攻击行为在时间上的分布特征也可以包括,历史上多个时间段出现的疑似攻击行为的数量与历史上出现的疑似攻击行为的总数量的比值。该历史上出现的疑似攻击行为对应的时间段(以下称为预设时间段)可以根据需求进行设定,例如为最近的一天或一个月等。该预设时间段包括上述多个时间段,例如,当预设时间段为最近的一个月时,该多个时间段可以为该一个月中的每天,当预设时间段为最近的一天中的每小时。例如,历史疑似攻击行为在时间上的分布特征还可以包括历史疑似攻击行为在每个时间段出现的频率,该频率可以为历史疑似攻击行为的数量与天数的比值。可以理解的是,若历史疑似攻击行为仅在预设时间段中的某个时间段出现或者在某个时间段出现的非常多,其他时间段仅零星存在,则说明历史疑似攻击行为具有偶然性,若历史疑似攻击行为的数量在多个时间段 上的分布比较均匀,则说明历史疑似攻击行为具有持续性。
发起历史疑似攻击行为的IP地址的分布特征是从攻击者分布的广泛性的角度描述历史疑似攻击行为的特征,用于指示攻击源的广泛性,具体可以包括多种信息。例如,发起历史疑似攻击行为的IP地址的分布特征可以包括最近一段时间内发起历史疑似攻击行为的IP(也可以称为源IP)的数量。再例如,发起历史疑似攻击行为的IP地址的分布特征也可以包括发起历史疑似攻击行为的IP所属的区域的分布特征,如平均每个区域内源IP的数量。发起历史疑似攻击行为的IP的数量越多,说明攻击源越具备广泛性。
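下面用一段示意性的Python代码说明如何从告警记录中统计上述两类分布特征。代码仅为帮助理解的草图，事件的字段结构（day、src_ip）与窗口长度等均为本文假设。

```python
from datetime import date, timedelta

def single_site_features(events, window_days=30, end_day=None):
    """统计单局点特征：时间上的分布特征与源 IP 地址的分布特征。

    events: 同类别历史疑似攻击行为列表，每项包含发生日期 day（date 对象）
    和发起攻击的源地址 src_ip（字符串）。
    """
    end_day = end_day or date.today()
    start_day = end_day - timedelta(days=window_days - 1)
    in_window = [e for e in events if start_day <= e["day"] <= end_day]
    active_days = {e["day"] for e in in_window}
    src_ips = {e["src_ip"] for e in in_window}
    return {
        # 预设时间段内出现历史疑似攻击行为的天数与总天数的比值，反映持续性
        "day_ratio": len(active_days) / window_days,
        # 预设时间段内发起历史疑似攻击行为的源 IP 数量，反映攻击源的广泛性
        "src_ip_count": len(src_ips),
    }
```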
需要说明的是,上述历史疑似攻击行为可以是由第一安全设备检测到的。一个安全设备通常负责一个网络(也称为局点),此时,特征集合中的特征也可以称为单局点特征。
上述历史疑似攻击行为也可以是由多个安全设备检测到的。即,负责多个网络的多个安全设备将各自检测到的疑似攻击行为发送给分析器,分析器基于多个安全设备发送的来自于多个网络的疑似攻击行为获取历史疑似攻击行为的特征。此时,特征集合中的特征也可以称为全局特征。
由于不同地区的历史疑似攻击行为的差异可能较大,所以为了保证根据全局特征生成的第一阻断方案也能很好地适用于第一安全设备,多个安全设备可以位于邻近区域。例如,多个安全设备都位于华北地区,或都位于华南地区,或位于一个相同的国家。
以图1所示的网络架构为例,数据中心网络的安全设备、园区网络的安全设备和分支网络的安全设备可以对应一个分析器,多个安全设备包括数据中心网络的安全设备、园区网络的安全设备和分支网络的安全设备。
全局特征可以与单局点特征相同,也可以与单局点特征不同。下面通过举例说明全局特征与单局点特征不同。
例如,对于单局点特征来说,发起历史疑似攻击行为的IP地址的分布特征可以为预设时间段内发起历史疑似攻击行为的IP的数量;而对于全局特征来说,发起历史疑似攻击行为的IP地址的分布特征可以为预设时间段内在每个安全设备上发起历史疑似攻击行为的IP的平均数量,即发起历史疑似攻击行为的IP的总数量与安全设备的数量的比值。
当上述历史疑似攻击行为可以是由多个安全设备检测到的时,特征集合除了可以包含前文提及的特征外,特征集合还可以包括历史疑似攻击行为在多个安全设备上的分布特征。
历史疑似攻击行为在多个安全设备上的分布特征是从历史疑似攻击行为在安全设备上分布的广泛性的角度描述历史疑似攻击行为的特征,用于指示检测到历史疑似攻击行为的安全设备的广泛性,具体可以包括多种信息。例如,可以包括发生历史疑似攻击行为的安全设备的数量,也可以包括发生历史疑似攻击行为的安全设备的数量占安全设备的总数量的比值。
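对于由多个安全设备检测到的情况，可按如下示意代码统计全局特征。同样只是便于理解的草图，输入的数据结构为本文假设，并非本申请限定的实现方式。

```python
def global_features(events_by_device, total_devices):
    """统计全局特征。

    events_by_device: {安全设备标识: 该设备上同类历史疑似攻击行为的事件列表}
    total_devices: 向分析器上报的安全设备总数量（假定至少为 1）
    """
    reporting_devices = [d for d, evs in events_by_device.items() if evs]
    all_src_ips = {e["src_ip"] for evs in events_by_device.values() for e in evs}
    return {
        # 发生该类历史疑似攻击行为的安全设备数量占安全设备总数量的比值
        "device_ratio": len(reporting_devices) / total_devices,
        # 平均每个安全设备上发起该类历史疑似攻击行为的源 IP 数量
        "avg_src_ip_per_device": len(all_src_ips) / total_devices,
    }
```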
步骤103,分析器根据特征生成第一阻断方案,第一阻断方案用于阻断与目标疑似攻击行为相同类别的疑似攻击行为。
作为一种可实现的方式,分析器可以直接根据特征生成第一阻断方案。例如,以特征为历史疑似攻击行为被确定为攻击行为的概率值和历史疑似攻击行为在时间上的分布特征为例,若历史疑似攻击行为被确定为攻击行为的概率值大于某一概率阈值,且预设时间段 内的历史疑似攻击行为的天数与总天数的比值位于某一范围内,则生成第一阻断方案。
再例如,以特征为历史疑似攻击行为被确定为攻击行为的概率值和发起历史疑似攻击行为的IP地址的分布特征为例,若历史疑似攻击行为被确定为攻击行为的概率值大于某一概率阈值,且最近一段时间内发起历史疑似攻击行为的IP的数量大于某一数量阈值,则生成第一阻断方案。
作为另一种可实现的方式,分析器也可以先对特征进行打分,然后根据得到的分值生成第一阻断方案,下文会结合图4对该方法进行具体介绍。
第一阻断方案的内容可以有多种,第一阻断方案的内容可以如下表所示。
表一
在表一中,阻断方向是可选的。当第一阻断方案不包括阻断方向时,安全设备可以阻断所有的与待阻断的疑似攻击行为的类别相同的疑似攻击行为。当第一阻断方案包括阻断方向时,该阻断方向可以是基于历史疑似攻击行为中的攻击者和被攻击者所处的安全区域确定。例如,若历史疑似攻击行为的攻击者通常位于非信任区域内,而被攻击者通常位于信任区域内,则阻断方向便可以是阻断从非信任区域到信任区域的疑似攻击行为;若历史疑似攻击行为的攻击者通常位于信任区域内,而被攻击者通常也位于信任区域内,则阻断方向便可以是阻断从信任区域到信任区域的疑似攻击行为。然而,若历史疑似攻击行为的攻击者所在的区域和被攻击者所在的区域是无规律的,则第一阻断方案可以不指定阻断方向。
在表一中,时效也是可选的。当第一阻断方案不包括时效时,安全设备可以按照默认时间执行该第一阻断方案,例如,默认时间为接收到第一阻断方案后的一周。当第一阻断方案包括时效时,该时效可以是基于历史疑似攻击行为出现的时间确定的。例如,若历史疑似攻击行为仅在最近一周才出现,则时效为一个月;若历史疑似攻击行为在过去几年里一直出现,则时效可以为一年。
基于表一所述,第一阻断方案的内容的一个示例可以为:执行阻断动作的安全设备的唯一标识指示第一安全设备,待阻断的疑似攻击行为的类别为RDP本地账号暴力破解尝试,阻断方向是阻断从被攻击者所在的局域网外(非信任区域)到被攻击者所在的局域网内(信任区域)的疑似攻击行为。除此之外,第一阻断方案的内容还可以包含待阻断的报文协议类型,例如,待阻断的报文协议类型为RDP的攻击报文。
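为便于理解第一阻断方案所包含的内容，下面给出一个示意性的数据结构草图（Python）。其中的字段名与示例取值（如设备标识 "fw-01"）均为行文需要而假设，并非本申请限定的格式。

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BlockingPlan:
    device_id: str                        # 执行阻断动作的安全设备的唯一标识
    attack_category: str                  # 待阻断的疑似攻击行为的类别
    action: str = "阻断"                  # 处置动作
    direction: Optional[str] = None       # 可选：阻断方向，如 "untrust->trust"
    validity_days: Optional[int] = None   # 可选：时效，缺省时由安全设备按默认时间执行
    protocol: Optional[str] = None        # 可选：待阻断的报文协议类型

# 与上文示例对应的第一阻断方案：
plan = BlockingPlan(device_id="fw-01",
                    attack_category="RDP本地账号暴力破解尝试",
                    direction="untrust->trust",
                    protocol="RDP")
```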
步骤104,分析器向第一安全设备发送第一阻断方案,以使得第一安全设备执行第一阻断方案。
作为一种可实现的方式,分析器可以直接向第一安全设备发送第一阻断方案。
作为另一种可实现的方式,分析器也可以间接向第一安全设备发送第一阻断方案。例如,分析器向第一安全设备的控制器发送第一阻断方案,以使得第一安全设备的控制器向第一安全设备发送第一阻断方案。
在本申请实施例中,分析器先根据目标疑似攻击行为的类别获取与目标疑似攻击行为相同类别的历史疑似攻击行为的特征,然后根据特征生成第一阻断方案,最后向第一安全设备发送第一阻断方案,以使得第一安全设备执行第一阻断方案,从而实现对与目标疑似攻击行为相同类别的疑似攻击行为的阻断。由于本申请实施例通过分析与目标疑似攻击行为同类别的历史疑似攻击行为,获取了这些历史疑似攻击行为的特征,因此,即使攻击者后续在发起同类别攻击行为时改变了某些信息(例如,发起攻击行为的IP地址),本方案也依然能够基于同类别的历史疑似攻击行为的特征阻断这些攻击,提升了同类疑似攻击行为的阻断率。
用户可以为安全设备设定相应的阻断策略,使得分析器在根据用户的阻断策略和特征生成第一阻断方案,使得第一阻断方案更符合用户的需求。下面结合图4的实施例对此进行具体介绍。
如图4所示,本申请提供了一种处理疑似攻击行为的方法的另一个实施例,该实施例包括:
步骤201,第一安全设备在检测到目标疑似攻击行为的情况下,向分析器发送告警。
相应地,分析器接收来自第一安全设备的告警,告警包含目标疑似攻击行为的类别。
步骤202,分析器根据目标疑似攻击行为的类别获取与目标疑似攻击行为相同类别的历史疑似攻击行为的特征,特征包括历史疑似攻击行为被确定为攻击行为的概率值和特征集合中的至少一种,特征集合包括历史疑似攻击行为在时间上的分布特征或发起历史疑似攻击行为的互联网协议IP地址的分布特征。
步骤201至步骤202与步骤101至步骤102类似,具体参照图3所示的实施例中步骤101至步骤102的说明。此处不再赘述。
步骤203,分析器接收输入的第一安全阻断策略。
需要说明的是,用户可以根据实际需求输入第一安全阻断策略,需求不同,第一安全阻断策略不同。例如,在一些特殊时期,需要尽可能阻断外部攻击,对误阻断有较大的容忍度,所以为了满足用户需求,可以适当调整第一安全阻断策略,以阻断更多的网络攻击;在其他正常时期,为了避免影响正常业务,所以对误阻断的容忍度较低,此时为了满足用 户需求,可以适当调整第一安全阻断策略,以阻断较少的网络攻击。
第一安全阻断策略的内容有多种形式。
作为一种可实现的方式,第一阻断方案是根据特征直接生成的,第一安全阻断策略的内容与特征对应。例如,特征包含预设时间段内历史疑似攻击行为的天数与总天数的比值,则第一安全阻断策略可以包含比值的阈值,该比值的阈值用于与特征的实际值比较,以判断特征是否满足第一安全阻断策略。
作为另一种可实现的方式,第一阻断方案是根据特征的分值生成的,则第一安全阻断策略的内容与分值对应。例如,第一安全阻断策略可以包含分数阈值,具体可以如下表所示。
表二
步骤204,分析器根据特征和特征的目标评分规则获取特征的分数。
目标评分规则可以包含每个特征的打分规则,下面以全局特征为例进行说明。
例如,历史疑似攻击行为在时间上的分布特征包括最近一段时间内上报历史疑似攻击行为的天数与总天数的比值,则目标评分规则可以包括历史疑似攻击行为在时间上的分布特征的打分规则,具体如下:当比值大于80%时,对应的分数为100分,当比值大于60%小于等于80%时,对应的分数为90分,当比值大于30%且小于等于60%时,对应的分数为 80分,当比值大于15%且小于30%时,对应的分数为60分,当比值小于15%时,对应的分数为0分。历史疑似攻击行为在时间上的分布特征指示疑似攻击行为的持续性,所以上述目标评分规则可以认为是根据疑似攻击行为的持续性对特征进行打分。比值越大说明历史疑似攻击行为越具有持续性,相应地,疑似攻击行为确定为攻击行为的可能性越高,因此,需要阻断的必要越大,对应的分数越高。可以理解的是,上述比值和分值均为示例,实际根据需求进行设定,例如,根据用户对攻击的阻断要求确定上述比值和分值。
例如,发起历史疑似攻击行为的IP地址的分布特征包括预设时间段内在每个安全设备上发起历史疑似攻击行为的IP的平均数量,则目标评分规则可以包括发起历史疑似攻击行为的IP地址的分布特征的打分规则,例如:当平均数量大于30时,对应的分数为100分,当平均数量大于20且小于等于30时,对应的分数为90分,当平均数量大于10且小于等于20时,对应的分数为80分,当平均数量大于4且小于等于20时,对应的分数为60分,当平均数量小于等于4时,对应的分数为0分。发起历史疑似攻击行为的IP地址的分布特征指示攻击者分布的广泛性,所以上述目标评分规则可以认为是根据攻击者分布的广泛性对特征进行打分。预设时间段内在每个安全设备上发起历史疑似攻击行为的IP的平均数量越多,说明攻击者分布越广泛。攻击者分布越广泛,说明同类别的疑似攻击行为被阻断的必要性越大,则对应的分数越高;反之,说明同类别的疑似攻击行为被阻断的必要性越小,相应地,对应的分数越低。可以理解的是,上述数量和分数均为示例,实际可以根据需求进行设定,例如,根据平均数量的分布确定上述各个数量,根据用户对攻击的阻断要求确定上述分值。
例如,目标评分规则可以包括历史疑似攻击行为被确定为攻击行为的概率值的打分规则,例如:当概率值为100%时,对应的分数为100分,当概率值大于等于95%且小于100%时,对应的分数为60分,当概率值小于95%时,对应的分数为0分。可以理解的是,上述概率值和分值均为示例,实际根据需求进行设定,例如,根据用户对攻击的阻断要求确定上述概率值和分值。
例如,历史疑似攻击行为在多个安全设备上的分布特征包括发生历史疑似攻击行为的安全设备的数量占安全设备的总数量的比值,则目标评分规则可以包括历史疑似攻击行为在多个安全设备上的分布特征的打分规则,例如:当比值大于30%时,对应的分数为100分,当比值大于20%且小于等于30%时,对应的分数为90分,当比值大于10%且小于等于20%时,对应的分数为80分,当比值大于5%且小于等于10%时,对应的分数为60分,当比值小于等于5%时,对应的分数为0分。历史疑似攻击行为在多个安全设备上的分布特征指示历史疑似攻击行为在安全设备上分布的广泛性,所以上述目标评分规则可以认为是根据历史疑似攻击行为在安全设备上分布的广泛性对特征进行打分。发生历史疑似攻击行为的安全设备的数量占安全设备的总数量的比值越高,说明历史疑似攻击行为在安全设备上分布越广泛。在安全设备上分布越广泛,说明同类别的疑似攻击行为被阻断的必要性越大,则对应的分数越高;反之,说明同类别的疑似攻击行为被阻断的必要性越小,则对应的分数越低。可以理解的是,上述概率值和分值均为示例,实际根据需求进行设定,例如,根据用户对攻击的阻断要求确定上述比值和分值。
需要说明的是,当特征的数量为一个时,根据特征和目标评分规则即可得到特征的分数;当特征的数量为多个时,根据特征和目标评分规则可以得到每个特征的分数,然后分析器可以根据每个特征的分数和每个特征的权重得到特征的总分数(即对特征集合中所有特征的分数进行加权求和)。
其中,每个特征的权重可以根据经验进行设定,例如,历史疑似攻击行为在多个安全设备上的分布特征的分数的权重为0.2,历史疑似攻击行为被确定为攻击行为的概率值的分数的权重为0.4,历史疑似攻击行为在时间上的分布特征的分数的权重为0.2,发起历史疑似攻击行为的IP地址的分布特征的分数的权重为0.2。
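将上述打分规则和权重合起来，可以用如下示意性的Python代码计算特征的总分。其中阈值、分值和权重直接取自上文示例，函数的划分方式为本文假设，并非本申请的实际实现。

```python
def score_day_ratio(ratio):
    """根据持续性（出现历史疑似攻击行为的天数占比）打分，阈值与分值取自上文示例。"""
    if ratio > 0.8:
        return 100
    if ratio > 0.6:
        return 90
    if ratio > 0.3:
        return 80
    if ratio > 0.15:
        return 60
    return 0

def score_probability(p):
    """根据历史疑似攻击行为被确定为攻击行为的概率值打分。"""
    if p >= 1.0:
        return 100
    if p >= 0.95:
        return 60
    return 0

def total_score(scores, weights):
    """对各特征的分数按权重加权求和，得到特征的总分。"""
    return sum(scores[name] * weights[name] for name in scores)

weights = {"device_dist": 0.2, "probability": 0.4, "time_dist": 0.2, "ip_dist": 0.2}
scores = {"device_dist": 80,
          "probability": score_probability(1.0),   # 100 分
          "time_dist": score_day_ratio(0.7),       # 90 分
          "ip_dist": 90}
print(total_score(scores, weights))                # 0.2*80 + 0.4*100 + 0.2*90 + 0.2*90 = 92.0
```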
步骤205,分析器在特征的分数大于分数阈值的情况下,确定特征满足第一安全阻断策略。
以表二的第一安全阻断策略为例,在宽松状态下,当特征的分数大于80分时,可以确定特征满足第一安全阻断策略;在严格状态下,当特征的分数大于90分时,可以确定特征满足第一安全阻断策略。
需要说明的是,确定特征满足第一安全阻断策略的方式有多种,只有在第一安全阻断策略中包含分数阈值的情况下,才会根据特征的分数确定特征满足第一安全阻断策略,所以步骤204至步骤205是可选的。
步骤206,分析器在特征满足第一安全阻断策略的情况下,生成第一阻断方案。
步骤206与步骤103相比,不同的是,步骤206中利用了用户输入的第一安全阻断策略,即在生成第一阻断方案的过程中考虑了用户的需求;步骤206中生成第一阻断方案的过程与步骤103中生成第一阻断方案的过程类似,具体可参阅步骤103的相关说明进行理解。
另外,基于前文的说明可知,第一阻断方案的内容可以包含阻断方向;基于此,第一阻断策略还可以包括阻断方向。在该实施例中,生成的第一阻断方案中的阻断方向可以与第一安全阻断策略中的阻断方向要求一致。例如,在第一安全阻断策略中,阻断方向要求为阻断从信任区域到非信任区域的疑似攻击行为,则第一阻断方案中的阻断方向便为:阻断从信任区域到非信任区域的疑似攻击行为。
同样地,第一阻断方案的内容还可以包含失效,第一阻断方案中的失效可以与第一安全阻断策略中的时效保持一致。
步骤207,分析器向第一安全设备发送第一阻断方案,以使得第一安全设备执行第一阻断方案。
步骤207与步骤104类似,具体可参照图3所示的实施例中步骤104的相关说明进行理解。
步骤208,接收输入的第二安全设备的第二安全阻断策略。
其中,第二安全设备可以为与第一安全设备不同的任一安全设备,第二安全阻断策略可以与第一安全阻断策略相同,也可以与第一安全阻断策略不同。
步骤209,在特征满足第二安全阻断策略的情况下,生成第二阻断方案,第二阻断方案用于阻断与目标疑似攻击行为相同类别的疑似攻击行为。
步骤210,向第二安全设备发送第二阻断方案,以使得第二安全设备执行第二阻断方案。
其中,以表一所示的第一阻断方案的内容为例,第二阻断方案与第一阻断方案相比,待阻断的疑似攻击行为的类别、动作相同,阻断方向和持续时间可以相同,也可以不同。
基于步骤206的相关说明可知,第一阻断方案中的阻断方向和持续时间与第一安全阻断策略中的阻断方向要求和时效一致,同理,第二阻断方案中的阻断方向和持续时间与第二安全阻断策略中的阻断方向要求和时效一致。
所以当第一安全阻断策略中的阻断方向要求与第二安全阻断策略中的阻断方向要求一致时,则第一阻断方案中的阻断方向与第二阻断方案中的阻断方向一致;当第一安全阻断策略中的时效与第二安全阻断策略中的时效一致时,则第一阻断方案中的时效与第二阻断方案中的时效一致。
需要说明的是,步骤208至步骤210与步骤203、步骤206、步骤207类似,具体可参照图4所示的实施例进行理解。
在该实施例中，通过步骤202得到的特征可以是全局特征，也可以是单局点特征。当通过步骤202得到的特征是单局点特征时，分析器可以通过步骤203至步骤207向第一安全设备发送第一阻断方案，以使得第一安全设备阻断疑似攻击行为。当通过步骤202得到的特征是全局特征时，分析器除了执行步骤203至步骤207外，还可以通过步骤208至步骤210向第二安全设备发送第二阻断方案，以使得第二安全设备阻断疑似攻击行为。
在某些场景下,分析器也可以通过步骤202获取到的单局点特征为不同网络的安全设备生成阻断方案,例如,第一安全设备和第二安全设备所在的网络类似甚至是相同,所以第一安全设备对应的单局点特征也可以适用于第二安全设备,因此,分析器也可以通过步骤202获取到的第一安全设备对应的单局点特征为第二安全设备生成第二阻断方案。
而当第一安全设备和第二安全设备所在的网络差别较大时,分析器可以基于第一安全设备的单局点特征生成适用于第一安全设备的阻断方案,基于第二安全设备的单局点特征生成适用于第二安全设备的阻断方案。
基于上文的两个实施例,本申请实施例提供的方法可以概括为图5所示。
具体地,如图5所示,分析器选取与目标疑似攻击行为同类别的历史疑似攻击行为的特征进行分析以确定阻断方案。分析器可以获取安全服务专家对疑似攻击行为的处理意见以确定上述特征。该特征可以是全局特征,也可以是单局点特征,还可以是全局特征和单局点特征。
在该实施例中,以同时选取全局特征和单局点特征为例,分别对全局特征和单局点特征进行打分,以得到全局特征的可信度分数以及单局点特征的可信度分数。
分析器先通过全局特征的可信度分数判断特征是否满足安全阻断策略,若全局特征的可信度分数大于或等于安全阻断策略中的分数阈值,则确定特征满足安全阻断策略,然后下发阻断方案。
若全局特征的可信度分数小于第一安全阻断策略中的分数阈值,则分析器通过单局点特征的可信度分数判断特征是否满足第一安全阻断策略,若单局点特征的可信度分数大于 或等于第一安全阻断策略中的分数阈值,则确定特征满足第一安全阻断策略,然后下发阻断方案。
若单局点特征的可信度分数小于第一安全阻断策略中的分数阈值,则不下发阻断方案。
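上述“先全局、后单局点”的判断流程可以概括为如下示意代码，仅为帮助理解图5所示的流程，函数名与参数形式为本文假设。

```python
def should_issue_blocking_plan(global_score, site_score, threshold):
    """先用全局特征的可信度分数判断，不满足时再用单局点特征的可信度分数判断。"""
    if global_score is not None and global_score >= threshold:
        return True   # 全局特征满足安全阻断策略，下发阻断方案
    if site_score is not None and site_score >= threshold:
        return True   # 单局点特征满足安全阻断策略，为该安全设备下发阻断方案
    return False      # 均不满足，不下发阻断方案
```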
相比于单局点特征,全局特征包含了更多数量的历史疑似攻击行为的特征,所以全局特征的可信度分数比单局点特征的可信度分数更能反映出阻断某一类历史疑似攻击行为的必要性,因此该实施例先利用全局特征的可信度分数判断是否为某一个安全设备下发阻断方案,能够保证判断的准确性。
由于单局点特征表示某一类历史疑似攻击行为在一个安全设备上的特征,所以单局点特征的可信度分数尽管不能反映出其他安全设备阻断某一类历史疑似攻击行为的必要性,但能够反映出这一个安全设备阻断某一类历史疑似攻击行为的必要性;因此当全局特征的可信度分数不足以让分析器下发阻断方案时,根据单局点特征的可信度分数判断是否为这一个安全设备下发阻断方案,也能够保证判断的准确性。
如图6所示,本申请还提供了一种处理疑似攻击行为的装置300的一个实施例,该实施例包括收发单元301、获取单元302和生成单元303。
可以理解的是,上述处理疑似攻击行为的方法还可以由其他设备执行,例如,安全设备。
收发单元301,用于接收来自第一安全设备的告警,告警包含目标疑似攻击行为的类别。
获取单元302,用于根据目标疑似攻击行为的类别获取与目标疑似攻击行为相同类别的历史疑似攻击行为的特征,特征包括历史疑似攻击行为被确定为攻击行为的概率值以及特征集合中的至少一种。特征集合包括历史疑似攻击行为在时间上的分布特征,或发起历史疑似攻击行为的互联网协议IP地址的分布特征。
生成单元303,用于根据特征生成第一阻断方案,第一阻断方案用于阻断与目标疑似攻击行为相同类别的疑似攻击行为。
收发单元301,还用于向第一安全设备发送第一阻断方案,以使得第一安全设备执行第一阻断方案。
作为一种可实现的方式,历史疑似攻击行为是由第一安全设备检测到的。
作为一种可实现的方式,获取单元302,还用于接收输入的第一安全阻断策略;生成单元303,用于在特征满足第一安全阻断策略的情况下,生成第一阻断方案。
作为一种可实现的方式,第一安全阻断策略中包含分数阈值,获取单元302,用于根据特征和特征的目标评分规则获取特征的分数,在特征的分数大于分数阈值的情况下,确定特征满足第一安全阻断策略。
作为一种可实现的方式,获取单元302,还用于接收输入的对于目标疑似攻击行为的处理意见,处理意见用于确定历史疑似攻击行为被确定为攻击行为的概率值。
作为一种可实现的方式,历史疑似攻击行为是由多个安全设备检测到的,特征集合还包括历史疑似攻击行为在多个安全设备上的分布特征。
作为一种可实现的方式,获取单元302,用于接收输入的第二安全设备的第二安全阻 断策略;生成单元303,还用于在特征满足第二安全阻断策略的情况下,生成第二阻断方案,第二阻断方案用于阻断与目标疑似攻击行为相同类别的疑似攻击行为;收发单元301,还用于向第二安全设备发送第二阻断方案,以使得第二安全设备执行第二阻断方案。
作为一种可实现的方式,收发单元301用于向第一安全设备的控制器发送第一阻断方案,以使得第一安全设备的控制器向第一安全设备发送第一阻断方案。
图7为本申请实施例提供的一种计算机设备的结构示意图。如图7所示,计算机设备900搭载有上述的处理疑似攻击行为的装置。计算机设备900由一般性的总线体系结构来实现。
计算机设备900包括至少一个处理器901、通信总线902、存储器903以及至少一个通信接口904。
可选地,处理器901可以是通用中央处理器(central processing unit,CPU)、网络处理器(network processor,NP)、微处理器、或者是一个或多个用于实现本申请方案的集成电路,例如,专用集成电路(application-specific integrated circuit,ASIC),可编程逻辑器件(programmable logic device,PLD)或其组合。上述PLD是复杂可编程逻辑器件(complex programmable logic device,CPLD),现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)或其任意组合。
通信总线902用于在上述组件之间传送信息。通信总线902分为地址总线、数据总线、控制总线等。为便于表示,图中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
可选地,存储器903是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其它类型的静态存储设备。可替换的,存储器903是随机存取存储器(random access memory,RAM)或者可存储信息和指令的其它类型的动态存储设备。可替换的,存储器903是电可擦可编程只读存储器(electrically erasable programmable read-only Memory,EEPROM)、只读光盘(compact disc read-only memory,CD-ROM)或其它光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其它磁存储设备,或者是能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其它介质,但不限于此。可选地,存储器903是独立存在的,并通过通信总线902与处理器901相连接。可选地,存储器903和处理器901集成在一起。
通信接口904使用任何收发器一类的装置,用于与其它设备或通信网络通信。通信接口904包括有线通信接口。可选地,通信接口904还包括无线通信接口。其中,有线通信接口例如为以太网接口。以太网接口是光接口,电接口或其组合。无线通信接口为无线局域网(wireless local area networks,WLAN)接口,蜂窝网络通信接口或其组合等。
通信接口904可以用于实现图6中获取单元的功能。
在具体实现中,作为一种实施例,计算机设备900包括多个处理器,如图7中所示的处理器901和处理器905。这些处理器中的每一个是一个单核处理器(single-CPU),或者是一个多核处理器(multi-CPU)。例如,处理器901和处理器905均包括2个核:CPU0和 CPU1。这里的处理器指一个或多个设备、电路、和/或用于处理数据(如计算机程序指令)的处理核。
在一些实施例中,存储器903用于存储执行本申请方案的程序代码99,处理器901执行存储器903中存储的程序代码99。也就是说,计算机设备900通过处理器901以及存储器903中的程序代码99,来实现上述的方法实施例。
本申请实施例还提供一种芯片,包括一个或多个处理器。所述处理器中的部分或全部用于读取并执行存储器中存储的计算机程序,以执行前述各实施例的方法。
可选地,该芯片该包括存储器,该存储器与该处理器通过电路或电线与存储器连接。进一步可选地,该芯片还包括通信接口,处理器与该通信接口连接。通信接口用于接收需要处理的数据和/或信息,处理器从该通信接口获取该数据和/或信息,并对该数据和/或信息进行处理,并通过该通信接口输出处理结果。该通信接口可以是输入输出接口。
在一些实现方式中,所述一个或多个处理器中还可以有部分处理器是通过专用硬件的方式来实现以上方法中的部分步骤,例如涉及神经网络模型的处理可以由专用神经网络处理器或图形处理器来实现。
本申请实施例提供的方法可以由一个芯片实现,也可以由多个芯片协同实现。
本申请实施例还提供了一种计算机存储介质,该计算机存储介质用于储存为上述计算机设备所用的计算机软件指令,其包括用于执行为计算机设备所设计的程序。
该计算机设备可以如前述图6对应实施例中处理疑似攻击行为的装置的功能。
本申请实施例还提供了一种计算机程序产品,该计算机程序产品包括计算机软件指令,该计算机软件指令可通过处理器进行加载来实现前述各个实施例所示的方法中的流程。
以上,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。

Claims (17)

  1. 一种处理疑似攻击行为的方法,其特征在于,包括:
    接收来自第一安全设备的告警,所述告警包含目标疑似攻击行为的类别;
    根据所述目标疑似攻击行为的类别获取与所述目标疑似攻击行为相同类别的历史疑似攻击行为的特征,所述特征包括所述历史疑似攻击行为被确定为攻击行为的概率值以及特征集合中的至少一种特征,所述特征集合包括所述历史疑似攻击行为在时间上的分布特征,或发起所述历史疑似攻击行为的互联网协议IP地址的分布特征;
    根据所述特征生成第一阻断方案,所述第一阻断方案用于阻断与所述目标疑似攻击行为相同类别的疑似攻击行为;
    向所述第一安全设备发送所述第一阻断方案,以使得所述第一安全设备执行所述第一阻断方案。
  2. 根据权利要求1所述的方法,其特征在于,所述历史疑似攻击行为是由所述第一安全设备检测到的。
  3. 根据权利要求1或2所述的方法,其特征在于,所述方法还包括:
    接收输入的第一安全阻断策略;
    所述根据所述特征生成第一阻断方案包括:
    在所述特征满足所述第一安全阻断策略的情况下,生成所述第一阻断方案。
  4. 根据权利要求3所述的方法,其特征在于,所述第一安全阻断策略包含分数阈值,所述方法还包括:
    根据所述特征和所述特征的目标评分规则获取所述特征的分数;
    在所述特征的分数大于所述分数阈值的情况下,确定所述特征满足所述第一安全阻断策略。
  5. 根据权利要求1至4任一项所述的方法,其特征在于,所述方法还包括:
    接收输入的处理意见,所述处理意见用于确定所述历史疑似攻击行为被确定为攻击行为的概率值。
  6. 根据权利要求1所述的方法,其特征在于,所述历史疑似攻击行为是由多个安全设备检测到的,所述特征集合还包括所述历史疑似攻击行为在所述多个安全设备上的分布特征。
  7. 根据权利要求6所述的方法,其特征在于,所述方法还包括:
    接收输入的第二安全设备的第二安全阻断策略;
    在所述特征满足所述第二安全阻断策略的情况下,生成第二阻断方案,所述第二阻断方案用于阻断与所述目标疑似攻击行为相同类别的疑似攻击行为;
    向所述第二安全设备发送所述第二阻断方案,以使得所述第二安全设备执行所述第二阻断方案。
  8. 一种处理疑似攻击行为的装置,其特征在于,包括:
    收发单元,用于接收来自第一安全设备的告警,所述告警包含目标疑似攻击行为的类别;
    获取单元,用于根据所述目标疑似攻击行为的类别获取与所述目标疑似攻击行为相同类别的历史疑似攻击行为的特征,所述特征包括所述历史疑似攻击行为被确定为攻击行为的概率值以及特征集合中的至少一种特征,所述特征集合包括所述历史疑似攻击行为在时间上的分布特征,或发起所述历史疑似攻击行为的互联网协议IP地址的分布特征;
    生成单元,用于根据所述特征生成第一阻断方案,所述第一阻断方案用于阻断与所述目标疑似攻击行为相同类别的疑似攻击行为;
    所述收发单元,还用于向所述第一安全设备发送所述第一阻断方案,以使得所述第一安全设备执行所述第一阻断方案。
  9. 根据权利要求8所述的装置,其特征在于,所述历史疑似攻击行为是由所述第一安全设备检测到的。
  10. 根据权利要求8或9所述的装置,其特征在于,
    所述获取单元,还用于接收输入的第一安全阻断策略;
    所述生成单元,还用于在所述特征满足所述第一安全阻断策略的情况下,生成所述第一阻断方案。
  11. 根据权利要求10所述的装置,其特征在于,所述第一安全阻断策略包含分数阈值,
    所述获取单元,用于根据所述特征和所述特征的目标评分规则获取所述特征的分数;
    所述获取单元，还用于在所述特征的分数大于所述分数阈值的情况下，确定所述特征满足所述第一安全阻断策略。
  12. 根据权利要求8至11任一项所述的装置,其特征在于,所述获取单元还用于:
    接收输入的处理意见,所述处理意见用于确定所述历史疑似攻击行为被确定为攻击行为的概率值。
  13. 根据权利要求8所述的装置,其特征在于,所述历史疑似攻击行为是由多个安全设备检测到的,所述特征集合还包括所述历史疑似攻击行为在所述多个安全设备上的分布特征。
  14. 根据权利要求13所述的装置,其特征在于,
    所述获取单元,还用于接收输入的第二安全设备的第二安全阻断策略;
    所述生成单元,还用于在所述特征满足所述第二安全阻断策略的情况下,生成第二阻断方案,所述第二阻断方案用于阻断与所述目标疑似攻击行为相同类别的疑似攻击行为;
    所述收发单元,还用于向所述第二安全设备发送所述第二阻断方案,以使得所述第二安全设备执行所述第二阻断方案。
  15. 一种计算机设备,其特征在于,所述计算机设备包括存储器和处理器,所述处理器,用于执行存储器中存储的计算机程序或指令,以使所述计算机设备执行如权利要求1-7中任一项所述的方法。
  16. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质包括程序指令,当所述程序指令被直接或者间接执行时,使得如权利要求1至7中任一所述的方法被实现。
  17. 一种计算机程序产品,其特征在于,包括指令,当所述指令在计算机上运行时,使得所述计算机执行权利要求1至7中任一项所述的方法。
PCT/CN2023/082044 2022-03-25 2023-03-17 一种处理疑似攻击行为的方法及相关装置 WO2023179461A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210302447.9A CN116846571A (zh) 2022-03-25 2022-03-25 一种处理疑似攻击行为的方法及相关装置
CN202210302447.9 2022-03-25

Publications (1)

Publication Number Publication Date
WO2023179461A1 true WO2023179461A1 (zh) 2023-09-28

Family

ID=88099878

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/082044 WO2023179461A1 (zh) 2022-03-25 2023-03-17 一种处理疑似攻击行为的方法及相关装置

Country Status (2)

Country Link
CN (1) CN116846571A (zh)
WO (1) WO2023179461A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103139184A (zh) * 2011-12-02 2013-06-05 中国电信股份有限公司 智能网络防火墙设备及网络攻击防护方法
US20160226894A1 (en) * 2015-02-04 2016-08-04 Electronics And Telecommunications Research Institute System and method for detecting intrusion intelligently based on automatic detection of new attack type and update of attack type model
CN111614662A (zh) * 2020-05-19 2020-09-01 网神信息技术(北京)股份有限公司 针对勒索病毒的拦截方法和装置
CN112468520A (zh) * 2021-01-28 2021-03-09 腾讯科技(深圳)有限公司 一种数据检测方法、装置、设备及可读存储介质
WO2021046811A1 (zh) * 2019-09-12 2021-03-18 奇安信安全技术(珠海)有限公司 一种攻击行为的判定方法、装置及计算机存储介质
CN113194058A (zh) * 2020-01-14 2021-07-30 深信服科技股份有限公司 Web攻击检测方法、设备、网站应用层防火墙及介质
CN113496033A (zh) * 2020-04-08 2021-10-12 腾讯科技(深圳)有限公司 访问行为识别方法和装置及存储介质

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103139184A (zh) * 2011-12-02 2013-06-05 中国电信股份有限公司 智能网络防火墙设备及网络攻击防护方法
US20160226894A1 (en) * 2015-02-04 2016-08-04 Electronics And Telecommunications Research Institute System and method for detecting intrusion intelligently based on automatic detection of new attack type and update of attack type model
WO2021046811A1 (zh) * 2019-09-12 2021-03-18 奇安信安全技术(珠海)有限公司 一种攻击行为的判定方法、装置及计算机存储介质
CN113194058A (zh) * 2020-01-14 2021-07-30 深信服科技股份有限公司 Web攻击检测方法、设备、网站应用层防火墙及介质
CN113496033A (zh) * 2020-04-08 2021-10-12 腾讯科技(深圳)有限公司 访问行为识别方法和装置及存储介质
CN111614662A (zh) * 2020-05-19 2020-09-01 网神信息技术(北京)股份有限公司 针对勒索病毒的拦截方法和装置
CN112468520A (zh) * 2021-01-28 2021-03-09 腾讯科技(深圳)有限公司 一种数据检测方法、装置、设备及可读存储介质

Also Published As

Publication number Publication date
CN116846571A (zh) 2023-10-03

Similar Documents

Publication Publication Date Title
US11212306B2 (en) Graph database analysis for network anomaly detection systems
US10887330B2 (en) Data surveillance for privileged assets based on threat streams
US10855700B1 (en) Post-intrusion detection of cyber-attacks during lateral movement within networks
Valdes et al. Probabilistic alert correlation
US10587640B2 (en) System and method for attribution of actors to indicators of threats to a computer system and prediction of future threat actions
US10581915B2 (en) Network attack detection
US10708290B2 (en) System and method for prediction of future threat actions
US20200244676A1 (en) Detecting outlier pairs of scanned ports
US8161538B2 (en) Stateful application firewall
US11470110B2 (en) Identifying and classifying community attacks
US20140214938A1 (en) Identifying participants for collaboration in a threat exchange community
CN108809749B (zh) 基于采样率来执行流的上层检查
US20120233098A1 (en) Multiple Hypothesis Tracking
US11770396B2 (en) Port scan detection using destination profiles
JP2019523584A (ja) ネットワーク攻撃防御システムおよび方法
US20120233097A1 (en) Multiple Hypothesis Tracking
CN109561097B (zh) 结构化查询语言注入安全漏洞检测方法、装置、设备及存储介质
WO2019051166A1 (en) DETECTION AND REDUCTION OF THE EFFECTS OF CYBERSECURITY THREATS ON A COMPUTER NETWORK
CN113645233B (zh) 流量数据的风控智能决策方法、装置、电子设备和介质
Manimaran et al. The conjectural framework for detecting DDoS attack using enhanced entropy based threshold technique (EEB-TT) in cloud environment
Meng et al. Design of cloud-based parallel exclusive signature matching model in intrusion detection
Prashanth et al. Using random forests for network-based anomaly detection at active routers
TWI682644B (zh) 網路節點的移動防護方法及網路防護伺服器
WO2023179461A1 (zh) 一种处理疑似攻击行为的方法及相关装置
US20230262077A1 (en) Cybersecurity systems and methods for protecting, detecting, and remediating critical application security attacks

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23773709

Country of ref document: EP

Kind code of ref document: A1