US20230030659A1 - System and method for detecting lateral movement and data exfiltration - Google Patents
- Publication number: US20230030659A1 (application US 17/816,040)
- Authority: US (United States)
- Prior art keywords: network, compromise, data, order indicator, transmitted over
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04L63/1416—Event detection, e.g. attack signature detection
- G06F21/552—Detecting local intrusion or implementing counter-measures involving long-term monitoring or reporting
- G06F21/554—Detecting local intrusion or implementing counter-measures involving event detection and direct action
- G06F21/561—Virus type analysis
- G06F21/564—Static detection by virus signature recognition
- G06F21/566—Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
- G06F21/577—Assessing vulnerabilities and evaluating computer system security
- H04L63/0876—Authentication of entities based on the identity of the terminal or configuration, e.g. MAC address, hardware or software configuration or device fingerprint
- H04L63/102—Entity profiles
- H04L63/1425—Traffic logging, e.g. anomaly detection
- H04L63/145—Countermeasures against malicious traffic involving the propagation of malware through the network, e.g. viruses, trojans or worms
- G06F2221/2111—Location-sensitive, e.g. geographical location, GPS
Definitions
- Embodiments of the invention relate to protecting computers and networks from malicious software and activities.
- embodiments of the invention relate to a system and method for detection of lateral movement and data exfiltration.
- a technique used to gain unauthorized access to computers and databases is to load malicious software, or malware, onto a computer. Such malware is designed to disrupt computer operation, gather sensitive information, or grant unauthorized individuals access to the computer.
- As awareness of malware increases, the techniques used to load malware onto computers (also called a malware infection) have grown more sophisticated. As a result, legacy security solutions that use a structured process (e.g., signature and heuristics matching) or analyze agent behavior in an isolated context fail to detect threat activities including, but not limited to, loading malware, lateral movement, data exfiltration, fraudulent transactions, and inside attacks.
- a system configured to detect a threat activity on a network.
- the system includes a digital device configured to detect one or more first order indicators of compromise on a network, detect one or more second order indicators of compromise on the network, generate a risk score based on correlating said first order indicator of compromise on the network with the second order indicator of compromise on said network, and generate at least one incident alert based on comparing the risk score to a threshold.
- FIG. 1 illustrates a block diagram of a network environment that includes a system configured to detect threat activities according to an embodiment.
- FIG. 2 illustrates a flow diagram of a method to detect threat activities on a network according to an embodiment.
- FIG. 3 illustrates a block diagram of a method to detect one or more second order indicators of compromise according to an embodiment.
- FIG. 4 illustrates an embodiment of a client, an end-user device, or a digital device according to an embodiment.
- FIG. 5 illustrates an embodiment of a system for detecting threat activities according to an embodiment.
- Embodiments of a system to detect threat activities are configured to detect one or more threat activities at advanced stages of a threat kill chain, including lateral movement of malware objects inside networks and enterprises, data gathering and exfiltration, and compromised or fraudulent business transactions.
- the system is configured to extend protection coverage to the complete kill chain.
- a system is configured to monitor north-south traffic and east-west traffic simultaneously.
- Such a system is configured with multiple collectors for monitoring north-south traffic and east-west traffic.
- North-south traffic monitoring will detect threat activity between internal (e.g., corporate network or LAN) devices and external (e.g., extranet or Internet) devices, including but not limited to web servers.
- East-west traffic monitoring will detect threat activities among internal devices, including those in a demilitarized zone (“DMZ”) otherwise known as a perimeter network.
- East-west traffic can contain the same set of network protocols seen on north-south boundaries, as well as network protocols meant for internal access and data sharing.
- east-west protocols include, but are not limited to, remote desktop protocol (“RDP”) for remote access to Windows computers, active directory services, and server message block (“SMB”) for file sharing.
- Embodiments of the system deploy collectors to monitor both north-south and east-west traffic, analyze the traffic by extracting first order and second order indicators, and correlate those indicators in a centralized location and interface, that is, a single pane of glass. This provides the benefit of detecting threats regardless of their stage in the kill chain, whether the threat is an external or an inside attack, and whether the threat is the lateral movement of an infiltration.
- FIG. 1 illustrates a block diagram of a network environment 100 that includes a system configured to detect threat activities according to an embodiment.
- Systems and methods embodied in the network environment 100 may detect threat activity, malicious activity, identify malware, identify exploits, take preventive action, generate signatures, generate reports, determine malicious behavior, determine targeted information, recommend steps to prevent attack, and/or provide recommendations to improve security.
- the network environment 100 comprises a data center network 102 and a production network 104 that communicate over a communication network 106 .
- the data center network 102 comprises a security server 108 .
- the production network 104 comprises a plurality of end-user devices 110 .
- the security server 108 and the end-user devices 110 may include digital devices.
- a digital device is any device with one or more processing units and memory.
- FIGS. 4 and 5 illustrate embodiments of a digital device.
- the security server 108 is a digital device configured to detect threat activities.
- the security server 108 receives suspicious data from one or more data collectors.
- the data collectors may be resident within or in communication with network devices such as Intrusion Prevention System (“IPS”) collectors 112 a and 112 b , firewalls 114 a and 114 b , ICAP/WCCP collectors 116 , milter mail plug-in collectors 118 , switch collectors 120 , and/or access points 124 .
- data collectors may be at one or more points within the communication network 106 .
- a data collector, which may include a tap or span port (e.g., the span port IDS collector at switch 120 ), is configured to intercept network data from a network.
- the data collector may be configured to detect suspicious data. Suspicious data is any data collected by the data collector that has been flagged as suspicious by the data collector and/or any data that is to be further processed within the virtualization environment.
- the data collectors may filter the data before flagging the data as suspicious and/or providing the collected data to the security server 108 .
- the data collectors may filter out plain text but collect executables or batch files.
- the data collectors may perform intelligent collecting. For example, data may be hashed and compared to a whitelist.
- the whitelist may identify data that is safe.
- the whitelist may identify digitally signed data by trusted entities or data received from a known trusted source as safe. Further, the whitelist may identify previously received information that has been determined to be safe. If data has been previously received, tested within the environments, and determined to be sufficiently trustworthy, the data collector may allow the data to continue through the network.
- the data collectors may be updated by the security server 108 to help the data collectors recognize sufficiently trustworthy data and to take corrective action (e.g., quarantine and alert an administrator) if untrustworthy data is recognized. For an embodiment, if data is not identified as safe, the data collectors may flag the data as suspicious for further assessment.
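The hash-and-whitelist filtering described above can be sketched in a few lines; the digest algorithm, whitelist contents, and function name are illustrative assumptions, not details of the disclosed system.

```python
import hashlib

# Illustrative whitelist of SHA-256 digests of data previously
# determined to be safe (e.g., tested, or signed by a trusted entity).
WHITELIST = {hashlib.sha256(b"known-good installer bytes").hexdigest()}

def classify(data: bytes) -> str:
    """Hash collected data and compare the digest to the whitelist.

    Whitelisted data is "safe" and may continue through the network;
    anything else is "suspicious" and would be forwarded to the
    security server for further assessment.
    """
    digest = hashlib.sha256(data).hexdigest()
    return "safe" if digest in WHITELIST else "suspicious"
```

In practice the whitelist would be populated and refreshed by the security server as it tests and clears new objects.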
- one or more agents or other modules may monitor network traffic for common behaviors and may configure a data collector to collect data when data is directed in a manner that falls outside normal parameters.
- the agent may determine or be configured to detect that a computer has been deactivated, a particular computer does not typically receive any data, data received by a particular computer typically comes from a limited number of sources, or a particular computer typically does not send data of a given pattern to certain destinations. If data is directed to a digital device in a manner that is not typical, the data collector may flag such data as suspicious and provide the suspicious data to the security server 108 .
- Network devices include any device configured to receive and provide data over a network. Examples of network devices include, but are not limited to, routers, bridges, security appliances, firewalls, web servers, mail servers, wireless access points (e.g., hotspots), and switches.
- network devices include IPS collectors 112 a and 112 b , firewalls 114 a and 114 b , Internet content adaptation protocol (“ICAP”)/web cache communication protocol (“WCCP”) servers 116 , devices including milter mail plug-ins 118 , switches 120 , and/or access points 124 .
- the IPS collectors 112 a and 112 b may include any anti-malware device including IPS systems, intrusion detection and prevention systems (“IDPS”), or any other kind of network security appliances.
- the firewalls 114 a and 114 b may include software and/or hardware firewalls.
- the firewalls 114 a and 114 b may be embodied within routers, access points, servers (e.g., web servers), mail filters, or appliances.
- ICAP/WCCP servers 116 include any web server or web proxy server configured to allow access to a network and/or the Internet.
- Network devices including milter mail plug-ins 118 may include any mail server or device that provides mail and/or filtering functions and may include digital devices that implement milter, mail transfer agents (“MTAs”), sendmail, and postfix, for example.
- Switches 120 include any switch or router.
- the data collector may be implemented as a TAP, SPAN port, and/or intrusion detection system (“IDS”).
- Access points 124 include any device configured to provide wireless connectivity with one or more other digital devices.
- the production network 104 is any network that allows one or more end-user devices 110 to communicate over the communication network 106 .
- the communication network 106 is any network that may carry data (encoded, compressed, and/or otherwise) from one digital device to another.
- the communication network 106 may comprise a LAN and/or WAN. Further, the communication network 106 may comprise any number of networks.
- the communication network 106 is the Internet.
- FIG. 1 is exemplary and does not limit systems and methods described herein to the use of only those technologies depicted.
- data collectors may be implemented in any web or web proxy server and are not limited to servers that implement Internet content adaptation protocol (“ICAP”) and/or web cache communication protocol (“WCCP”).
- Similarly, data collectors may be implemented in any mail server and are not limited to mail servers that implement milter.
- Data collectors may be implemented at any point in one or more networks.
- Although FIG. 1 depicts a limited number of digital devices, collectors, routers, access points, and firewalls, there may be any kind and number of devices.
- security servers 108 there may be any number of security servers 108 , end-user devices 110 , intrusion prevention system (“IPS”) collectors 112 a and 112 b , firewalls 114 a and 114 b , ICAP/WCCP collectors 116 , milter mail plug-ins 118 , switches 120 , and/or access points 124 .
- data center networks 102 and/or production networks 104 there may be any number of data center networks 102 and/or production networks 104 .
- the data collectors can take the form of a hardware appliance, pure software running on a native operating system (“OS”), or a virtual appliance for virtualized environments.
- FIG. 2 illustrates a block diagram of a method to detect threat activities on a network according to an embodiment.
- the method includes detecting one or more first order indicators of compromise ( 202 ).
- one or more data collectors are configured to intercept network data between network devices.
- a data collector is configured to detect network traffic and to determine traffic patterns across the protocol stack between network devices.
- a data collector is configured to determine traffic patterns between network devices on one or more of protocol stack layers including, but not limited to, layers 2-7.
- the data collector may be configured to detect and to determine traffic patterns for address resolution protocol (“ARP”) traffic, dynamic host configuration protocol (“DHCP”) traffic, Internet control message protocol (“ICMP”) traffic between media access control (“MAC”) or Internet protocol (“IP”) addresses, transmission control protocol (“TCP”)/IP and user datagram protocol (“UDP”)/IP traffic between IP and port number pairs, up the stack to hypertext transfer protocol (“HTTP”), secure shell (“SSH”), server message block (“SMB”) application protocols, patterns between application clients and servers, and industry-specific applications like wire transfer transaction processing, and patterns between bank accounts.
- the data collector is configured to extract file objects transferred along with their meta information such as HTTP headers, URLs, source and destination IP addresses, and to transmit the file objects to a security server.
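The extracted file object and its meta information could be represented by a simple record such as the following; all field names are hypothetical, chosen only to mirror the meta information listed above.

```python
from dataclasses import dataclass

@dataclass
class FileObjectRecord:
    """Sketch of an extracted file object and the meta information a
    collector could attach before transmitting it to the security
    server (field names are illustrative)."""
    url: str
    http_headers: dict
    src_ip: str
    dst_ip: str
    payload: bytes

record = FileObjectRecord(
    url="http://example.com/update.exe",
    http_headers={"Content-Type": "application/octet-stream"},
    src_ip="198.51.100.4",
    dst_ip="10.0.0.12",
    payload=b"MZ...",  # truncated executable bytes, for illustration
)
```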
- the method optionally includes detecting one or more indicators of a compromised entity ( 204 ).
- a compromised entity includes, but is not limited to, a digital device, an application, and a rogue user on a network.
- a system includes one or more data collectors configured to operate as a honey-host used to attract the interest of any one of malicious software, a rogue user, or a digital device performing threat activities.
- An indicator of a compromised entity includes, but is not limited to, an attempt to probe a honey-host for open ports in order to gain access to the honey-host, and an attempt to examine or move data on a honey-host.
- a data collector is configured to have one or more of an interesting domain name and IP address (that is, a domain name and/or an IP address that a compromised entity would attempt to access); it is given an owner/user with interesting corporate roles, relevant data files, documents, and user activity traces.
- a data collector for an embodiment, is configured to log any probing, login, and access activities against this honey-host. The logged data will, e.g., identify which device, application, and user(s) have attempted any activities, at what time, along with the details of the activity. For example, if files are copied from the honey-host, a data collector is configured to log information including the file name, hash, and where the file is moved to.
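A minimal honey-host activity log along the lines described above might look like this; the class and field names are assumptions for illustration only.

```python
import time
from dataclasses import dataclass, field

@dataclass
class HoneyHostLog:
    """Log of probing, login, and access activities against a honey-host.

    Each entry records which device and user attempted the activity,
    at what time, and activity-specific details (e.g., file name,
    hash, and destination for a file copy)."""
    entries: list = field(default_factory=list)

    def record(self, source_device: str, user: str, activity: str, **details):
        self.entries.append({
            "time": time.time(),
            "source_device": source_device,
            "user": user,
            "activity": activity,   # e.g., "port-probe", "login", "file-copy"
            "details": details,
        })

log = HoneyHostLog()
log.record("10.0.0.7", "jdoe", "port-probe", ports=[22, 445])
log.record("10.0.0.7", "jdoe", "file-copy",
           file_name="payroll.xlsx", file_hash="ab12cd...",
           destination="10.0.0.9")
```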
- The method includes detecting one or more second order indicators of compromise ( 206 ). Second order indicators of compromise include, but are not limited to, behavior patterns of a network device observed from the network device and behavior patterns of an end-user device.
- a data collector is configured to generate an event log and/or a transaction log for the network device.
- a data collector, for an embodiment, is configured to detect behavior patterns of an end-user device, endpoint device, and/or other client peers on a network (for example, as used by a user), based on network traffic, at a business activity level.
- An example of detecting behavior patterns of an end-user device in a software engineering environment includes detecting that an individual developer's workstation has started to pull down a large amount of source code from one or more repositories while, at the same time, very little push activity (updates to the repositories) is taking place. This kind of suspicious condition is detected through continuous monitoring and the building of typical pattern profiles. In such an example, the system is configured to generate an alert when the observed behavior patterns deviate from the typical profiles.
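As a sketch of profile-based deviation detection, a simple k-sigma rule can serve as an illustrative stand-in for whatever profile comparison an implementation actually uses; the numbers below are hypothetical.

```python
from statistics import mean, stdev

def deviates_from_profile(history, observed, k=3.0):
    """Return True when an observed measurement deviates from the
    typical pattern profile by more than k standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > k * max(sigma, 1e-9)

# Daily megabytes of source code pulled from repositories by one
# workstation on typical days (hypothetical profile).
pull_history = [120, 135, 110, 128, 140, 125, 130]
```

A sudden multi-gigabyte pull deviates from this profile and would trigger an alert, while a value near the historical mean would not.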
- Another example of detecting behavior patterns of an end-user device is a bank environment in which well-known interaction patterns among the desktop computers of different functional departments are established from the business workflow using techniques including those described herein.
- a human resource manager's machine will not generally communicate with a wire-transfer processing server; any direct network connectivity outside an information technology (“IT”) maintenance window will generate an alert for a second order suspicious indicator.
- a data collector is configured to detect these types of behavior patterns using techniques including those described herein.
- the method also includes generating a risk score based on correlating the one or more first order indicators of compromise with the one or more second order indicators of compromise ( 208 ).
- the risk score is generated based on an asset value assigned to a network device or end-user device and the current security posture of the network device or end-user device. For example, if the network device is assigned a high asset value the generated risk score would be higher for the same correlation result than a network device assigned a lower asset value.
- for a device having a security posture above a normal or defined average level, the generated risk score would be higher than for the same correlation result on a network device having a security posture at or below normal.
- a generated risk score based on a security posture is more dynamic than a generated risk score based on an asset value.
- a security server is configured to determine that some group of devices or users may be subject to a targeted attack and warrant special monitoring.
- the security posture can be increased for those devices or users for a given period of time (e.g., a high security posture) so that the risk score for any threat against these devices or users will have an escalation factor applied in order to prioritize the response.
- a data collector is configured to correlate one or more first order indicators of compromise with the one or more second order indicators of compromise based on network patterns/data received from one or more data collectors using techniques including those described herein. Further, the data collector is configured to generate a risk score based on the correlation result using techniques including those described herein.
- a security server is configured to correlate one or more first order indicators of compromise with the one or more second order indicators of compromise based on network patterns/data received from one or more data collectors using techniques including those described herein. Further, the security server is configured to generate a risk score based on the correlation result using techniques including those described herein.
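The scoring described above could be sketched as follows; the correlation formula, weights, and function name are assumptions for illustration, not the patented method.

```python
def risk_score(first_order, second_order, asset_value=1.0, posture_factor=1.0):
    """Correlate first and second order indicator severities (each in
    [0, 1]) and scale the result by the device's asset value and its
    current security posture (the escalation factor described above).
    """
    # Simple sketch of correlation: second order indicators amplify
    # first order indicators observed for the same device.
    base = sum(first_order) * (1.0 + sum(second_order))
    return base * asset_value * posture_factor
```

For identical indicators, a device with a higher asset value or an elevated security posture yields a higher score, so its incidents cross the alert threshold sooner.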
- a hierarchy of data aggregation and abstraction can be created to scale the coverage to larger networks and to support filtered sharing of threat intelligence data.
- a single collector may cover the network of a single department at a given site having many departments.
- the data from multiple collectors of corresponding departments at the site can be aggregated to represent the entire site.
- the method includes generating at least one incident alert based on comparing the risk score to a threshold ( 210 ).
- the incident alert includes lateral movement and data exfiltration incident alerts.
- multiple alerts are aggregated in time when multiple events of the same type happen to the same target device within a short period. The aggregation is achieved by representing the number of occurrences of the same event within the given interval as a single incident alert. This results in a more meaningful alert to the end user, without loss of important information, while avoiding flooding the user with many individual events.
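The time-based aggregation described above can be sketched as follows; the event-tuple shape and the 300-second window are illustrative assumptions:

```python
def aggregate_alerts(events, interval=300):
    """Collapse repeated (target, event_type) occurrences within `interval`
    seconds into a single incident carrying an occurrence count.
    `events` is an iterable of (timestamp, target, event_type) tuples."""
    incidents = []
    window = {}  # (target, event_type) -> index of the open incident
    for ts, target, etype in sorted(events):
        key = (target, etype)
        if key in window and ts - incidents[window[key]]["first_seen"] <= interval:
            incidents[window[key]]["count"] += 1       # same event, same window
            incidents[window[key]]["last_seen"] = ts
        else:
            window[key] = len(incidents)               # open a new incident
            incidents.append({"target": target, "type": etype,
                              "first_seen": ts, "last_seen": ts, "count": 1})
    return incidents
```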
- an alert may be generated based on a security policy.
- a security policy may include, but is not limited to, a watch-list of one or more critical internal IP addresses and a red-list, which includes known malicious addresses, of one or more external IP addresses.
- an incident alert would be generated when a network device or end-user device communicates with an IP address on the watch-list and/or on the red-list.
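A hedged sketch of such a policy check; the list contents and alert shape are illustrative assumptions:

```python
WATCH_LIST = {"10.0.0.5"}       # critical internal IPs (illustrative)
RED_LIST = {"203.0.113.7"}      # known malicious external IPs (illustrative)

def policy_alert(src_ip: str, dst_ip: str):
    """Return an incident alert dict when either endpoint matches the
    watch-list or red-list; otherwise None."""
    hits = []
    for ip in (src_ip, dst_ip):
        if ip in WATCH_LIST:
            hits.append(("watch-list", ip))
        if ip in RED_LIST:
            hits.append(("red-list", ip))
    return {"src": src_ip, "dst": dst_ip, "matches": hits} if hits else None
```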
- a data collector is configured to generate at least one incident alert based on comparing the risk score to a threshold using the techniques described herein.
- a security server is configured to generate at least one incident alert based on comparing the risk score to a threshold using the techniques described herein.
- a hierarchy of data aggregation and abstraction can be created to scale the coverage to larger networks and to support filtered sharing of threat intelligence data.
- a single data collector may cover the network of a single department at a given site; data from multiple collectors of corresponding departments can be aggregated to represent the given site as a whole.
- FIG. 3 illustrates a block diagram of a method to detect one or more second order indicators of compromise according to an embodiment.
- the method includes generating a behavior profile for at least one network device or end-user device ( 302 ).
- a behavior profile is generated at multiple levels of activities across the protocol stack using heuristics or supervised or unsupervised machine learning.
- An example of a behavior profile for an end-user device includes, but is not limited to, a network user's role in a network and authorization to use the end-user device on the network, one or more activities a network user performs on the end-user device, a list of one or more IP addresses that this device connects to on a weekly basis, a distribution of the time duration for one or more connections, a total amount of data exchanged, a breakdown of the amount of data in each direction, and a characterization of variances in any of the above information over a period of time.
- the detection mechanism, such as a data collector, maintains a behavior profile on a rolling basis (a long-term behavior profile).
- the detection mechanism is configured to build a real-time behavior profile; e.g., based on normalized daily stats.
- the difference between a long-term behavior profile and a real-time behavior profile will raise an alert for a threat activity.
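One way to compare a real-time profile against a long-term profile is a simple deviation test; the (mean, stddev) profile representation and the 3-sigma tolerance are illustrative assumptions, not the patent's method:

```python
def profile_deviation(long_term: dict, real_time: dict, tolerance=3.0):
    """Flag metrics in the real-time profile that deviate from the long-term
    profile by more than `tolerance` standard deviations. Each long-term
    entry is (mean, stddev); real-time entries are observed daily values."""
    anomalies = {}
    for metric, (mean, std) in long_term.items():
        observed = real_time.get(metric, 0.0)
        if std > 0 and abs(observed - mean) / std > tolerance:
            anomalies[metric] = observed
    return anomalies
```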
- the behavior profiles are generated for each of the monitored network devices and end-user devices during a training phase.
- a security server is configured to generate one or more behavior profiles based on one or more of network traffic patterns, behavior patterns of a network device, and behavior patterns of an end-user device.
- a data collector is configured to generate one or more behavior profiles based on one or more of network traffic patterns, behavior patterns of a network device, and behavior patterns of an end-user device.
- the method also includes detecting one or more real-time observations ( 304 ). These real-time observations are represented as real-time behavior profiles. Detecting one or more real-time observations, according to an embodiment, is part of a production phase.
- a data collector is configured to intercept network data and analyze network traffic patterns across the protocol stack in real time to generate real-time observations using techniques including those described herein.
- a data collector is configured to intercept network data and transmit the network data to security server using techniques including those described herein.
- the security server is configured to receive the network data and analyze the network traffic patterns across the protocol stack in real time to generate real-time observations.
- the method includes comparing the one or more real-time observations to at least one behavior profile ( 306 ), i.e., a long-term behavior profile.
- real-time observations are compared against one or more generated behavior profiles. Comparing the one or more real-time observations to at least one behavior profile, according to an embodiment, is part of a production phase.
- a data collector is configured to compare the real-time observations of a network device and/or an end-user device to at least one behavior profile generated for the network device or the end-user device using techniques including those described herein.
- a security server is configured to compare the real-time observations of a network device or an end-user device to at least one behavior profile generated for the network device or the end-user device based on information received from one or more data collectors using techniques including those described herein.
- the method includes generating one or more anomalies based on a comparison of the real-time observations to the behavior profile ( 308 ).
- Generating one or more anomalies based on a comparison of the real-time observations to the behavior profile is part of a production phase.
- the anomalies generated include, but are not limited to, network anomalies, device anomalies and user anomalies.
- a network anomaly includes the real-time networking traffic pattern observations that differ from the one or more behavior profiles.
- a device anomaly includes the real-time device behavior observations that differ from the one or more behavior profiles of the network device or the end-user device.
- a user anomaly includes the real-time observations of behavior of an end-user device under the control of a network user that differs from the one or more behavior profiles of the end-user device under control of the user.
- the real-time observations may be compared against one or more of a network anomaly, a device anomaly, and a user anomaly.
- a security server is configured to generate one or more anomalies based on data received from one or more data collectors, including the data described herein. Further, the anomalies, according to an embodiment, are correlated with the first order indicators of compromise based on one or more of a security policy, an IP address, a device fingerprint, a business application, and a user identity.
- a data collector is configured to generate one or more anomalies using techniques including those described herein.
- one or more indicators of a compromised entity detected by one or more honey-hosts are correlated with both one or more first order indicators and one or more second order indicators to identify a network device or an application that may have been compromised, and to identify a user, such as a rogue user, on an end-user device initiating suspicious activities on the network.
- Some examples of suspicious activities include probing for high-value assets or gathering sensitive information from the honey-hosts.
- An exemplary implementation includes a system configured to detect sensitive data using expression matching using techniques including those described herein.
- sensitive data includes social security numbers, credit card numbers, and documents including keywords.
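Expression matching over these categories might be sketched as follows; the patterns are illustrative, and a production detector would typically add validation such as Luhn checks for card numbers:

```python
import re

# Illustrative patterns only, not the patent's expressions.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")   # 16 digits, optional separators
KEYWORDS = ("confidential", "proprietary")

def find_sensitive(text: str):
    """Return the categories of sensitive data matched in `text`."""
    hits = []
    if SSN_RE.search(text):
        hits.append("ssn")
    if CARD_RE.search(text):
        hits.append("card")
    if any(k in text.lower() for k in KEYWORDS):
        hits.append("keyword")
    return hits
```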
- the system according to an embodiment is configured to detect suspicious outbound data movement using techniques including those described herein.
- the system is configured to detect suspicious outbound data movement using a heuristic rule that watches for an HTTP POST request whose headers claim a plaintext body but whose body content shows a high entropy value, which suggests compression or encryption. Further, the system is configured to detect abnormal network transactions based on one or more anomalies using techniques including those described herein. For example, the system is configured to detect transactions falling outside of a determined behavior pattern, such as a large amount of encrypted file transfers to a host for the first time. The system is also configured to detect and record all malware infections and command-and-control ("CNC") activities in a network using techniques including those described herein.
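The entropy heuristic for a POST claiming a plaintext body can be sketched as follows; the 7-bits-per-byte threshold is an illustrative assumption:

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of `data` in bits per byte (0.0 .. 8.0)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def suspicious_plaintext_post(content_type: str, body: bytes, threshold=7.0):
    """Heuristic: headers claim plaintext but the body entropy is high,
    suggesting the body is actually compressed or encrypted."""
    return content_type.startswith("text/plain") and shannon_entropy(body) > threshold
```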
- a security policy is used to define the type of traffic patterns and/or behaviors that are detected and the alerts generated for a given security posture.
- a security posture may include a range from DefCon1 to DefCon5.
- DefCon5 is the highest security posture, indicating a high awareness or sensitivity to anomalies or other detections.
- DefCon3 may be considered a normal posture indicating an average level of awareness or sensitivity to anomalies or other detections.
- the security posture levels under DefCon3 would indicate a lower level of awareness or sensitivity to anomalies than DefCon3.
- anomaly events such as detecting sensitive data movement, suspicious outbound data movement, or abnormal network transactions will be correlated with detected events such as detected malware infections and command-and-control activities.
- the system will generate an incident alert if an IP address of an anomaly event matches with an IP address related to the detected events or an IP address included in a security policy.
- when the security posture is DefCon5, the system is configured to generate an incident alert based on any anomaly event, without requiring an IP address to be included in the security policy or the detection of one or more detected events.
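A sketch of posture-gated alert generation, assuming integer posture levels with 5 (DefCon5) as the highest; the function shape and level encoding are illustrative:

```python
def should_alert(posture: int, anomaly_ip: str,
                 detected_event_ips: set, policy_ips: set) -> bool:
    """At DefCon5, any anomaly raises an incident; at lower postures the
    anomaly's IP must correlate with a detected event or a policy IP."""
    if posture >= 5:
        return True
    return anomaly_ip in detected_event_ips or anomaly_ip in policy_ips
```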
- FIG. 4 illustrates an embodiment of a client, an end-user device, or a digital device that includes one or more processing units (CPUs) 402 , one or more network or other communications interfaces 404 , memory 414 , and one or more communication buses 406 for interconnecting these components.
- the client may include a user interface 408 comprising a display device 410 , a keyboard 412 , a touchscreen 413 and/or other input/output device.
- Memory 414 may include high speed random access memory and may also include non-volatile memory, such as one or more magnetic or optical storage disks.
- the memory 414 may include mass storage that is remotely located from CPUs 402 .
- memory 414 or alternatively one or more storage devices (e.g., one or more nonvolatile storage devices) within memory 414 , includes a computer readable storage medium.
- the memory 414 may store the following elements, or a subset or superset of such elements:
- the client may be any device that includes, but is not limited to, a mobile phone, a computer, a tablet computer, a personal digital assistant (PDA) or other mobile device.
- FIG. 5 illustrates an embodiment of a server or a network device, such as a system that implements the methods described herein.
- the system includes one or more processing units (CPUs) 504 , one or more communication interface 506 , memory 508 , and one or more communication buses 510 for interconnecting these components.
- the system 502 may optionally include a user interface 526 comprising a display device 528 , a keyboard 530 , a touchscreen 532 , and/or other input/output devices.
- Memory 508 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic or optical storage disks.
- the memory 508 may include mass storage that is remotely located from CPUs 504 .
- memory 508 or alternatively one or more storage devices (e.g., one or more nonvolatile storage devices) within memory 508 , includes a computer readable storage medium.
- the memory 508 may store the following elements, or a subset or superset of such elements: an operating system 512 , a network communication module 514 , a collection module 516 , a data flagging module 518 , a virtualization module 520 , an emulation module 522 , a control module 524 , a reporting module 526 , a signature module 528 , and a quarantine module 530 .
- An operating system 512 that includes procedures for handling various basic system services and for performing hardware dependent tasks.
- a network communication module 514 (or instructions) that is used for connecting the system to other computers, clients, peers, systems or devices via the one or more communication network interfaces 506 and one or more communication networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and other type of networks.
- a collection module 516 (or instructions) for detecting one or more of any of network traffic patterns, real-time observations, first order indicators, second order indicators, indicators of a compromised entity, and other suspicious data using techniques including those described herein. Further, the collection module 516 is configured to receive network data (e.g., potentially suspicious data) from one or more sources. Network data is data or network traffic that is provided on a network from one digital device to another. The collection module, for an embodiment, is configured to generate one or more behavior profiles for a network device using techniques including those described herein.
- the collection module 516 may flag the network data as suspicious data based on, for example, whitelists, blacklists, heuristic analysis, statistical analysis, rules, atypical behavior, triggers in a honey-host, or other determinations using techniques including those described herein.
- the sources comprise data collectors configured to receive network data.
- firewalls, IPS, servers, routers, switches, access points and the like may, either individually or collectively, function as or include a data collector.
- the data collector may forward network data to the collection module 516 .
- the data collectors filter the data before providing the data to the collection module 516 .
- the data collector may be configured to collect or intercept data using techniques including those described herein.
- the data collector may be configured to follow configured rules. For example, if data is directed between two known and trustworthy sources (e.g., the data is communicated between two devices on a whitelist), the data collector may not collect the data.
- a rule may be configured to intercept a class of data (e.g., all MS Word documents that may include macros or data that may comprise a script).
- rules may be configured to target a class of attack or payload based on the type of malware attacks on the target network in the past.
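A hedged sketch of collector rule evaluation combining a whitelist-pair skip rule with an intercept rule for a configured class of data; the rule shapes and MIME types are illustrative assumptions:

```python
# Illustrative rule tables; real collectors would load these from configuration.
WHITELIST_PAIRS = {("10.0.0.1", "10.0.0.2")}
INTERCEPT_TYPES = {
    "application/msword",
    "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
}

def collect_decision(src: str, dst: str, content_type: str) -> str:
    """Decide how a collector handles a flow under the configured rules."""
    if (src, dst) in WHITELIST_PAIRS or (dst, src) in WHITELIST_PAIRS:
        return "skip"        # trusted pair: do not collect
    if content_type in INTERCEPT_TYPES:
        return "intercept"   # configured class of data (e.g., Word documents)
    return "collect"         # default: forward for central analysis
```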
- the system may make recommendations (e.g., via the reporting module 526 ) and/or configure rules for the collection module 516 and/or the data collectors.
- the data collectors may include any number of rules regarding when data is collected or what data is collected.
- the data collectors located at various positions in the network may not perform any assessment or determination regarding whether the collected data is suspicious or trustworthy.
- the data collector may collect all or a portion of the network traffic/data and provide the collected network traffic/data to the collection module 516 which may perform analysis and/or filtering using techniques including those described herein.
- a data flagging module 518 may analyze the data and/or perform one or more assessments of the collected data received by the collection module 516 and/or the data collector to determine if the intercepted network data is suspicious, using techniques including those described herein.
- the data flagging module 518 may apply rules, compare real-time observations with one or more behavior profiles, generate one or more anomalies based on a comparison of real-time observations with at least one behavior profile, and/or correlate one or more first order indicators of compromise with one or more second order indicators of compromise to generate a risk score as discussed herein to determine if the collected data should be flagged as suspicious.
- collected network traffic/data may be initially identified as suspicious until determined otherwise (e.g., associated with a whitelist) or heuristics find no reason that the network data should be flagged as suspicious.
- the data flagging module 518 may perform packet analysis to look for suspicious characteristics in the header, footer, destination IP, origin IP, payload, and the like using techniques including those described herein. Those skilled in the art will appreciate that the data flagging module 518 may perform a heuristic analysis, a statistical analysis, and/or signature identification (e.g., signature-based detection involves searching for known patterns of suspicious data within the collected data's code) to determine if the collected network data is suspicious.
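A weighted-heuristic flagging step might be sketched as follows; the checks, weights, and threshold are illustrative assumptions, not the patent's rules:

```python
# Each matched check adds weight; a packet exceeding the threshold is flagged.
CHECKS = [
    ("origin_blacklisted", 5),
    ("dst_port_uncommon", 2),
    ("payload_executable", 4),
]

def flag_packet(attrs: dict, threshold: int = 5) -> bool:
    """Flag a packet as suspicious when its matched heuristic weights
    meet or exceed the threshold."""
    score = sum(w for name, w in CHECKS if attrs.get(name))
    return score >= threshold
```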
- the data flagging module 518 may be resident at the data collector, at the system, partially at the data collector, partially at a security server 108 , or on a network device.
- a router may comprise a data collector and a data flagging module 518 configured to perform one or more heuristic assessments on the collected network data.
- a software-defined networking (“SDN”) switch is an example of a network device configured to implement data-flagging and filtering functions. If the collected network data is determined to be suspicious, the router may direct the collected data to the security server 108 .
- the data flagging module 518 may be updated.
- the security server 108 may provide new entries for a whitelist, entries for a blacklist, heuristic algorithms, statistical algorithms, updated rules, and/or new signatures to assist the data flagging module 518 to determine if network data is suspicious.
- the whitelists, entries for whitelists, blacklists, entries for blacklists, heuristic algorithms, statistical algorithms, and/or new signatures may be generated by one or more security servers 108 (e.g., via the reporting module 526 ).
- the virtualization module 520 and emulation module 522 may analyze suspicious data for untrusted behavior (e.g., malware or distributed attacks).
- the virtualization module 520 is configured to instantiate one or more virtualization environments to process and monitor suspicious data.
- the suspicious data may operate as if within a target digital device.
- the virtualization module 520 may monitor the operations of the suspicious data within the virtualization environment to determine that the suspicious data is probably trustworthy, malware, or requiring further action (e.g., further monitoring in one or more other virtualization environments and/or monitoring within one or more emulation environments).
- the virtualization module 520 may determine that suspicious data is malware but continue to process the suspicious data to generate a full picture of the malware, identify the vector of attack, determine the type, extent, and scope of the malware's payload, determine the target of the attack, and detect whether the malware is designed to work with any other malware.
- the security server 108 may extend predictive analysis to actual applications for complete validation.
- a report may be generated (e.g., by the reporting module 526 ) describing the malware, identify vulnerabilities, generate or update signatures for the malware, generate or update heuristics or statistics for malware detection, generate a report identifying the targeted information (e.g., credit card numbers, passwords, or personal information) and/or generate an incident alert as described herein.
- the virtualization module 520 may flag suspicious data as requiring further emulation and analytics in the back end if the data has suspicious behavior such as, but not limited to, preparing an executable that is not executed, performing functions without result, processing that suddenly terminates, loading data into memory that is not accessed or otherwise executed, scanning ports, or checking in specific portions of memory when those locations in memory may be empty.
- the virtualization module 520 may monitor the operations performed by or for the suspicious data and perform a variety of checks to determine if the suspicious data is behaving in a suspicious manner.
- a virtualization module is configured to instantiate a browser cooking environment such as those described herein.
- the emulation module 522 is configured to process suspicious data in an emulated environment.
- malware may require resources that are not available or may detect a virtualization environment. When malware requires unavailable resources, the malware may “go benign” or act in a non-harmful manner.
- malware may detect a virtualization environment by scanning for specific files and/or memory necessary for hypervisor, kernel, or other virtualization data to execute. If malware scans portions of its environment and determines that a virtualization environment may be running, the malware may “go benign” and either terminate or perform nonthreatening functions.
- the emulation module 522 processes data flagged as behaving suspiciously by the virtualization environment.
- the emulation module 522 may process the suspicious data in a bare metal environment (i.e., a pure hardware sandbox) where the suspicious data may have direct memory access.
- the behavior of the suspicious data as well as the behavior of the emulation environment may be monitored and/or logged to track the suspicious data's operations.
- the emulation module 522 may track what resources (e.g., applications and/or operating system files) are called in processing the suspicious data.
- the emulation module 522 records responses to the suspicious data in the emulation environment. If a divergence in the operations of the suspicious data between the virtualization environment and the emulation environment is detected, the virtualization environment may be reconfigured based on behavior seen from the emulation environment. The new configuration may include removing one or more tracing instrumentation against the suspicious data. The suspicious data may receive the expected response within the new virtualization environment and continue to operate as if the suspicious data was within the targeted digital device. The role of the emulation environment and the virtualization environment and the order of using the environments may be swapped.
- a control module 524 (or instructions) synchronizes the virtualization module 520 and the emulation module 522.
- the control module 524 synchronizes the virtualization and emulation environments.
- the control module 524 may direct the virtualization module 520 to instantiate a plurality of different virtualization environments with different resources.
- the control module 524 may compare the operations of different virtualization environments to each other in order to track points of divergence.
- the control module 524 may identify suspicious data as operating in one manner when the virtualization environment includes, but is not limited to, Internet Explorer v. 7.0 or v. 8.0, but operating in a different manner when interacting with Internet Explorer v. 9.0 (e.g., when the suspicious data exploits a vulnerability that may be present in one version of an application but not present in another version).
- the control module 524 may track operations in one or more virtualization environments and one or more emulation environments. For example, the control module 524 may identify when the suspicious data behaves differently in a virtualization environment in comparison with an emulation environment. Divergence and correlation analysis compares operations performed by or for suspicious data in one virtual environment to operations performed by or for the same suspicious data in a different virtual environment or an emulation environment. For example, the control module 524 may compare monitored steps of suspicious data in a virtual environment to monitored steps of the same suspicious data in an emulation environment. The functions or steps of or for the suspicious data may be similar but suddenly diverge.
- the suspicious data may have not detected evidence of a virtual environment in the emulation environment and, unlike the virtualization environment where the suspicious data went benign, the suspicious data undertakes actions characteristic of malware (e.g., hijacks a formerly trusted data or processes).
- the control module 524 may re-provision or instantiate a virtualization environment with information from the emulation environment (e.g., switch between user-space API hooking and kernel tracing) that may not have been present in the original instantiation of the virtualization environment.
- the suspicious data may then be monitored in the new virtualization environment to further detect suspicious behavior or untrusted behavior.
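Divergence analysis between a virtualization trace and an emulation trace can be sketched as a first-difference search; representing each trace as a list of operation names is an illustrative simplification:

```python
from itertools import zip_longest

def divergence_point(virt_ops, emu_ops):
    """Return the index of the first operation where the virtualization
    trace and the emulation trace differ, or None if the traces match."""
    for i, (v, e) in enumerate(zip_longest(virt_ops, emu_ops)):
        if v != e:
            return i
    return None
```

In the scenario above, the sample "goes benign" in the virtual environment but acts in the emulation environment, so the traces diverge at the first differing operation.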
- suspicious behavior of an object is behavior that may be untrusted or malicious.
- Untrusted behavior is behavior that indicates a significant threat.
- control module 524 is configured to compare the operations of each virtualization environment in order to identify suspicious or untrusted behavior. For example, if the suspicious data takes different operations depending on the version of a browser or other specific resource when compared to other virtualization environments, the control module 524 may identify the suspicious data as malware. Once the control module 524 identifies the suspicious data as malware or otherwise untrusted, the control module 524 may continue to monitor the virtualization environment to determine the vector of attack of the malware, the payload of the malware, and the target (e.g., control of the digital device, password access, credit card information access, and/or ability to install a bot, keylogger, and/or rootkit). For example, the operations performed by and/or for the suspicious data may be monitored in order to further identify the malware, determine untrusted acts, and log the effect or probable effect.
- a reporting module 526 (or instructions) is configured to generate a data model based on a generated list of events. Further, the reporting module 526 is configured to generate reports, such as an incident alert as described herein. For an embodiment, the reporting module 526 generates a report to identify malware, one or more vectors of attack, one or more payloads, targets of valuable data, vulnerabilities, command-and-control protocols, and/or behaviors that are characteristic of the malware. The reporting module 526 may also make recommendations to safeguard information based on the attack (e.g., move credit card information to a different digital device, require additional security such as VPN-only access, or the like).
- the reporting module 526 generates malware information that may be used to identify malware or suspicious behavior.
- the reporting module 526 may generate malware information based on the monitored information of the virtualization environment.
- the malware information may include a hash of the suspicious data or a characteristic of the operations of or for the suspicious data.
- the malware information may identify a class of suspicious behavior as being one or more steps being performed by or for suspicious data at specific times. As a result, suspicious data and/or malware may be identified based on the malware information without virtualizing or emulating an entire attack.
- a signature module 528 (or instructions) is configured to classify network traffic/data based on the list of events. Further, the signature module 528 is configured to store signature files that may be used to identify malware and/or traffic patterns. The signature files may be generated by the reporting module 526 and/or the signature module 528.
- the security server 108 may generate signatures, malware information, whitelist entries, and/or blacklist entries to share with other security servers.
- the signature module 528 may include signatures generated by other security servers or other digital devices.
- the signature module 528 may include signatures generated from a variety of different sources including, but not limited to, other security firms, antivirus companies, and/or other third-parties.
- the signature module 528 may provide signatures which are used to determine if network traffic/data is suspicious or is malware. For example, if network traffic/data matches the signature of known malware, then the network data may be classified as malware. If network data matches a signature that is suspicious, then the network data may be flagged as suspicious data. The malware and/or the suspicious data may be processed within a virtualization environment and/or the emulation environment as discussed herein.
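A minimal sketch of this classification; keying signatures by payload hash is an illustrative simplification of pattern-based signature files:

```python
import hashlib

# Illustrative signature store mapping payload hashes to classifications.
SIGNATURES = {
    hashlib.sha256(b"known-malware-sample").hexdigest(): "malware",
    hashlib.sha256(b"odd-but-unproven-sample").hexdigest(): "suspicious",
}

def classify(data: bytes) -> str:
    """Classify network data by matching its hash against known signatures;
    "suspicious" results would be routed to the sandbox environments."""
    return SIGNATURES.get(hashlib.sha256(data).hexdigest(), "unknown")
```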
- a quarantine module 530 (or instructions) is configured to quarantine suspicious data and/or network traffic/data.
- the quarantine module 530 may quarantine the suspicious data, network data, and/or any data associated with the suspicious data and/or network data.
- the quarantine module 530 may quarantine all data from a particular digital device that has been identified as being infected or possibly infected.
- the quarantine module 530 is configured to alert a security administrator or the like (e.g., via email, call, voicemail, or SMS text message) when malware or possible malware has been found.
- Although FIG. 5 illustrates system 502 as a computer, it could be a distributed system, such as a server system.
- the figures are intended more as functional descriptions of the various features which may be present in a client and a set of servers than as structural schematics of the embodiments described herein.
- items shown separately could be combined and some items could be separated.
- some items illustrated as separate modules in FIG. 5 could be implemented on a single server or client and a single item could be implemented by one or more servers or clients.
- the actual number of servers, clients, or modules used to implement a system 502 and how features are allocated among them will vary from one implementation to another, and may depend in part on the amount of data traffic that the system must handle during peak usage periods as well as during average usage periods.
- some modules or functions of modules illustrated in FIG. 4 may be implemented on one or more systems remotely located from other systems that implement other modules or functions of modules illustrated in FIG. 5 .
Abstract
Description
- This application is a continuation of U.S. patent application Ser. No. 16/437,262, filed on Jun. 11, 2019 (now U.S. Pat. No. 11,405,410), which is a continuation of U.S. patent application Ser. No. 14/936,612, filed on Nov. 9, 2015 (now U.S. Pat. No. 10,326,778), which is a continuation-in-part of U.S. patent application Ser. No. 14/629,444, filed on Feb. 23, 2015 (now U.S. Pat. No. 9,686,293), which claims priority from U.S. Provisional Patent Application No. 61/944,006, filed on Feb. 24, 2014, each of which is hereby incorporated by reference in its entirety.
- Embodiments of the invention relate to protecting computers and networks from malicious software and activities. In particular, embodiments of the invention relate to a system and method for detection of lateral movement and data exfiltration.
- As computer networks grow and the amount of data stored on computers and databases interconnected by those networks grows, so have attempts to gain unauthorized access to these computers and databases. Such attempts to gain unauthorized access to computers and databases may include methodical reconnaissance of potential victims to identify traffic patterns and existing defenses. A technique used to gain unauthorized access to computers and databases includes loading malicious software or malware onto a computer. Such malware is designed to disrupt computer operation, gather sensitive information, or to grant access to the computer to unauthorized individuals.
- As the awareness of malware increases, the techniques used to load malware onto computers (also called a malware infection) have grown more sophisticated. As a result, legacy security solutions that use a structured process (e.g., signature and heuristics matching) or analyze agent behavior in an isolated context fail to detect threat activities including, but not limited to, loading malware, lateral movement, data exfiltration, fraudulent transactions, and inside attacks.
- The failure to detect these types of threat activities on a computer or network can result in loss of high value data, downtime or destruction of infected computers and/or networks, lost productivity, and a high cost to recover and repair the infected computers and/or networks. Further, current security solutions that are focused on detecting the threat acts of infecting or penetrating a target system fail to detect the increasingly sophisticated malware on the complex business applications and network technologies used in current systems, because complex applications and protocols allow threat acts to hide more easily to evade detection. Further, the current security solutions fail to detect data exfiltration by the malware, which prevents an enterprise from properly assessing and controlling any damage that occurs from malware infecting a system. These types of detection-focused security solutions also fail to detect social-engineering attacks on employees and malware infections caused by rogue or disgruntled employees.
- A system is configured to detect a threat activity on a network. The system includes a digital device configured to detect one or more first order indicators of compromise on a network, detect one or more second order indicators of compromise on the network, generate a risk score based on correlating the first order indicators of compromise on the network with the second order indicators of compromise on the network, and generate at least one incident alert based on comparing the risk score to a threshold.
- Other features and advantages of embodiments will be apparent from the accompanying drawings and from the detailed description that follows.
- Embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
-
FIG. 1 illustrates a block diagram of a network environment that includes a system configured to detect threat activities according to an embodiment; -
FIG. 2 illustrates a flow diagram of a method to detect threat activities on a network according to an embodiment; -
FIG. 3 illustrates a block diagram of a method to detect one or more second order indicators of compromise according to an embodiment; -
FIG. 4 illustrates an embodiment of a client, an end-user device, or a digital device according to an embodiment; and -
FIG. 5 illustrates an embodiment of a system for detecting threat activities according to an embodiment. - Embodiments of a system to detect threat activities are configured to detect one or more threat activities at advanced stages of a threat kill chain, including lateral movement of malware objects inside networks and enterprises, data gathering and exfiltration, and compromised or fraudulent business transactions. The system is configured to extend protection coverage to the complete kill chain.
- A system, according to an embodiment, is configured to monitor north-south traffic and east-west traffic simultaneously. Such a system is configured with multiple collectors for monitoring north-south traffic and east-west traffic. North-south traffic monitoring will detect threat activity between internal (e.g., corporate network or LAN) devices and external (e.g., extranet or Internet) devices, including but not limited to web servers. East-west traffic monitoring will detect threat activities among internal devices, including those in a demilitarized zone (“DMZ”), otherwise known as a perimeter network. East-west traffic can contain the same set of network protocols seen on north-south boundaries, as well as network protocols meant for internal access and data sharing. Examples of east-west protocols include, but are not limited to, remote desktop protocol (“RDP”) for remote access to Windows computers, active directory services, and server message block (“SMB”) for file sharing. Embodiments of the system that deploy collectors to monitor both north-south and east-west traffic, analyze the traffic through first order and second order indicator extraction, and correlate the results in a centralized location and interface (a single pane of glass) provide the benefit of detecting threats regardless of the stage they have reached in the kill chain, whether the threat is an external or an inside attack, and whether the threat is the lateral movement of an infiltration.
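The north-south versus east-west distinction above can be sketched with a simple flow classifier. The internal prefixes, function names, and example addresses below are illustrative assumptions for this sketch, not part of the disclosed system:

```python
import ipaddress

# Illustrative internal prefixes for a corporate LAN; a deployment would
# use its own address plan, including any DMZ ranges.
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                 ipaddress.ip_network("192.168.0.0/16")]

def is_internal(ip: str) -> bool:
    """True if the address falls inside any configured internal prefix."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL_NETS)

def flow_direction(src_ip: str, dst_ip: str) -> str:
    """Classify a flow as east-west (both endpoints internal) or
    north-south (at least one endpoint external)."""
    if is_internal(src_ip) and is_internal(dst_ip):
        return "east-west"
    return "north-south"
```

A collector tagging each intercepted flow this way lets a central analyzer apply east-west-specific logic (e.g., RDP or SMB patterns) separately from boundary traffic.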
-
FIG. 1 illustrates a block diagram of a network environment 100 that includes a system configured to detect threat activities according to an embodiment. Systems and methods embodied in the network environment 100 may detect threat activity, malicious activity, identify malware, identify exploits, take preventive action, generate signatures, generate reports, determine malicious behavior, determine targeted information, recommend steps to prevent attack, and/or provide recommendations to improve security. The network environment 100 comprises a data center network 102 and a production network 104 that communicate over a communication network 106. The data center network 102 comprises a security server 108. The production network 104 comprises a plurality of end-user devices 110. The security server 108 and the end-user devices 110 may include digital devices. A digital device is any device with one or more processing units and memory. FIGS. 4 and 5 illustrate embodiments of a digital device. - The
security server 108 is a digital device configured to detect threat activities. For an embodiment, the security server 108 receives suspicious data from one or more data collectors. The data collectors may be resident within or in communication with network devices such as Intrusion Prevention System (“IPS”) collectors 112a and 112b, firewalls 114a and 114b, ICAP/WCCP collectors 116, milter mail plug-in collectors 118, switch collectors 120, and/or access points 124. Those skilled in the art will appreciate that a collector and a network device may be two separate digital devices (e.g., see F/W collector and IDS collector). - For an embodiment, data collectors may be at one or more points within the
communication network 106. A data collector, which may include a tap or span port (e.g., span port IDS collector at switch 120) for example, is configured to intercept network data from a network. The data collector may be configured to detect suspicious data. Suspicious data is any data collected by the data collector that has been flagged as suspicious by the data collector and/or any data that is to be further processed within the virtualization environment. - The data collectors may filter the data before flagging the data as suspicious and/or providing the collected data to the
security server 108. For example, the data collectors may filter out plain text but collect executables or batch files. Further, according to an embodiment, the data collectors may perform intelligent collecting. For example, data may be hashed and compared to a whitelist. The whitelist may identify data that is safe. In one example, the whitelist may identify digitally signed data by trusted entities or data received from a known trusted source as safe. Further, the whitelist may identify previously received information that has been determined to be safe. If data has been previously received, tested within the environments, and determined to be sufficiently trustworthy, the data collector may allow the data to continue through the network. Those skilled in the art will appreciate that the data collectors (or agents associated with the data collectors) may be updated by the security server 108 to help the data collectors recognize sufficiently trustworthy data and to take corrective action (e.g., quarantine and alert an administrator) if untrustworthy data is recognized. For an embodiment, if data is not identified as safe, the data collectors may flag the data as suspicious for further assessment. - Those skilled in the art will appreciate that one or more agents or other modules may monitor network traffic for common behaviors and may configure a data collector to collect data when data is directed in a manner that falls outside normal parameters. For example, the agent may determine or be configured to detect that a computer has been deactivated, a particular computer does not typically receive any data, data received by a particular computer typically comes from a limited number of sources, or a particular computer typically does not send data of a given pattern to certain destinations. If data is directed to a digital device in a manner that is not typical, the data collector may flag such data as suspicious and provide the suspicious data to the
security server 108. - Network devices include any device configured to receive and provide data over a network. Examples of network devices include, but are not limited to, routers, bridges, security appliances, firewalls, web servers, mail servers, wireless access points (e.g., hotspots), and switches. For some embodiments, network devices include
IPS collectors 112a and 112b, firewalls 114a and 114b, Internet content adaptation protocol (“ICAP”)/web cache communication protocol (“WCCP”) servers 116, devices including milter mail plug-ins 118, switches 120, and/or access points 124. The IPS collectors 112a and 112b may include any anti-malware device including IPS systems, intrusion detection and prevention systems (“IDPS”), or any other kind of network security appliances. The firewalls 114a and 114b may include software and/or hardware firewalls. For an embodiment, the firewalls 114a and 114b may be embodied within routers, access points, servers (e.g., web servers), mail filters, or appliances. - ICAP/
WCCP servers 116 include any web server or web proxy server configured to allow access to a network and/or the Internet. Network devices including milter mail plug-ins 118 may include any mail server or device that provides mail and/or filtering functions and may include digital devices that implement milter, mail transfer agents (“MTAs”), sendmail, and postfix, for example. Switches 120 include any switch or router. In some examples, the data collector may be implemented as a TAP, SPAN port, and/or intrusion detection system (“IDS”). Access points 124 include any device configured to provide wireless connectivity with one or more other digital devices. - The
production network 104 is any network that allows one or more end-user devices 110 to communicate over the communication network 106. The communication network 106 is any network that may carry data (encoded, compressed, and/or otherwise) from one digital device to another. In some examples, the communication network 106 may comprise a LAN and/or WAN. Further, the communication network 106 may comprise any number of networks. For some embodiments, the communication network 106 is the Internet. -
FIG. 1 is exemplary and does not limit systems and methods described herein to the use of only those technologies depicted. For example, data collectors may be implemented in any web or web proxy server and are not limited to only the servers that implement Internet content adaptation protocol (“ICAP”) and/or web cache communication protocol (“WCCP”). Similarly, data collectors may be implemented in any mail server and are not limited to mail servers that implement milter. Data collectors may be implemented at any point in one or more networks. - Those skilled in the art will appreciate that although
FIG. 1 depicts a limited number of digital devices, collectors, routers, access points, and firewalls, there may be any kind and number of devices. For example, there may be any number of security servers 108, end-user devices 110, intrusion prevention system (“IPS”) collectors 112a and 112b, firewalls 114a and 114b, ICAP/WCCP collectors 116, milter mail plug-ins 118, switches 120, and/or access points 124. Further, there may be any number of data center networks 102 and/or production networks 104. The data collectors can take the form of a hardware appliance, pure software running on a native operating system (“OS”), or a virtual appliance for virtualized platforms such as Amazon Web Services (“AWS”) and VMware. -
FIG. 2 illustrates a flow diagram of a method to detect threat activities on a network according to an embodiment. The method includes detecting one or more first order indicators of compromise (202). For an embodiment, one or more data collectors are configured to intercept network data between network devices. For example, a data collector is configured to detect network traffic and to determine traffic patterns across the protocol stack between network devices. A data collector, for an embodiment, is configured to determine traffic patterns between network devices on one or more protocol stack layers including, but not limited to, layers 2-7. For example, the data collector may be configured to detect and to determine traffic patterns for address resolution protocol (“ARP”) traffic, dynamic host configuration protocol (“DHCP”) traffic, and Internet control message protocol (“ICMP”) traffic between media access control (“MAC”) or Internet protocol (“IP”) addresses; transmission control protocol (“TCP”)/IP and user datagram protocol (“UDP”)/IP traffic between IP and port number pairs; up the stack, hypertext transfer protocol (“HTTP”), secure shell (“SSH”), and server message block (“SMB”) application protocols and patterns between application clients and servers; and industry-specific applications like wire transfer transaction processing and patterns between bank accounts. The data collector is configured to extract transferred file objects along with their meta information, such as HTTP headers, URLs, and source and destination IP addresses, and to transmit the file objects to a security server. - Further, the method optionally includes detecting one or more indicators of a compromised entity (204). A compromised entity includes, but is not limited to, a digital device, an application, and a rogue user on a network. 
For an embodiment, a system includes one or more data collectors configured to operate as a honey-host used to attract the interest of any one of malicious software, a rogue user, or a digital device performing threat activities. An indicator of a compromised entity includes, but is not limited to, an attempt to probe a honey-host for open ports to gain access to the honey-host, and an attempt to examine or move data on a honey-host. For example, a data collector is configured to have one or more of an interesting domain name and IP address (that is, a domain name and/or an IP address that a compromised entity would attempt to access); it is given an owner/user with interesting corporate roles, relevant data files, documents, and user activity traces. Further, a data collector, for an embodiment, is configured to log any probing, login, and access activities against this honey-host. The logged data will, e.g., identify which device, application, and user(s) have attempted any activities, at what time, along with the details of the activity. For example, if files are copied from the honey-host, a data collector is configured to log information including the file name, hash, and where the file is moved to.
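The honey-host access log described above can be sketched as a simple append-only record. The function name, field names, and example entries are illustrative assumptions, not the disclosed log format:

```python
def log_honeyhost_access(log, timestamp, src_ip, user, activity, detail):
    """Append one probing, login, or access attempt against the honey-host,
    recording which device and user acted, at what time, and the details of
    the activity (e.g., the name and hash of a copied file and where it went)."""
    log.append({
        "time": timestamp,
        "src_ip": src_ip,
        "user": user,
        "activity": activity,
        "detail": detail,
    })
    return log

# Hypothetical example: a file copied off the honey-host.
access_log = log_honeyhost_access(
    [], 1700000000, "10.0.0.5", "jdoe", "file_copy",
    {"file": "payroll.xlsx", "sha256": "…", "dest": "203.0.113.7"})
```

Each entry carries enough context (device, user, time, detail) for later correlation with first order and second order indicators.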
- Further, the method includes detecting one or more second order indicators of compromise (206). Second order indicators of compromise include, but are not limited to, behavior patterns of a network device observed from the network device and behavior patterns of an end-user device. For example, a data collector is configured to generate an event log and/or a transaction log for the network device. Further, a data collector, for an embodiment, is configured to detect behavior patterns of an end-user device, endpoint device, and/or other client peers on a network, for example as used by a user, based on network traffic, from a business activity level. An example of detecting behavior patterns of an end-user device in a software engineering environment includes detecting that an individual developer's workstation started to pull down a large amount of source code from one or more repositories while, at the same time, very little push (updating to the repositories) is taking place. This kind of suspicious condition is detected by continuous monitoring and building typical pattern profiles. In such an example, the system is configured to generate an alert when the observed behavior patterns deviate from the typical profiles. Another example of detecting behavior patterns of an end-user device arises in a bank environment in which well-known interaction patterns among the desktop computers of different functional departments are established from the business workflow using techniques including those described herein. In such an environment, a human resource manager's machine will not generally communicate with a wire-transfer processing server; any direct network connectivity outside an information technology (“IT”) maintenance window will generate an alert for a second order suspicious indicator. Thus, a data collector is configured to detect these types of behavior patterns using techniques including those described herein.
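The pull-versus-push example above can be sketched as a simple check against a profile. The function name and both thresholds are illustrative assumptions; a real profile would be learned per workstation by continuous monitoring:

```python
def repo_activity_anomalous(pulled_bytes, pushed_bytes,
                            pull_threshold=10_000_000, min_push_ratio=0.01):
    """Flag a workstation that pulls a large volume of source code while
    pushing almost nothing back to the repositories.

    pull_threshold: pulls below this volume are never flagged (illustrative).
    min_push_ratio: expected minimum push volume as a fraction of pull volume.
    """
    if pulled_bytes < pull_threshold:
        return False
    return pushed_bytes < pulled_bytes * min_push_ratio
```

A second order indicator would be raised when this returns true, to be correlated with first order indicators before alerting.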
- The method also includes generating a risk score based on correlating the one or more first order indicators of compromise with the one or more second order indicators of compromise (208). For an embodiment, the risk score is generated based on an asset value assigned to a network device or end-user device and the current security posture of the network device or end-user device. For example, if the network device is assigned a high asset value, the generated risk score would be higher for the same correlation result than for a network device assigned a lower asset value. In addition, a device having a security posture above a normal or defined average level (e.g., more sensitive to attacks) would result in a higher generated risk score for the same correlation result than a network device having a security posture at or below normal. For an embodiment, a generated risk score based on a security posture is more dynamic than a generated risk score based on an asset value. For example, based on threat intelligence in the wild (that is, threat intelligence gained through monitoring other networks), a security server is configured to determine that some group of devices or users may be subject to a targeted attack and warrant special monitoring. In such an example, the security posture can be increased for the devices or users for a given period of time (e.g., a high security posture) so that the risk score for any threat against these devices or users will have an escalation factor applied in order to prioritize the response.
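The scoring described above can be sketched as a raw correlation score scaled by the two device attributes. The function name and the multiplicative combination are illustrative assumptions; the disclosure does not fix a particular formula:

```python
def risk_score(correlation_score, asset_value=1.0, posture_escalation=1.0):
    """Scale a raw correlation score by the device's assigned asset value
    and by an escalation factor reflecting its current security posture.

    asset_value: >1.0 for high-value assets (relatively static).
    posture_escalation: >1.0 while the device is under special monitoring
    (dynamic, e.g., raised temporarily based on threat intelligence).
    """
    return correlation_score * asset_value * posture_escalation
```

The same correlation result thus yields a higher score for a high-value asset, and higher still while an escalated posture is in effect.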
- For an embodiment, a data collector is configured to correlate one or more first order indicators of compromise with the one or more second order indicators of compromise based on network patterns/data received from one or more data collectors using techniques including those described herein. Further, the data collector is configured to generate a risk score based on the correlation result using techniques including those described herein. For another embodiment, a security server is configured to correlate one or more first order indicators of compromise with the one or more second order indicators of compromise based on network patterns/data received from one or more data collectors using techniques including those described herein. Further, the security server is configured to generate a risk score based on the correlation result using techniques including those described herein. Using many data collectors communicating with a security server, a hierarchy of data aggregation and abstraction can be created to scale the coverage to larger networks and to support filtered sharing of threat intelligence data. For example, a single collector may cover the network of a single department at a given site having many departments. The data from multiple collectors of corresponding departments at the site can be aggregated to represent the entire site.
- Further, the method includes generating at least one incident alert based on comparing the risk score to a threshold (210). The incident alert includes lateral movement and data exfiltration incident alerts. For an embodiment, multiple alerts will be aggregated in time when multiple events of the same type happen to the same target device within a short period of time. The aggregation is achieved by representing the number of occurrences of the same events within the given interval by a single alert as an incident. This results in a more meaningful alert to the end user, without loss of important information, while avoiding generating many events to the user.
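The time-window aggregation above can be sketched as follows. The function name, window length, and event tuple layout are illustrative assumptions:

```python
def aggregate_alerts(events, window=300):
    """Collapse repeated (event_type, target) events whose timestamps fall
    within `window` seconds of the first occurrence into one incident that
    carries an occurrence count.

    events: iterable of (timestamp, event_type, target) tuples.
    Note: in this sketch a later burst outside the window simply replaces
    the earlier incident for the same key.
    """
    incidents = {}
    for ts, etype, target in sorted(events):
        key = (etype, target)
        if key in incidents and ts - incidents[key]["first_seen"] <= window:
            incidents[key]["count"] += 1
        else:
            incidents[key] = {"first_seen": ts, "count": 1}
    return incidents
```

Three same-type events against one target within the window thus surface as a single incident with a count of three, rather than three separate alerts.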
- In addition, an alert may be generated based on a security policy. For example, a security policy may include, but is not limited to, a watch-list of one or more critical internal IP addresses and a red-list, which includes known malicious addresses, of one or more external IP addresses. In such an example, an incident alert would be generated when a network device or end-user device communicates with an IP address on the watch-list and/or on the red-list. For an embodiment, a data collector is configured to generate at least one incident alert based on comparing the risk score to a threshold using the techniques described herein. For another embodiment, a security server is configured to generate at least one incident alert based on comparing the risk score to a threshold using the techniques described herein. Between the data collectors and the security server, a hierarchy of data aggregation and abstraction can be created to scale the coverage to larger networks and to support filtered sharing of threat intelligence data. For example, a single data collector may cover the network of a single department at a given site; data from multiple collectors of corresponding departments can be aggregated to represent the given site as a whole.
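The watch-list/red-list policy check above can be sketched as follows; the list contents, function name, and returned labels are illustrative assumptions:

```python
WATCH_LIST = {"10.0.0.9"}    # critical internal addresses (illustrative)
RED_LIST = {"203.0.113.7"}   # known malicious external addresses (illustrative)

def policy_alert(src_ip, dst_ip):
    """Return an incident label when either endpoint of a connection is on
    the watch-list or the red-list, else None."""
    for ip in (src_ip, dst_ip):
        if ip in WATCH_LIST:
            return "watch-list incident"
        if ip in RED_LIST:
            return "red-list incident"
    return None
```

Such a policy check runs alongside the risk-score threshold, so a communication with a listed address alerts even when the score alone would not.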
-
FIG. 3 illustrates a block diagram of a method to detect one or more second order indicators of compromise according to an embodiment. The method includes generating a behavior profile for at least one network device or end-user device (302). For example, a behavior profile is generated at multiple levels of activities across the protocol stack using heuristics or supervised or unsupervised machine learning. An example of a behavior profile for an end-user device includes, but is not limited to, a network user's role in a network and authorization to use the end-user device on the network, one or more activities a network user performs on the end-user device, a list of one or more IP addresses that the device connects to on a weekly basis, a distribution of the time duration for one or more connections, a total amount of data exchanged, a breakdown of the amount of data in each direction, and a characterization of variances in any of the above information over a period of time. The detection mechanism, such as a data collector, maintains a behavior profile on a rolling basis (a long-term behavior profile). At the same time, according to an embodiment, the detection mechanism is configured to build a real-time behavior profile, e.g., based on normalized daily stats. For an embodiment, a difference between the long-term behavior profile and the real-time behavior profile will raise an alert for a threat activity. - According to an embodiment, the behavior profiles are generated for each of the monitored network devices and end-user devices during a training phase. For an embodiment, a security server is configured to generate one or more behavior profiles based on one or more of network traffic patterns, behavior patterns of a network device, and behavior patterns of an end-user device. 
For another embodiment, a data collector is configured to generate one or more behavior profiles based on one or more of network traffic patterns, behavior patterns of a network device, and behavior patterns of an end-user device.
- The method also includes detecting one or more real-time observations (304). These real-time observations are represented as real-time behavior profiles. Detecting one or more real-time observations, according to an embodiment, is part of a production phase. For an embodiment, a data collector is configured to intercept network data and analyze network traffic patterns across the protocol stack in real time to generate real-time observations using techniques including those described herein. For another embodiment, a data collector is configured to intercept network data and transmit the network data to a security server using techniques including those described herein. The security server is configured to receive the network data and analyze the network traffic patterns across the protocol stack in real time to generate real-time observations.
- Moreover, the method includes comparing the one or more real-time observations to at least one behavior profile (306), i.e., a long-term behavior profile. For example, real-time observations are compared against one or more generated behavior profiles. Comparing the one or more real-time observations to at least one behavior profile, according to an embodiment, is part of a production phase. For an embodiment, a data collector is configured to compare the real-time observations of a network device and/or an end-user device to at least one behavior profile generated for the network device or the end-user device using techniques including those described herein. For another embodiment, a security server is configured to compare the real-time observations of a network device or an end-user device to at least one behavior profile generated for the network device or the end-user device based on information received from one or more data collectors using techniques including those described herein.
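The comparison of real-time observations against a long-term profile can be sketched as a per-metric deviation computation. The function name, the relative-deviation measure, and the treatment of unseen metrics are illustrative assumptions:

```python
def deviation(long_term, real_time):
    """Relative deviation of each real-time metric from the long-term
    profile. Metrics absent from (or zero in) the long-term profile are
    treated as maximally deviant, since the behavior is entirely new."""
    scores = {}
    for metric, observed in real_time.items():
        baseline = long_term.get(metric)
        if not baseline:
            scores[metric] = 1.0
        else:
            scores[metric] = abs(observed - baseline) / baseline
    return scores
```

Downstream, per-metric deviations above a tuned threshold would become network, device, or user anomalies for correlation.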
- Further, the method includes generating one or more anomalies based on a comparison of the real-time observations to the behavior profile (308). Generating one or more anomalies based on a comparison of the real-time observations to the behavior profile, according to an embodiment, is part of a production phase. The anomalies generated include, but are not limited to, network anomalies, device anomalies, and user anomalies. A network anomaly includes the real-time networking traffic pattern observations that differ from the one or more behavior profiles. A device anomaly includes the real-time device behavior observations that differ from the one or more behavior profiles of the network device or the end-user device. A user anomaly includes the real-time observations of behavior of an end-user device under the control of a network user that differs from the one or more behavior profiles of the end-user device under control of the user. In addition to comparing the real-time observations against the one or more behavior profiles, the real-time observations may be compared against one or more of a network anomaly, a device anomaly, and a user anomaly. For an embodiment, a security server is configured to generate one or more anomalies based on data received from one or more data collectors including the data described herein. Further, the anomalies, according to an embodiment, are correlated with the first order indicators of compromise based on one or more of a security policy, an IP address, a device fingerprint, a business application, and a user identity. For another embodiment, a data collector is configured to generate one or more anomalies using techniques including those described herein.
- Further, one or more indicators of a compromised entity detected by one or more honey-hosts are correlated with both one or more first order indicators and one or more second order indicators to identify a network device or an application that may have been compromised, and to identify a user, such as a rogue user, on an end-user device initiating suspicious activities on the network. Some examples of suspicious activities include probing for high-value assets or gathering sensitive information from the honey-hosts. An exemplary implementation includes a system configured to detect sensitive data using expression matching using techniques including those described herein. For example, sensitive data includes social security numbers, credit card numbers, and documents including keywords. The system according to an embodiment is configured to detect suspicious outbound data movement using techniques including those described herein. For an embodiment, the system is configured to detect suspicious outbound data movement using a heuristic rule that watches for an HTTP POST request whose headers claim a plaintext body but whose body content shows a high entropy value, which suggests compression or encryption. Further, the system is configured to detect abnormal network transactions based on one or more anomalies using techniques including those described herein. For example, the system is configured to detect transactions falling outside of a determined behavior pattern, such as a large amount of encrypted file transfers to a host for the first time. The system is also configured to detect and record all malware infections and command-and-control (“CNC”) activities in a network using techniques including those described herein.
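The entropy heuristic above can be sketched directly: compute the Shannon entropy of the POST body and compare it to the declared content type. The function names and the threshold value are illustrative assumptions:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte. Compressed or encrypted content
    approaches 8; typical plain text sits well below that."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def suspicious_post(content_type: str, body: bytes, threshold=7.0) -> bool:
    """Flag an HTTP POST whose headers claim plain text but whose body
    entropy suggests compression or encryption (threshold is illustrative)."""
    return content_type.startswith("text/plain") and shannon_entropy(body) > threshold
```

A uniform byte distribution (entropy 8) claimed as `text/plain` trips the rule; ordinary prose does not.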
- Moreover, the system is configured to support a general policy framework. For example, a security policy is used to define the type of traffic patterns and/or behaviors that are detected and the alerts generated for a given security posture. For example, a security posture may include a range from DefCon1 to DefCon5. In such an example, DefCon5 is the highest security posture, indicating a high awareness or sensitivity to anomalies or other detections, and DefCon3 may be considered a normal posture, indicating an average level of awareness or sensitivity to anomalies or other detections. The security posture levels under DefCon3 would indicate a lower level of awareness or sensitivity to anomalies than DefCon3.
- For a system configured to use a security posture as described above, under DefCon3, anomaly events such as detected sensitive data movement, suspicious outbound data movement, or abnormal network transactions are correlated with detected events such as detected malware infections and command-and-control activities. The system generates an incident alert if an IP address of an anomaly event matches an IP address related to the detected events or an IP address included in a security policy. When the security posture is DefCon5, the system is configured to generate an incident alert based on any anomaly event, without requiring an IP address to be included in the security policy or the detection of one or more detected events.
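The posture-dependent alerting rule above can be condensed into a short sketch. The numeric posture encoding, the function name, and the parameter names are hypothetical, chosen only to illustrate the correlation logic:

```python
def should_alert(posture: int, anomaly_ip: str,
                 detected_event_ips: set, policy_ips: set) -> bool:
    """DefCon5 (posture 5): alert on any anomaly event.
    DefCon3 and below: alert only when the anomaly's IP address matches an IP
    tied to a detected event (e.g., a malware or CNC detection) or an IP
    listed in the security policy."""
    if posture >= 5:
        return True
    return anomaly_ip in detected_event_ips or anomaly_ip in policy_ips
```

Under this sketch, the same anomaly event can be silently recorded at a normal posture yet raised as an incident alert at the heightened posture, which is the behavior the policy framework is meant to enable.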
-
FIG. 4 illustrates an embodiment of a client, an end-user device, or a digital device that includes one or more processing units (CPUs) 402, one or more network or other communications interfaces 404, memory 414, and one or more communication buses 406 for interconnecting these components. The client may include a user interface 408 comprising a display device 410, a keyboard 412, a touchscreen 413 and/or other input/output devices. Memory 414 may include high speed random access memory and may also include non-volatile memory, such as one or more magnetic or optical storage disks. The memory 414 may include mass storage that is remotely located from CPUs 402. Moreover, memory 414, or alternatively one or more storage devices (e.g., one or more nonvolatile storage devices) within memory 414, includes a computer readable storage medium. The memory 414 may store the following elements, or a subset or superset of such elements:
- an operating system 416 that includes procedures for handling various basic system services and for performing hardware dependent tasks;
- a network communication module 418 (or instructions) that is used for connecting the client to other computers, clients, servers, systems or devices via the one or more communications network interfaces 404 and one or more communications networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and other types of networks;
- a client application 420 including, but not limited to, a web browser, a document viewer and other applications; and
- a webpage 422 including one generated by the client application 420 configured to receive a user input to communicate across a network with other computers or devices.
- According to an embodiment, the client may be any device including, but not limited to, a mobile phone, a computer, a tablet computer, a personal digital assistant (PDA), or other mobile device.
-
FIG. 5 illustrates an embodiment of a server or a network device, such as a system that implements the methods described herein. The system, according to an embodiment, includes one or more processing units (CPUs) 504, one or more communication interfaces 506, memory 508, and one or more communication buses 510 for interconnecting these components. The system 502 may optionally include a user interface 526 comprising a display device 528, a keyboard 530, a touchscreen 532, and/or other input/output devices. Memory 508 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic or optical storage disks. The memory 508 may include mass storage that is remotely located from CPUs 504. Moreover, memory 508, or alternatively one or more storage devices (e.g., one or more nonvolatile storage devices) within memory 508, includes a computer readable storage medium. The memory 508 may store the following elements, or a subset or superset of such elements: an operating system 512, a network communication module 514, a collection module 516, a data flagging module 518, a virtualization module 520, an emulation module 522, a control module 524, a reporting module 526, a signature module 528, and a quarantine module 530. An operating system 512 that includes procedures for handling various basic system services and for performing hardware dependent tasks. A network communication module 514 (or instructions) that is used for connecting the system to other computers, clients, peers, systems or devices via the one or more communication network interfaces 506 and one or more communication networks, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and other types of networks.
- A collection module 516 (or instructions) for detecting one or more of any of network traffic patterns, real-time observations, first order indicators, second order indicators, indicators of a compromised entity, and other suspicious data using techniques including those described herein. Further, the collection module 516 is configured to receive network data (e.g., potentially suspicious data) from one or more sources. Network data is data or network traffic that is provided on a network from one digital device to another. The collection module, for an embodiment, is configured to generate one or more behavior profiles for a network device using techniques including those described herein. The collection module 516 may flag the network data as suspicious data based on, for example, whitelists, blacklists, heuristic analysis, statistical analysis, rules, atypical behavior, triggers in a honey-host, or other determinations using techniques including those described herein. In some embodiments, the sources comprise data collectors configured to receive network data. For example, firewalls, IPS, servers, routers, switches, access points and the like may, either individually or collectively, function as or include a data collector. The data collector may forward network data to the collection module 516. - For an embodiment, the data collectors filter the data before providing the data to the
collection module 516. For example, the data collector may be configured to collect or intercept data using techniques including those described herein. In some embodiments, the data collector may be configured to follow configured rules. For example, if data is directed between two known and trustworthy sources (e.g., the data is communicated between two devices on a whitelist), the data collector may not collect the data. In various embodiments, a rule may be configured to intercept a class of data (e.g., all MS Word documents that may include macros or data that may comprise a script). In some embodiments, rules may be configured to target a class of attack or payload based on the type of malware attacks on the target network in the past. In some embodiments, the system may make recommendations (e.g., via the reporting module 526) and/or configure rules for the collection module 516 and/or the data collectors. Those skilled in the art will appreciate that the data collectors may include any number of rules regarding when data is collected or what data is collected. - For an embodiment, the data collectors located at various positions in the network may not perform any assessment or determination regarding whether the collected data is suspicious or trustworthy. For example, the data collector may collect all or a portion of the network traffic/data and provide the collected network traffic/data to the
collection module 516, which may perform analysis and/or filtering using techniques including those described herein. - A data flagging module 518 (or instructions) may analyze the data and/or perform one or more assessments on the collected data received by the
collection module 516 and/or the data collector to determine if the intercepted network data is suspicious using techniques including those described herein. The data flagging module 518 may apply rules, compare real-time observations with one or more behavior profiles, generate one or more anomalies based on a comparison of real-time observations with at least one behavior profile, and/or correlate one or more first order indicators of compromise with one or more second order indicators of compromise to generate a risk score as discussed herein to determine if the collected data should be flagged as suspicious. - For an embodiment, collected network traffic/data may be initially identified as suspicious until determined otherwise (e.g., associated with a whitelist) or heuristics find no reason that the network data should be flagged as suspicious. The
data flagging module 518 may perform packet analysis to look for suspicious characteristics in the header, footer, destination IP, origin IP, payload, and the like using techniques including those described herein. Those skilled in the art will appreciate that the data flagging module 518 may perform a heuristic analysis, a statistical analysis, and/or signature identification (e.g., signature-based detection involves searching for known patterns of suspicious data within the collected data's code) to determine if the collected network data is suspicious. A machine-learning based classification model may also be applied for the determination. - The
data flagging module 518 may be resident at the data collector, at the system, partially at the data collector, partially at a security server 108, or on a network device. For example, a router may comprise a data collector and a data flagging module 518 configured to perform one or more heuristic assessments on the collected network data. A software-defined networking (“SDN”) switch is an example of a network device configured to implement data-flagging and filtering functions. If the collected network data is determined to be suspicious, the router may direct the collected data to the security server 108. - For an embodiment, the
data flagging module 518 may be updated. In one example, the security server 108 may provide new entries for a whitelist, entries for a blacklist, heuristic algorithms, statistical algorithms, updated rules, and/or new signatures to assist the data flagging module 518 to determine if network data is suspicious. The whitelists, entries for whitelists, blacklists, entries for blacklists, heuristic algorithms, statistical algorithms, and/or new signatures may be generated by one or more security servers 108 (e.g., via the reporting module 526). - The
virtualization module 520 and emulation module 522 may analyze suspicious data for untrusted behavior (e.g., malware or distributed attacks). The virtualization module 520 is configured to instantiate one or more virtualization environments to process and monitor suspicious data. Within the virtualization environment, the suspicious data may operate as if within a target digital device. The virtualization module 520 may monitor the operations of the suspicious data within the virtualization environment to determine whether the suspicious data is probably trustworthy, is malware, or requires further action (e.g., further monitoring in one or more other virtualization environments and/or monitoring within one or more emulation environments). - For an embodiment, the
virtualization module 520 may determine that suspicious data is malware but continue to process the suspicious data to generate a full picture of the malware, identify the vector of attack, determine the type, extent, and scope of the malware's payload, determine the target of the attack, and detect whether the malware is designed to work with any other malware. In this way, the security server 108 may extend predictive analysis to actual applications for complete validation. A report may be generated (e.g., by the reporting module 526) describing the malware, identifying vulnerabilities, generating or updating signatures for the malware, generating or updating heuristics or statistics for malware detection, identifying the targeted information (e.g., credit card numbers, passwords, or personal information), and/or generating an incident alert as described herein. - For an embodiment, the
virtualization module 520 may flag suspicious data as requiring further emulation and analytics in the back end if the data exhibits suspicious behavior such as, but not limited to, preparing an executable that is not executed, performing functions without result, processing that suddenly terminates, loading data into memory that is not accessed or otherwise executed, scanning ports, or checking specific portions of memory when those locations in memory may be empty. The virtualization module 520 may monitor the operations performed by or for the suspicious data and perform a variety of checks to determine if the suspicious data is behaving in a suspicious manner. Further, a virtualization module is configured to instantiate a browser cooking environment such as those described herein. - The
emulation module 522 is configured to process suspicious data in an emulated environment. Those skilled in the art will appreciate that malware may require resources that are not available or may detect a virtualization environment. When malware requires unavailable resources, the malware may “go benign” or act in a non-harmful manner. In another example, malware may detect a virtualization environment by scanning for specific files and/or memory necessary for hypervisor, kernel, or other virtualization data to execute. If malware scans portions of its environment and determines that a virtualization environment may be running, the malware may “go benign” and either terminate or perform nonthreatening functions. - For an embodiment, the
emulation module 522 processes data flagged as behaving suspiciously by the virtualization environment. The emulation module 522 may process the suspicious data in a bare metal environment (i.e., a pure hardware sandbox) where the suspicious data may have direct memory access. The behavior of the suspicious data as well as the behavior of the emulation environment may be monitored and/or logged to track the suspicious data's operations. For example, the emulation module 522 may track what resources (e.g., applications and/or operating system files) are called in processing the suspicious data. - For an embodiment, the
emulation module 522 records responses to the suspicious data in the emulation environment. If a divergence in the operations of the suspicious data between the virtualization environment and the emulation environment is detected, the virtualization environment may be reconfigured based on behavior seen from the emulation environment. The new configuration may include removing one or more tracing instrumentations applied against the suspicious data. The suspicious data may then receive the expected response within the new virtualization environment and continue to operate as if the suspicious data were within the targeted digital device. The roles of the emulation environment and the virtualization environment, and the order in which the environments are used, may be swapped. - A control module 524 (or instructions) synchronizes the virtualization module 520 and the emulation module 522. For an embodiment, the control module 524 synchronizes the virtualization and emulation environments. For example, the control module 524 may direct the virtualization module 520 to instantiate a plurality of different virtualization environments with different resources. The control module 524 may compare the operations of different virtualization environments to each other in order to track points of divergence. For example, the control module 524 may identify suspicious data as operating in one manner when the virtualization environment includes, but is not limited to, Internet Explorer v. 7.0 or v. 8.0, but operating in a different manner when interacting with Internet Explorer v. 9.0 (e.g., when the suspicious data exploits a vulnerability that may be present in one version of an application but not present in another version). - The
control module 524 may track operations in one or more virtualization environments and one or more emulation environments. For example, the control module 524 may identify when the suspicious data behaves differently in a virtualization environment in comparison with an emulation environment. Divergence and correlation analysis is when operations performed by or for suspicious data in a virtual environment are compared to operations performed by or for suspicious data in a different virtual environment or emulation environment. For example, the control module 524 may compare monitored steps of suspicious data in a virtual environment to monitored steps of the same suspicious data in an emulation environment. The functions or steps of or for the suspicious data may be similar but suddenly diverge. In one example, the suspicious data may not have detected evidence of a virtual environment in the emulation environment and, unlike in the virtualization environment where the suspicious data went benign, the suspicious data undertakes actions characteristic of malware (e.g., hijacks formerly trusted data or processes). - When divergence is detected and further observation is needed, the
control module 524 may re-provision or instantiate a virtualization environment with information from the emulation environment (e.g., switch between user-space API hooking and kernel tracing) that may not have been present in the original instantiation of the virtualization environment. The suspicious data may then be monitored in the new virtualization environment to further detect suspicious behavior or untrusted behavior. Those skilled in the art will appreciate that suspicious behavior of an object is behavior that may be untrusted or malicious. Untrusted behavior is behavior that indicates a significant threat. - For an embodiment, the
control module 524 is configured to compare the operations of each virtualization environment in order to identify suspicious or untrusted behavior. For example, if the suspicious data takes different operations depending on the version of a browser or other specific resource when compared to other virtualization environments, the control module 524 may identify the suspicious data as malware. Once the control module 524 identifies the suspicious data as malware or otherwise untrusted, the control module 524 may continue to monitor the virtualization environment to determine the vector of attack of the malware, the payload of the malware, and the target (e.g., control of the digital device, password access, credit card information access, and/or ability to install a bot, keylogger, and/or rootkit). For example, the operations performed by and/or for the suspicious data may be monitored in order to further identify the malware, determine untrusted acts, and log the effect or probable effect. - A reporting module 526 (or instructions) is configured to generate a data model based on a generated list of events. Further, a
reporting module 526 is configured to generate reports, such as an incident alert, as described herein. For an embodiment, the reporting module 526 generates a report to identify malware, one or more vectors of attack, one or more payloads, targets of valuable data, vulnerabilities, command and control protocols, and/or behaviors that are characteristic of the malware. The reporting module 526 may also make recommendations to safeguard information based on the attack (e.g., move credit card information to a different digital device, require additional security such as VPN access only, or the like). - For an embodiment, the
reporting module 526 generates malware information that may be used to identify malware or suspicious behavior. For example, the reporting module 526 may generate malware information based on the monitored information of the virtualization environment. The malware information may include a hash of the suspicious data or a characteristic of the operations of or for the suspicious data. In one example, the malware information may identify a class of suspicious behavior as being one or more steps being performed by or for suspicious data at specific times. As a result, suspicious data and/or malware may be identified based on the malware information without virtualizing or emulating an entire attack. - A signature module 528 (or instructions) is configured to classify network traffic/data based on said list of events. Further, a
signature module 528 is configured to store signature files that may be used to identify malware and/or traffic patterns. The signature files may be generated by the reporting module 526 and/or the signature module 528. In various embodiments, the security server 108 may generate signatures, malware information, whitelist entries, and/or blacklist entries to share with other security servers. As a result, the signature module 528 may include signatures generated by other security servers or other digital devices. Those skilled in the art will appreciate that the signature module 528 may include signatures generated from a variety of different sources including, but not limited to, other security firms, antivirus companies, and/or other third parties. - For an embodiment, the
signature module 528 may provide signatures which are used to determine if network traffic/data is suspicious or is malware. For example, if network traffic/data matches the signature of known malware, then the network data may be classified as malware. If network data matches a signature that is suspicious, then the network data may be flagged as suspicious data. The malware and/or the suspicious data may be processed within a virtualization environment and/or the emulation environment as discussed herein. - A quarantine module 530 (or instructions) is configured to quarantine suspicious data and/or network traffic/data. For an embodiment, when the
security server 108 identifies malware or probable malware, the quarantine module 530 may quarantine the suspicious data, network data, and/or any data associated with the suspicious data and/or network data. For example, the quarantine module 530 may quarantine all data from a particular digital device that has been identified as being infected or possibly infected. For an embodiment, the quarantine module 530 is configured to alert a security administrator or the like (e.g., via email, call, voicemail, or SMS text message) when malware or possible malware has been found. - Although
FIG. 5 illustrates system 502 as a computer, it could be a distributed system, such as a server system. The figures are intended more as functional descriptions of the various features which may be present in a client and a set of servers than as structural schematics of the embodiments described herein. Thus, one of ordinary skill in the art would understand that items shown separately could be combined and some items could be separated. For example, some items illustrated as separate modules in FIG. 5 could be implemented on a single server or client, and a single item could be implemented by one or more servers or clients. The actual number of servers, clients, or modules used to implement a system 502, and how features are allocated among them, will vary from one implementation to another, and may depend in part on the amount of data traffic that the system must handle during peak usage periods as well as during average usage periods. In addition, some modules or functions of modules illustrated in FIG. 5 may be implemented on one or more systems remotely located from other systems that implement other modules or functions of modules illustrated in FIG. 5. - In the foregoing specification, specific exemplary embodiments of the invention have been described. It will, however, be evident that various modifications and changes may be made thereto. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Claims (21)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/816,040 US11902303B2 (en) | 2014-02-24 | 2022-07-29 | System and method for detecting lateral movement and data exfiltration |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201461944006P | 2014-02-24 | 2014-02-24 | |
US14/629,444 US9686293B2 (en) | 2011-11-03 | 2015-02-23 | Systems and methods for malware detection and mitigation |
US14/936,612 US10326778B2 (en) | 2014-02-24 | 2015-11-09 | System and method for detecting lateral movement and data exfiltration |
US16/437,262 US11405410B2 (en) | 2014-02-24 | 2019-06-11 | System and method for detecting lateral movement and data exfiltration |
US17/816,040 US11902303B2 (en) | 2014-02-24 | 2022-07-29 | System and method for detecting lateral movement and data exfiltration |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/437,262 Continuation US11405410B2 (en) | 2014-02-24 | 2019-06-11 | System and method for detecting lateral movement and data exfiltration |
Publications (2)
Publication Number | Publication Date |
---|---|
US20230030659A1 true US20230030659A1 (en) | 2023-02-02 |
US11902303B2 US11902303B2 (en) | 2024-02-13 |
Family
ID=67984346
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/437,262 Active 2035-04-12 US11405410B2 (en) | 2014-02-24 | 2019-06-11 | System and method for detecting lateral movement and data exfiltration |
US17/816,040 Active US11902303B2 (en) | 2014-02-24 | 2022-07-29 | System and method for detecting lateral movement and data exfiltration |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/437,262 Active 2035-04-12 US11405410B2 (en) | 2014-02-24 | 2019-06-11 | System and method for detecting lateral movement and data exfiltration |
Country Status (1)
Country | Link |
---|---|
US (2) | US11405410B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230109224A1 (en) * | 2020-09-28 | 2023-04-06 | T-Mobile Usa, Inc. | Network security system including a multi-dimensional domain name system to protect against cybersecurity threats |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11405410B2 (en) | 2014-02-24 | 2022-08-02 | Cyphort Inc. | System and method for detecting lateral movement and data exfiltration |
US10652263B2 (en) * | 2014-07-21 | 2020-05-12 | David Paul Heilig | Identifying malware-infected network devices through traffic monitoring |
US10999304B2 (en) | 2018-04-11 | 2021-05-04 | Palo Alto Networks (Israel Analytics) Ltd. | Bind shell attack detection |
US11184377B2 (en) | 2019-01-30 | 2021-11-23 | Palo Alto Networks (Israel Analytics) Ltd. | Malicious port scan detection using source profiles |
US11184376B2 (en) | 2019-01-30 | 2021-11-23 | Palo Alto Networks (Israel Analytics) Ltd. | Port scan detection using destination profiles |
US11184378B2 (en) | 2019-01-30 | 2021-11-23 | Palo Alto Networks (Israel Analytics) Ltd. | Scanner probe detection |
US11343263B2 (en) * | 2019-04-15 | 2022-05-24 | Qualys, Inc. | Asset remediation trend map generation and utilization for threat mitigation |
US11509686B2 (en) * | 2019-05-14 | 2022-11-22 | Vmware, Inc. | DHCP-communications monitoring by a network controller in software defined network environments |
US20210194915A1 (en) * | 2019-12-03 | 2021-06-24 | Sonicwall Inc. | Identification of potential network vulnerability and security responses in light of real-time network risk assessment |
US11509680B2 (en) * | 2020-09-30 | 2022-11-22 | Palo Alto Networks (Israel Analytics) Ltd. | Classification of cyber-alerts into security incidents |
US11874933B2 (en) | 2021-12-29 | 2024-01-16 | Qualys, Inc. | Security event modeling and threat detection using behavioral, analytical, and threat intelligence attributes |
US11799880B2 (en) | 2022-01-10 | 2023-10-24 | Palo Alto Networks (Israel Analytics) Ltd. | Network adaptive alert prioritization system |
Citations (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030084349A1 (en) * | 2001-10-12 | 2003-05-01 | Oliver Friedrichs | Early warning system for network attacks |
US20040128543A1 (en) * | 2002-12-31 | 2004-07-01 | International Business Machines Corporation | Method and system for morphing honeypot with computer security incident correlation |
US20040168173A1 (en) * | 1999-11-15 | 2004-08-26 | Sandia National Labs | Method and apparatus providing deception and/or altered execution of logic in an information system |
US20050166072A1 (en) * | 2002-12-31 | 2005-07-28 | Converse Vikki K. | Method and system for wireless morphing honeypot |
US20060018466A1 (en) * | 2004-07-12 | 2006-01-26 | Architecture Technology Corporation | Attack correlation using marked information |
US20060101516A1 (en) * | 2004-10-12 | 2006-05-11 | Sushanthan Sudaharan | Honeynet farms as an early warning system for production networks |
US20060161816A1 (en) * | 2004-12-22 | 2006-07-20 | Gula Ronald J | System and method for managing events |
US20080098476A1 (en) * | 2005-04-04 | 2008-04-24 | Bae Systems Information And Electronic Systems Integration Inc. | Method and Apparatus for Defending Against Zero-Day Worm-Based Attacks |
US20080256619A1 (en) * | 2007-04-16 | 2008-10-16 | Microsoft Corporation | Detection of adversaries through collection and correlation of assessments |
US20090028135A1 (en) * | 2007-07-27 | 2009-01-29 | Redshift Internetworking, Inc. | System and method for unified communications threat management (uctm) for converged voice, video and multi-media over ip flows |
US7854003B1 (en) * | 2004-03-19 | 2010-12-14 | Verizon Corporate Services Group Inc. & Raytheon BBN Technologies Corp. | Method and system for aggregating algorithms for detecting linked interactive network connections |
US20120084866A1 (en) * | 2007-06-12 | 2012-04-05 | Stolfo Salvatore J | Methods, systems, and media for measuring computer security |
US20130097701A1 (en) * | 2011-10-18 | 2013-04-18 | Mcafee, Inc. | User behavioral risk assessment |
US8549643B1 (en) * | 2010-04-02 | 2013-10-01 | Symantec Corporation | Using decoys by a data loss prevention system to protect against unscripted activity |
US9043905B1 (en) * | 2012-01-23 | 2015-05-26 | Hrl Laboratories, Llc | System and method for insider threat detection |
US9043920B2 (en) * | 2012-06-27 | 2015-05-26 | Tenable Network Security, Inc. | System and method for identifying exploitable weak points in a network |
US20160034361A1 (en) * | 2013-04-16 | 2016-02-04 | Hewlett-Packard Development Company, L.P. | Distributed event correlation system |
US9311479B1 (en) * | 2013-03-14 | 2016-04-12 | Fireeye, Inc. | Correlation and consolidation of analytic data for holistic view of a malware attack |
US9356942B1 (en) * | 2012-03-05 | 2016-05-31 | Neustar, Inc. | Method and system for detecting network compromise |
US20170223046A1 (en) * | 2016-01-29 | 2017-08-03 | Acalvio Technologies, Inc. | Multiphase threat analysis and correlation engine |
US20170230402A1 (en) * | 2016-02-09 | 2017-08-10 | Ca, Inc. | Automated data risk assessment |
US20170339186A1 (en) * | 2016-05-22 | 2017-11-23 | Guardicore Ltd. | Protection of cloud-provider system using scattered honeypots |
US20180337941A1 (en) * | 2017-05-18 | 2018-11-22 | Qadium, Inc. | Correlation-driven threat assessment and remediation |
US10277629B1 (en) * | 2016-12-20 | 2019-04-30 | Symantec Corporation | Systems and methods for creating a deception computing system |
US10404728B2 (en) * | 2016-09-13 | 2019-09-03 | Cisco Technology, Inc. | Learning internal ranges from network traffic data to augment anomaly detection systems |
US10601848B1 (en) * | 2017-06-29 | 2020-03-24 | Fireeye, Inc. | Cyber-security system and method for weak indicator detection and correlation to generate strong indicators |
US10855700B1 (en) * | 2017-06-29 | 2020-12-01 | Fireeye, Inc. | Post-intrusion detection of cyber-attacks during lateral movement within networks |
US11075930B1 (en) * | 2018-06-27 | 2021-07-27 | Fireeye, Inc. | System and method for detecting repetitive cybersecurity attacks constituting an email campaign |
US11637857B1 (en) * | 2004-04-01 | 2023-04-25 | Fireeye Security Holdings Us Llc | System and method for detecting malicious traffic using a virtual machine configured with a select software environment |
Family Cites Families (142)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2002003219A1 (en) | 2000-06-30 | 2002-01-10 | Plurimus Corporation | Method and system for monitoring online computer network behavior and creating online behavior profiles |
GB0022485D0 (en) | 2000-09-13 | 2000-11-01 | Apl Financial Services Oversea | Monitoring network activity |
US20060265746A1 (en) * | 2001-04-27 | 2006-11-23 | Internet Security Systems, Inc. | Method and system for managing computer security information |
US7237264B1 (en) * | 2001-06-04 | 2007-06-26 | Internet Security Systems, Inc. | System and method for preventing network misuse |
DE60316543T2 (en) | 2002-03-29 | 2008-07-03 | Global Dataguard, Inc., Dallas | ADAPTIVE BEHAVIORAL INTRUSION DETECTION |
US20040111632A1 (en) | 2002-05-06 | 2004-06-10 | Avner Halperin | System and method of virus containment in computer networks |
US7418729B2 (en) | 2002-07-19 | 2008-08-26 | Symantec Corporation | Heuristic detection of malicious computer code by page tracking |
US7114183B1 (en) * | 2002-08-28 | 2006-09-26 | Mcafee, Inc. | Network adaptive baseline monitoring system and method |
US7363656B2 (en) | 2002-11-04 | 2008-04-22 | Mazu Networks, Inc. | Event detection/anomaly correlation heuristics |
US7913303B1 (en) * | 2003-01-21 | 2011-03-22 | International Business Machines Corporation | Method and system for dynamically protecting a computer system from attack |
US7246156B2 (en) * | 2003-06-09 | 2007-07-17 | Industrial Defender, Inc. | Method and computer program product for monitoring an industrial network |
US20050081053A1 (en) | 2003-10-10 | 2005-04-14 | International Business Machines Corporation | Systems and methods for efficient computer virus detection |
US7464158B2 (en) | 2003-10-15 | 2008-12-09 | International Business Machines Corporation | Secure initialization of intrusion detection system |
US7725937B1 (en) | 2004-02-09 | 2010-05-25 | Symantec Corporation | Capturing a security breach |
US8793787B2 (en) | 2004-04-01 | 2014-07-29 | Fireeye, Inc. | Detecting malicious network content using virtual environment components |
US8375444B2 (en) | 2006-04-20 | 2013-02-12 | Fireeye, Inc. | Dynamic signature creation and enforcement |
US8204984B1 (en) | 2004-04-01 | 2012-06-19 | Fireeye, Inc. | Systems and methods for detecting encrypted bot command and control communication channels |
US8584239B2 (en) | 2004-04-01 | 2013-11-12 | Fireeye, Inc. | Virtual machine with dynamic data flow analysis |
US7779463B2 (en) | 2004-05-11 | 2010-08-17 | The Trustees Of Columbia University In The City Of New York | Systems and methods for correlating and distributing intrusion alert information among collaborating computer systems |
US20070180490A1 (en) * | 2004-05-20 | 2007-08-02 | Renzi Silvio J | System and method for policy management |
US7886293B2 (en) | 2004-07-07 | 2011-02-08 | Intel Corporation | Optimizing system behavior in a virtual machine environment |
US20060161982A1 (en) | 2005-01-18 | 2006-07-20 | Chari Suresh N | Intrusion detection system |
US7721331B1 (en) | 2005-05-19 | 2010-05-18 | Adobe Systems Inc. | Methods and apparatus for performing a pre-processing activity |
US20070143851A1 (en) * | 2005-12-21 | 2007-06-21 | Fiberlink | Method and systems for controlling access to computing resources based on known security vulnerabilities |
US7530105B2 (en) | 2006-03-21 | 2009-05-05 | 21St Century Technologies, Inc. | Tactical and strategic attack detection and prediction |
US7664626B1 (en) | 2006-03-24 | 2010-02-16 | Symantec Corporation | Ambiguous-state support in virtual machine emulators |
US7735116B1 (en) * | 2006-03-24 | 2010-06-08 | Symantec Corporation | System and method for unified threat management with a relational rules methodology |
US8151323B2 (en) | 2006-04-12 | 2012-04-03 | Citrix Systems, Inc. | Systems and methods for providing levels of access and action control via an SSL VPN appliance |
US8621607B2 (en) | 2006-05-18 | 2013-12-31 | Vmware, Inc. | Computational system including mechanisms for tracking taint |
US20140373144A9 (en) | 2006-05-22 | 2014-12-18 | Alen Capalik | System and method for analyzing unauthorized intrusion into a computer network |
US8151352B1 (en) | 2006-07-14 | 2012-04-03 | Bitdefender IPR Management Ltd. | Anti-malware emulation systems and methods |
WO2008043110A2 (en) | 2006-10-06 | 2008-04-10 | Smobile Systems, Inc. | System and method of malware sample collection on mobile networks |
US8601575B2 (en) * | 2007-03-30 | 2013-12-03 | Ca, Inc. | Statistical method and system for network anomaly detection |
US20090013405A1 (en) | 2007-07-06 | 2009-01-08 | Messagelabs Limited | Heuristic detection of malicious code |
US8060074B2 (en) | 2007-07-30 | 2011-11-15 | Mobile Iron, Inc. | Virtual instance architecture for mobile device management systems |
US8176477B2 (en) | 2007-09-14 | 2012-05-08 | International Business Machines Corporation | Method, system and program product for optimizing emulation of a suspected malware |
US9009828B1 (en) * | 2007-09-28 | 2015-04-14 | Dell SecureWorks, Inc. | System and method for identification and blocking of unwanted network traffic |
US8108912B2 (en) | 2008-05-29 | 2012-01-31 | Red Hat, Inc. | Systems and methods for management of secure data in cloud-based network |
US8381231B2 (en) | 2008-09-09 | 2013-02-19 | Dell Products L.P. | Deployment and management of virtual containers |
US7540030B1 (en) | 2008-09-15 | 2009-05-26 | Kaspersky Lab, Zao | Method and system for automatic cure against malware |
US8667583B2 (en) | 2008-09-22 | 2014-03-04 | Microsoft Corporation | Collecting and analyzing malware data |
US8533844B2 (en) | 2008-10-21 | 2013-09-10 | Lookout, Inc. | System and method for security data collection and analysis |
US9367680B2 (en) | 2008-10-21 | 2016-06-14 | Lookout, Inc. | System and method for mobile communication device application advisement |
US8997219B2 (en) | 2008-11-03 | 2015-03-31 | Fireeye, Inc. | Systems and methods for detecting malicious PDF network content |
US8850571B2 (en) | 2008-11-03 | 2014-09-30 | Fireeye, Inc. | Systems and methods for detecting malicious network content |
US8326987B2 (en) | 2008-11-12 | 2012-12-04 | Lin Yeejang James | Method for adaptively building a baseline behavior model |
CN101436967A (en) | 2008-12-23 | 2009-05-20 | Beijing University of Posts and Telecommunications | Method and system for evaluating network security situation |
US10057285B2 (en) | 2009-01-30 | 2018-08-21 | Oracle International Corporation | System and method for auditing governance, risk, and compliance using a pluggable correlation architecture |
EP2222048A1 (en) | 2009-02-24 | 2010-08-25 | BRITISH TELECOMMUNICATIONS public limited company | Detecting malicious behaviour on a computer network |
US8266698B1 (en) | 2009-03-09 | 2012-09-11 | Symantec Corporation | Using machine infection characteristics for behavior-based detection of malware |
CN101854340B (en) | 2009-04-03 | 2015-04-01 | Juniper Networks, Inc. | Behavior-based communication analysis based on access control information |
KR101493076B1 (en) | 2009-04-07 | 2015-02-12 | Samsung Electronics Co., Ltd. | Apparatus and method of preventing virus code execution through buffer overflow control |
US8868439B2 (en) | 2009-05-15 | 2014-10-21 | Microsoft Corporation | Content activity feedback into a reputation system |
US8769683B1 (en) | 2009-07-07 | 2014-07-01 | Trend Micro Incorporated | Apparatus and methods for remote classification of unknown malware |
US20110041179A1 (en) | 2009-08-11 | 2011-02-17 | F-Secure Oyj | Malware detection |
CA2675666A1 (en) | 2009-08-27 | 2009-11-05 | Ibm Canada Limited - Ibm Canada Limitee | Accelerated execution for emulated environments |
US8280830B2 (en) | 2009-08-31 | 2012-10-02 | Symantec Corporation | Systems and methods for using multiple in-line heuristics to reduce false positives |
US8375450B1 (en) | 2009-10-05 | 2013-02-12 | Trend Micro, Inc. | Zero day malware scanner |
WO2011063269A1 (en) | 2009-11-20 | 2011-05-26 | Alert Enterprise, Inc. | Method and apparatus for risk visualization and remediation |
US8555393B2 (en) | 2009-12-03 | 2013-10-08 | Verizon Patent And Licensing Inc. | Automated testing for security vulnerabilities of devices |
US8468606B2 (en) | 2009-12-08 | 2013-06-18 | Verizon Patent And Licensing Inc. | Security handling based on risk management |
US8479286B2 (en) | 2009-12-15 | 2013-07-02 | Mcafee, Inc. | Systems and methods for behavioral sandboxing |
US8528091B2 (en) | 2009-12-31 | 2013-09-03 | The Trustees Of Columbia University In The City Of New York | Methods, systems, and media for detecting covert malware |
US8800034B2 (en) * | 2010-01-26 | 2014-08-05 | Bank Of America Corporation | Insider threat correlation tool |
US9501644B2 (en) | 2010-03-15 | 2016-11-22 | F-Secure Oyj | Malware protection |
KR101122650B1 (en) | 2010-04-28 | 2012-03-09 | Electronics and Telecommunications Research Institute (ETRI) | Apparatus, system and method for detecting malicious code injected with fraud into normal process |
US8621638B2 (en) | 2010-05-14 | 2013-12-31 | Mcafee, Inc. | Systems and methods for classification of messaging entities |
US20120011590A1 (en) | 2010-07-12 | 2012-01-12 | John Joseph Donovan | Systems, methods and devices for providing situational awareness, mitigation, risk analysis of assets, applications and infrastructure in the internet and cloud |
US9215244B2 (en) * | 2010-11-18 | 2015-12-15 | The Boeing Company | Context aware network security monitoring for threat detection |
US8745734B1 (en) | 2010-12-29 | 2014-06-03 | Amazon Technologies, Inc. | Managing virtual computing testing |
US8862739B2 (en) | 2011-01-11 | 2014-10-14 | International Business Machines Corporation | Allocating resources to virtual functions |
US8555385B1 (en) | 2011-03-14 | 2013-10-08 | Symantec Corporation | Techniques for behavior based malware analysis |
US8751490B1 (en) | 2011-03-31 | 2014-06-10 | Symantec Corporation | Automatically determining reputations of physical locations |
US20120272317A1 (en) | 2011-04-25 | 2012-10-25 | Raytheon Bbn Technologies Corp | System and method for detecting infectious web content |
US9047441B2 (en) | 2011-05-24 | 2015-06-02 | Palo Alto Networks, Inc. | Malware analysis system |
US8555388B1 (en) | 2011-05-24 | 2013-10-08 | Palo Alto Networks, Inc. | Heuristic botnet detection |
US9323928B2 (en) | 2011-06-01 | 2016-04-26 | Mcafee, Inc. | System and method for non-signature based detection of malicious processes |
US20120311562A1 (en) | 2011-06-01 | 2012-12-06 | Yanlin Wang | Extendable event processing |
US9239800B2 (en) | 2011-07-27 | 2016-01-19 | Seven Networks, Llc | Automatic generation and distribution of policy information regarding malicious mobile traffic in a wireless network |
CN103765820B (en) * | 2011-09-09 | 2016-10-26 | Hewlett-Packard Development Company, L.P. | System and method for evaluating events based on a reference baseline according to temporal position in a sequence of events |
ES2755780T3 (en) | 2011-09-16 | 2020-04-23 | Veracode Inc | Automated behavior and static analysis using an instrumented sandbox and machine learning classification for mobile security |
US8856936B2 (en) * | 2011-10-14 | 2014-10-07 | Albeado Inc. | Pervasive, domain and situational-aware, adaptive, automated, and coordinated analysis and control of enterprise-wide computers, networks, and applications for mitigation of business and operational risks and enhancement of cyber security |
US8789179B2 (en) | 2011-10-28 | 2014-07-22 | Novell, Inc. | Cloud protection techniques |
US9519781B2 (en) | 2011-11-03 | 2016-12-13 | Cyphort Inc. | Systems and methods for virtualization and emulation assisted malware detection |
US9686293B2 (en) | 2011-11-03 | 2017-06-20 | Cyphort Inc. | Systems and methods for malware detection and mitigation |
US9792430B2 (en) | 2011-11-03 | 2017-10-17 | Cyphort Inc. | Systems and methods for virtualized malware detection |
CN102546641B (en) | 2012-01-14 | 2014-12-31 | Hangzhou Anheng Information Technology Co., Ltd. | Method and system for carrying out accurate risk detection in application security system |
US9710644B2 (en) | 2012-02-01 | 2017-07-18 | Servicenow, Inc. | Techniques for sharing network security event information |
US9519782B2 (en) | 2012-02-24 | 2016-12-13 | Fireeye, Inc. | Detecting malicious network content |
WO2013130867A1 (en) | 2012-02-29 | 2013-09-06 | Sourcefire, Inc. | Method and apparatus for retroactively detecting malicious or otherwise undesirable software |
US9185095B1 (en) | 2012-03-20 | 2015-11-10 | United Services Automobile Association (Usaa) | Behavioral profiling method and system to authenticate a user |
US8850588B2 (en) * | 2012-05-01 | 2014-09-30 | Taasera, Inc. | Systems and methods for providing mobile security based on dynamic attestation |
IL219597A0 (en) | 2012-05-03 | 2012-10-31 | Syndrome X Ltd | Malicious threat detection, malicious threat prevention, and a learning systems and methods for malicious threat detection and prevention |
US9298494B2 (en) | 2012-05-14 | 2016-03-29 | Qualcomm Incorporated | Collaborative learning for efficient behavioral analysis in networked mobile device |
US20130312096A1 (en) | 2012-05-18 | 2013-11-21 | Vmware, Inc. | On-demand data scan in a virtual machine |
CN102799822B (en) | 2012-07-11 | 2015-06-17 | China Information Technology Security Evaluation Center | Software runtime security measurement and estimation method based on network environment |
US9292688B2 (en) | 2012-09-26 | 2016-03-22 | Northrop Grumman Systems Corporation | System and method for automated machine-learning, zero-day malware detection |
US20150215334A1 (en) | 2012-09-28 | 2015-07-30 | Level 3 Communications, Llc | Systems and methods for generating network threat intelligence |
EP2901612A4 (en) | 2012-09-28 | 2016-06-15 | Level 3 Communications Llc | Apparatus, system and method for identifying and mitigating malicious network threats |
US9185093B2 (en) | 2012-10-16 | 2015-11-10 | Mcafee, Inc. | System and method for correlating network information with subscriber information in a mobile network environment |
EP2973138A4 (en) | 2013-03-11 | 2016-09-07 | Hewlett Packard Entpr Dev Lp | Event correlation based on confidence factor |
US9639820B2 (en) | 2013-03-15 | 2017-05-02 | Alert Enterprise | Systems, structures, and processes for interconnected devices and risk management |
US10425429B2 (en) * | 2013-04-10 | 2019-09-24 | Gabriel Bassett | System and method for cyber security analysis and human behavior prediction |
US9300686B2 (en) | 2013-06-28 | 2016-03-29 | Fireeye, Inc. | System and method for detecting malicious links in electronic messages |
JP6101408B2 (en) | 2013-09-10 | 2017-03-22 | Symantec Corporation | System and method for detecting attacks on computing systems using event correlation graphs |
WO2015047394A1 (en) | 2013-09-30 | 2015-04-02 | Hewlett-Packard Development Company, L.P. | Hierarchical threat intelligence |
US9609010B2 (en) | 2013-10-04 | 2017-03-28 | Personam, Inc. | System and method for detecting insider threats |
US10616258B2 (en) * | 2013-10-12 | 2020-04-07 | Fortinet, Inc. | Security information and event management |
US9319421B2 (en) | 2013-10-14 | 2016-04-19 | Ut-Battelle, Llc | Real-time detection and classification of anomalous events in streaming data |
US9438620B2 (en) | 2013-10-22 | 2016-09-06 | Mcafee, Inc. | Control flow graph representation and classification |
WO2015066604A1 (en) * | 2013-11-04 | 2015-05-07 | Crypteia Networks S.A. | Systems and methods for identifying infected network infrastructure |
US9288220B2 (en) | 2013-11-07 | 2016-03-15 | Cyberpoint International Llc | Methods and systems for malware detection |
US10122747B2 (en) * | 2013-12-06 | 2018-11-06 | Lookout, Inc. | Response generation after distributed monitoring and evaluation of multiple devices |
US9753796B2 (en) * | 2013-12-06 | 2017-09-05 | Lookout, Inc. | Distributed monitoring, evaluation, and response for multiple devices |
US10063654B2 (en) * | 2013-12-13 | 2018-08-28 | Oracle International Corporation | Systems and methods for contextual and cross application threat detection and prediction in cloud applications |
US9692789B2 (en) * | 2013-12-13 | 2017-06-27 | Oracle International Corporation | Techniques for cloud security monitoring and threat intelligence |
US9386034B2 (en) | 2013-12-17 | 2016-07-05 | Hoplite Industries, Inc. | Behavioral model based malware protection system and method |
US9830450B2 (en) | 2013-12-23 | 2017-11-28 | Interset Software, Inc. | Method and system for analyzing risk |
US9652464B2 (en) * | 2014-01-30 | 2017-05-16 | Nasdaq, Inc. | Systems and methods for continuous active data security |
US10129288B1 (en) | 2014-02-11 | 2018-11-13 | DataVisor Inc. | Using IP address data to detect malicious activities |
US10095866B2 (en) * | 2014-02-24 | 2018-10-09 | Cyphort Inc. | System and method for threat risk scoring of security threats |
US11405410B2 (en) | 2014-02-24 | 2022-08-02 | Cyphort Inc. | System and method for detecting lateral movement and data exfiltration |
US10225280B2 (en) | 2014-02-24 | 2019-03-05 | Cyphort Inc. | System and method for verifying and detecting malware |
US9754117B2 (en) * | 2014-02-24 | 2017-09-05 | Northcross Group | Security management system |
US10326778B2 (en) | 2014-02-24 | 2019-06-18 | Cyphort Inc. | System and method for detecting lateral movement and data exfiltration |
WO2015168203A1 (en) * | 2014-04-29 | 2015-11-05 | PEGRight, Inc. | Characterizing user behavior via intelligent identity analytics |
US10212176B2 (en) | 2014-06-23 | 2019-02-19 | Hewlett Packard Enterprise Development Lp | Entity group behavior profiling |
US9495188B1 (en) | 2014-09-30 | 2016-11-15 | Palo Alto Networks, Inc. | Synchronizing a honey network configuration to reflect a target network environment |
EP3215943B1 (en) * | 2014-11-03 | 2021-04-21 | Vectra AI, Inc. | A system for implementing threat detection using threat and risk assessment of asset-actor interactions |
CN104468545A (en) | 2014-11-26 | 2015-03-25 | Institute 706, Second Academy of China Aerospace Science and Industry Corporation | Network security correlation analysis method based on complex event processing |
US20160171415A1 (en) * | 2014-12-13 | 2016-06-16 | Security Scorecard | Cybersecurity risk assessment on an industry basis |
US10021137B2 (en) * | 2014-12-27 | 2018-07-10 | Mcafee, Llc | Real-time mobile security posture |
US9800605B2 (en) | 2015-01-30 | 2017-10-24 | Securonix, Inc. | Risk scoring for threat assessment |
US10298608B2 (en) * | 2015-02-11 | 2019-05-21 | Honeywell International Inc. | Apparatus and method for tying cyber-security risk analysis to common risk methodologies and risk levels |
US10298607B2 (en) * | 2015-04-16 | 2019-05-21 | Nec Corporation | Constructing graph models of event correlation in enterprise security systems |
US20160308898A1 (en) * | 2015-04-20 | 2016-10-20 | Phirelight Security Solutions Inc. | Systems and methods for tracking, analyzing and mitigating security threats in networks via a network traffic analysis platform |
US9954896B2 (en) | 2015-04-29 | 2018-04-24 | Rapid7, Inc. | Preconfigured honey net |
US10447730B2 (en) * | 2015-05-15 | 2019-10-15 | Virsec Systems, Inc. | Detection of SQL injection attacks |
US9787709B2 (en) | 2015-06-17 | 2017-10-10 | Bank Of America Corporation | Detecting and analyzing operational risk in a network environment |
AU2016204072B2 (en) * | 2015-06-17 | 2017-08-03 | Accenture Global Services Limited | Event anomaly analysis and prediction |
US9699205B2 (en) * | 2015-08-31 | 2017-07-04 | Splunk Inc. | Network security system |
US9641544B1 (en) | 2015-09-18 | 2017-05-02 | Palo Alto Networks, Inc. | Automated insider threat prevention |
- 2019-06-11: US application US16/437,262, now patent US11405410B2 (Active)
- 2022-07-29: US application US17/816,040, now patent US11902303B2 (Active)
Patent Citations (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040168173A1 (en) * | 1999-11-15 | 2004-08-26 | Sandia National Labs | Method and apparatus providing deception and/or altered execution of logic in an information system |
US20030084349A1 (en) * | 2001-10-12 | 2003-05-01 | Oliver Friedrichs | Early warning system for network attacks |
US20040128543A1 (en) * | 2002-12-31 | 2004-07-01 | International Business Machines Corporation | Method and system for morphing honeypot with computer security incident correlation |
US20050166072A1 (en) * | 2002-12-31 | 2005-07-28 | Converse Vikki K. | Method and system for wireless morphing honeypot |
US7854003B1 (en) * | 2004-03-19 | 2010-12-14 | Verizon Corporate Services Group Inc. & Raytheon BBN Technologies Corp. | Method and system for aggregating algorithms for detecting linked interactive network connections |
US11637857B1 (en) * | 2004-04-01 | 2023-04-25 | Fireeye Security Holdings Us Llc | System and method for detecting malicious traffic using a virtual machine configured with a select software environment |
US20060018466A1 (en) * | 2004-07-12 | 2006-01-26 | Architecture Technology Corporation | Attack correlation using marked information |
US20060101516A1 (en) * | 2004-10-12 | 2006-05-11 | Sushanthan Sudaharan | Honeynet farms as an early warning system for production networks |
US20060161816A1 (en) * | 2004-12-22 | 2006-07-20 | Gula Ronald J | System and method for managing events |
US20080098476A1 (en) * | 2005-04-04 | 2008-04-24 | Bae Systems Information And Electronic Systems Integration Inc. | Method and Apparatus for Defending Against Zero-Day Worm-Based Attacks |
US20080256619A1 (en) * | 2007-04-16 | 2008-10-16 | Microsoft Corporation | Detection of adversaries through collection and correlation of assessments |
US20120084866A1 (en) * | 2007-06-12 | 2012-04-05 | Stolfo Salvatore J | Methods, systems, and media for measuring computer security |
US20090028135A1 (en) * | 2007-07-27 | 2009-01-29 | Redshift Internetworking, Inc. | System and method for unified communications threat management (uctm) for converged voice, video and multi-media over ip flows |
US8549643B1 (en) * | 2010-04-02 | 2013-10-01 | Symantec Corporation | Using decoys by a data loss prevention system to protect against unscripted activity |
US20130097701A1 (en) * | 2011-10-18 | 2013-04-18 | Mcafee, Inc. | User behavioral risk assessment |
US9043905B1 (en) * | 2012-01-23 | 2015-05-26 | Hrl Laboratories, Llc | System and method for insider threat detection |
US9356942B1 (en) * | 2012-03-05 | 2016-05-31 | Neustar, Inc. | Method and system for detecting network compromise |
US9043920B2 (en) * | 2012-06-27 | 2015-05-26 | Tenable Network Security, Inc. | System and method for identifying exploitable weak points in a network |
US9311479B1 (en) * | 2013-03-14 | 2016-04-12 | Fireeye, Inc. | Correlation and consolidation of analytic data for holistic view of a malware attack |
US20160034361A1 (en) * | 2013-04-16 | 2016-02-04 | Hewlett-Packard Development Company, L.P. | Distributed event correlation system |
US20170223046A1 (en) * | 2016-01-29 | 2017-08-03 | Acalvio Technologies, Inc. | Multiphase threat analysis and correlation engine |
US20170230402A1 (en) * | 2016-02-09 | 2017-08-10 | Ca, Inc. | Automated data risk assessment |
US20170339186A1 (en) * | 2016-05-22 | 2017-11-23 | Guardicore Ltd. | Protection of cloud-provider system using scattered honeypots |
US10404728B2 (en) * | 2016-09-13 | 2019-09-03 | Cisco Technology, Inc. | Learning internal ranges from network traffic data to augment anomaly detection systems |
US10277629B1 (en) * | 2016-12-20 | 2019-04-30 | Symantec Corporation | Systems and methods for creating a deception computing system |
US20180337941A1 (en) * | 2017-05-18 | 2018-11-22 | Qadium, Inc. | Correlation-driven threat assessment and remediation |
US10855700B1 (en) * | 2017-06-29 | 2020-12-01 | Fireeye, Inc. | Post-intrusion detection of cyber-attacks during lateral movement within networks |
US10601848B1 (en) * | 2017-06-29 | 2020-03-24 | Fireeye, Inc. | Cyber-security system and method for weak indicator detection and correlation to generate strong indicators |
US11075930B1 (en) * | 2018-06-27 | 2021-07-27 | Fireeye, Inc. | System and method for detecting repetitive cybersecurity attacks constituting an email campaign |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230109224A1 (en) * | 2020-09-28 | 2023-04-06 | T-Mobile Usa, Inc. | Network security system including a multi-dimensional domain name system to protect against cybersecurity threats |
Also Published As
Publication number | Publication date |
---|---|
US11902303B2 (en) | 2024-02-13 |
US11405410B2 (en) | 2022-08-02 |
US20190297097A1 (en) | 2019-09-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10326778B2 (en) | System and method for detecting lateral movement and data exfiltration | |
US11902303B2 (en) | System and method for detecting lateral movement and data exfiltration | |
US10095866B2 (en) | System and method for threat risk scoring of security threats | |
US10354072B2 (en) | System and method for detection of malicious hypertext transfer protocol chains | |
US10225280B2 (en) | System and method for verifying and detecting malware | |
EP3374871B1 (en) | System and method for detecting lateral movement and data exfiltration | |
Modi et al. | A survey of intrusion detection techniques in cloud | |
US9686293B2 (en) | Systems and methods for malware detection and mitigation | |
Chiba et al. | A survey of intrusion detection systems for cloud computing environment | |
EP3374870B1 (en) | Threat risk scoring of security threats | |
US11252167B2 (en) | System and method for detecting and classifying malware | |
US20180212988A1 (en) | System and method for detecting and classifying malware | |
US9332023B1 (en) | Uploading signatures to gateway level unified threat management devices after endpoint level behavior based detection of zero day threats | |
JP2024023875A (en) | Inline malware detection | |
US20240037231A1 (en) | Sample traffic based self-learning malware detection | |
Rajamenakshi et al. | An integrated network behavior and policy based data exfiltration detection framework |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| STPP | Information on status: patent application and granting procedure in general | Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |