CN114666239B - Visual display method, device and equipment for network shooting range and readable storage medium

Visual display method, device and equipment for network shooting range and readable storage medium

Info

Publication number
CN114666239B
CN114666239B
Authority
CN
China
Prior art keywords
information
event
target
subject
attack
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210277596.4A
Other languages
Chinese (zh)
Other versions
CN114666239A (en)
Inventor
蔡晶晶
陈俊
张凯
程磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yongxin Zhicheng Technology Group Co ltd
Original Assignee
Beijing Yongxin Zhicheng Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yongxin Zhicheng Technology Co Ltd
Priority to CN202210277596.4A
Publication of CN114666239A
Application granted
Publication of CN114666239B
Legal status: Active
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 - Arrangements for monitoring or testing data switching networks
    • H04L 43/04 - Processing captured monitoring data, e.g. for logfile generation
    • H04L 43/045 - Processing captured monitoring data, e.g. for logfile generation, for graphical visualisation of monitoring data
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/46 - Interconnection of networks
    • H04L 12/4641 - Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 - Network architectures or network communication protocols for network security
    • H04L 63/14 - Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L 63/1408 - Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L 63/1416 - Event detection, e.g. attack signature detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a visual display method for a network shooting range, which comprises the following steps: acquiring information of a subject, an object and an attack behavior in a target scene; determining visual descriptions of the subject, the object and the attack behavior; and, based on different event dimensions, backtracking the event process of each target event in the target scene in a preset backtracking mode, so that the backtracking results of the subject, the object and the attack behavior are displayed in the visual description mode. The method is simple and easy to use, realizes the disassembly and configuration of visual scenes, reduces the cost of customized display, produces a clear arrangement, expresses scene content accurately, and makes human-computer interaction natural. The application also provides a visual display device, an electronic device and a readable storage medium for the network shooting range.

Description

Visual display method, device and equipment for network shooting range and readable storage medium
Technical Field
The present application relates to the field of network security, and in particular, to a method, an apparatus, a device, and a readable storage medium for visually displaying a network shooting range.
Background
A network shooting range is a technology or product that, based on virtualization technology, simulates and reproduces the running states and running environments of the network architecture, system equipment and business processes of a real network space. Network shooting ranges have become an essential core infrastructure of cyberspace security, used by countries around the world for cyberspace security research, learning, testing, verification, drilling and the like, and are highly valued as an important means of supporting security capability building.
Network shooting ranges are increasingly widely applied in network security competitions. The types of such competitions include knowledge contests, CTF contests, AWD contests, ISW contests, RW (real-scenario) contests and the like, and the content of the competitions keeps becoming richer. 2D and 3D technologies have been adopted for the visual display of these competitions and have greatly improved the visual effect across the industry, but the displays lack internal logic and arrangement, offering little more than dazzling color schemes.
Therefore, current network-event visualization solutions based on network shooting ranges have the following problems:
1. the visual design effect is pursued excessively while the visual interaction effect is neglected;
2. specific visualization effects need to be customized, and purely customized work is costly and cannot be reused;
3. the data describing the network scene content is not fine-grained enough, so the scene content cannot be expressed accurately.
Disclosure of Invention
The application provides a visual display method, device and equipment for a network shooting range and a readable storage medium, which can accurately express scene contents with a small investment of resources.
In a first aspect, the present application provides a visual display method for a network shooting range, including:
acquiring information of a subject, an object and an attack behavior in a target scene, wherein the target scene is an attack and defense scene in a network shooting range, the subject is an attack team in the target scene, the object is an attacked virtual machine in the target scene, and the attack behavior is an interaction behavior between the subject and the object;
determining a visual description of the subject, the object, and the attack behavior;
and based on different event dimensions, backtracking the event process of each target event in the target scene by adopting a preset backtracking mode so as to display the backtracking results of the subject, the object and the attack behavior by adopting the visual description mode.
Optionally, the information of the subject includes at least one of:
the information processing method comprises the following steps of role description information of each attacker in the attack team, account information of each attacker, account state behavior information of each attacker, and video information of each attacker in the target scene.
Optionally, the information of the object includes:
basic information and/or state information of the attacked virtual machine;
the basic information comprises information of a mounted file system and/or an exploit program built into the virtual machine; the state information is an invaded state, an access state, a defense state or an attacked state.
Optionally, the information of the attack behavior is derived from network traffic data in the target scene, which can prove an interaction relationship between the subject and the object.
Optionally, the obtaining information of the subject, the object, and the attack behavior in the target scene includes:
and based on the data resources related to the target scene, performing field extraction and/or video interception on the data resources to determine the subject, the object and the attack behavior in the target scene and the corresponding information, and associating the subject, the object and the attack behavior with the corresponding information.
Optionally, the backtracking the event process of each target event in the target scene by using a preset backtracking manner includes:
and backtracking the event process of each target event in the target scene by adopting a playback time period, an acquisition sequence and an acquisition time of different information in the playback process, and the frequency or the number of times of event playback defined in a preset backtracking mode.
Optionally, the different information includes:
basic information of the subject in the target event, wherein the basic information comprises role description information of each attacker in the attack team and/or account information of each attacker;
behavior characteristics of an attack behavior in the target event;
state change information of an object in the target event.
In a second aspect, the present application provides a visual display device of a network range, the device comprising:
the data management unit is used for acquiring information of a subject, an object and an attack behavior in a target scene, wherein the target scene is an attack and defense scene in a network shooting range, the subject is an attack team in the target scene, the object is an attacked virtual machine in the target scene, and the attack behavior is an interaction behavior between the subject and the object;
an object management unit for determining a visual description of the subject, the object and the attack behavior;
and the event arranging unit is used for backtracking the event process of each target event in the target scene by adopting a preset backtracking mode based on different event dimensions so as to display the backtracking results of the subject, the object and the attack behavior by adopting the visual description mode.
Optionally, the information of the subject includes at least one of:
role description information of each attacker in the attack team, account information of each attacker, account state behavior information of each attacker, and video information of each attacker in the target scene.
Optionally, the information of the object includes:
basic information and/or state information of the attacked virtual machine; the basic information comprises information of a mounted file system and/or an exploit program built into the virtual machine; the state information is an invaded state, an access state, a defense state or an attacked state.
Optionally, the information of the attack behavior is derived from network traffic data in the target scene, which can prove an interaction relationship between the subject and the object.
Optionally, the data management unit is specifically configured to:
and performing field extraction and/or video interception on the data resources based on the data resources related to the target scene to determine the subject, the object and the attack behavior in the target scene and the corresponding information, and associating the subject, the object and the attack behavior with the corresponding information.
Optionally, when the event scheduling unit backtracks the event process of each target event in the target scene in a preset backtracking manner, the event scheduling unit is specifically configured to:
and backtracking the event process of each target event in the target scene by adopting a playback time period, an acquisition sequence and an acquisition time of different information in the playback process, and the frequency or the number of times of event playback defined in a preset backtracking mode.
Optionally, the different information includes:
basic information of the subject in the target event, wherein the basic information comprises role description information of each attacker in the attack team and/or account information of each attacker;
behavioral characteristics of an attack behavior in the target event;
state change information of an object in the target event.
In a third aspect, the present application provides an electronic device, comprising: a processor, a memory;
the memory for storing a computer program;
and the processor is used for executing the visual display method of the network shooting range by calling the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the method for visually presenting a network range as described above.
In the technical scheme provided by the application, the information of the subject, the object and the attack behavior in the target scene is acquired; visual descriptions of the subject, the object and the attack behavior are determined; and, based on different event dimensions, the event process of each target event in the target scene is backtracked in a preset backtracking mode, so that the backtracking results of the subject, the object and the attack behavior are displayed in the visual description mode. The method is therefore simple and easy to use: the visual scene is disassembled and configured through data cleaning, which reduces the cost of customized display; in addition, the method integrates scene layout with human-computer interaction and realizes playback display based on the event dimension, so the arrangement is clear, the scene content is expressed accurately, and the human-computer interaction is natural.
Drawings
Fig. 1 is a schematic flow chart of a visual display method for a network range shown in the present application;
fig. 2 is a schematic composition diagram of a visual display device of a network range shown in the present application;
fig. 3 is a schematic structural diagram of an electronic device shown in the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
It should be noted that a network event visualization display scheme based on a network shooting range is a relatively comprehensive technical solution and, from a technical perspective, can be simply divided into three layers: layout, interaction and design.
Here, layout refers to the precise expression of scene content based on the relationship between icon data and the scene; interaction refers to the human-machine mapping relationship, which should be easy for customers to understand without verbal explanation; design concerns aesthetics, which is largely subjective and determined by the customer's taste, so it can only be guided or followed rather than used to create new aesthetic viewpoints.
Based on the above understanding, the embodiments of the application actually focus on the two layers of layout and interaction, and adopt a visual arrangement display mode to cover their core parts, namely the scene content and data at the layout level and the interactive display of the minimized attack scene at the interaction level, so as to solve the three problems mentioned in the Background section.
Referring to fig. 1, a schematic flow chart of a method for visually displaying a network shooting range provided in an embodiment of the present application is shown, where the method includes the following steps S101 to S103:
s101: the method comprises the steps of obtaining information of a subject, an object and an attack behavior in a target scene, wherein the target scene is an attack and defense scene in a network shooting range, the subject is an attack team in the target scene, the object is an attacked virtual machine in the target scene, and the attack behavior is an interaction behavior between the subject and the object.
In the embodiment of the present application, visual display arrangement may be performed for each scene unit in the network shooting range, and each scene unit is defined here as a target scene. The target scene may be a specific attack and defense scene in the network shooting range, typically a minimized attack and defense scene, for example a scene in which a subject attacks an object in a certain network security competition held in the network shooting range.
In the embodiment of the application, the data related to the subject, the object and the attack behavior in the target scene can be sorted out and defined. The subject, the object and the attack behavior in the target scene, together with their respective related information, are described below.
In one implementation, the information about the subject may include at least one of:
1. Role description information of each attacker in the attack team. The attack team may include one or more attackers; each attacker is an attack member in the target scene and has a corresponding scene role in the target scene, so role description information of each attacker in the attack team can be collected.
2. Account information of each attacker.
3. Account state behavior information of each attacker. The account state behavior may be one of "login", "logout" and "submit".
4. Video information of each attacker in the target scene. This video information can be obtained by monitoring or screen recording; in concrete implementations, a network video camera may be used for real-time monitoring, or the web access behavior of the attack team and the operation behavior on the virtual machine may be screen-recorded.
In one implementation, the information about the object may include:
basic information and/or state information of the attacked virtual machine; the basic information comprises information of a mounted file system and/or an exploit program built into the virtual machine, and the state information is an invaded state, an access state, a defense state, an attacked state or the like.
Specifically, since a file such as a flag (the mounted file system) and/or an exploit program may be placed in the virtual machine when it is created, the basic information of the virtual machine may include the flag and its description information, the exploit program and its description information, and so on.
In one implementation, the information about the attack behavior is derived from network traffic data in the target scene that can prove the interaction relationship between the subject and the object. Specifically, since the attack behavior is the interaction behavior between the subject and the object, the data traffic packets of the target scene may be captured in the network shooting range and analyzed to obtain the network traffic data that proves the interaction relationship between the subject and the object.
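Purely as an illustration (the patent does not prescribe any concrete data format), the subject, object and attack-behavior information described above could be structured roughly as in the following sketch; all type and field names are hypothetical.

```python
# Hypothetical data model for the subject, object and attack-behavior
# information described above; field names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Attacker:
    role_description: str                       # scene role of the attack-team member
    account: str                                # account information
    account_state: str = "logout"               # "login", "logout" or "submit"
    video_clips: List[str] = field(default_factory=list)   # monitoring / screen-recording files


@dataclass
class Subject:                                  # the attack team
    team_name: str
    attackers: List[Attacker] = field(default_factory=list)


@dataclass
class TargetObject:                             # the attacked virtual machine
    vm_id: str
    flag_info: Optional[str] = None             # mounted flag and its description
    exploit_info: Optional[str] = None          # built-in exploit program and its description
    state: str = "normal"                       # "invaded", "accessed", "defended", "attacked", ...


@dataclass
class AttackBehavior:                           # interaction between subject and object
    subject_team: str
    object_vm: str
    traffic_evidence: List[str] = field(default_factory=list)  # traffic records proving the interaction
    timestamp: float = 0.0
```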
In an implementation manner of the embodiment of the present application, the "acquiring information of a subject, an object, and an attack behavior in a target scene" in S101 may include:
and based on the data resources related to the target scene, performing field extraction and/or video interception on the data resources to determine the subject, the object and the attack behavior in the target scene and the corresponding information, and associating the subject, the object and the attack behavior with the corresponding information.
In the implementation mode, data resources related to visualization in a target scene can be integrated through a network target range, and the data resources can comprise application logs, databases, video files, flow analysis data and the like; then, taking the subjects, the objects and the attack behaviors as different categories, reading available fields related to each category from the data resources, and/or extracting video data in a related time range from a video file (a monitoring or screen recording file), and determining which subjects, objects and attack behaviors are related in the target scene and information (the information is introduced in the content) corresponding to the subjects, the objects and the attack behaviors; and finally, realizing the association of the data source, namely associating and corresponding the subject, the object and the attack behavior with respective information.
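A minimal sketch of this extraction-and-association step is given below; it assumes the application logs are available as JSON lines, and the field names ("team", "vm", "action", "time") are hypothetical, since the patent does not fix any log schema.

```python
# Minimal sketch of field extraction and data-source association.
# Assumes JSON-line application logs with hypothetical fields
# "team", "vm", "action" and "time"; a real range would adapt this to
# its own log, database and traffic-analysis schemas.
import json
from collections import defaultdict


def extract_and_associate(log_path: str):
    """Group raw log records by (subject, object) so that each pair is
    associated with the attack behaviors observed between them."""
    associations = defaultdict(list)
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            subject = record.get("team")        # attack team (subject)
            obj = record.get("vm")              # attacked virtual machine (object)
            if subject and obj:
                associations[(subject, obj)].append({
                    "action": record.get("action"),  # attack behavior
                    "time": record.get("time"),
                })
    return associations
```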
S102: visual descriptions of the subject, object, and attack behavior are determined.
In the embodiment of the application, a standard component module supporting visualization selection may be preset for the subject, the object and the attack behavior in the target scene, and the user may revise this module to obtain the visual descriptions of the subject, the object and the attack behavior.
To realize the visual description of the subject and the object, a suitable visual icon can be selected for each of them, either by the user or by the system by default; to realize the visual description of the attack behavior, a suitable visual effect can be selected for it, again either by the user or by the system by default.
The visual icon of the subject is a visual logo, such as a three-dimensional or planar visual element of a network airship, fighter, tank, hacker or the like, which can be freely defined and selected; the state-change icon of the subject refers to dynamic effects such as halo, color change and shape change added on this basis.
The visual icon of the object is likewise a visual logo, such as a three-dimensional or planar visual element of a planet, battleship, key plug, fort or the like, which can be freely defined and selected; the state-change icon of the object refers to dynamic effects such as halo, color change, shape change and explosion/protection added on this basis.
The visual effect of the attack behavior refers to attack effects of attack types and result types, such as shooting effects and bombing effects.
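For illustration only, the selectable visual descriptions could be kept in a small default mapping such as the one sketched below; the icon and effect names are placeholders rather than values defined by the patent.

```python
# Illustrative mapping from scene entities to visual descriptions.
# Icon and effect identifiers are placeholders; in the patent they are
# selected by the user or taken from a default standard component module.
from typing import Optional

DEFAULT_VISUALS = {
    "subject": {"icon": "spaceship", "state_effects": ["halo", "color_change", "shape_change"]},
    "object":  {"icon": "planet",    "state_effects": ["halo", "explosion", "shield"]},
    "attack":  {"effect": "shooting"},          # attack-type / result-type effect
}


def visual_description(entity_type: str, overrides: Optional[dict] = None) -> dict:
    """Return the visual description of an entity, allowing the user to
    revise the default icon or effect."""
    visual = dict(DEFAULT_VISUALS[entity_type])
    if overrides:                               # user-revised icon or effect
        visual.update(overrides)
    return visual
```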
S103: based on different event dimensions, event processes of each target event in a target scene are backtracked in a preset backtracking mode, so that backtracking results of subjects, objects and attack behaviors are displayed in a visual description mode.
In the embodiment of the present application, since one or more events may be involved in a target scenario, for convenience of description, each event in the target scenario is defined as a target event, for example, a certain target event is "successful attack on an object by a subject".
A plurality of different event types may be predefined and then, based on these event types, corresponding one or more target events are matched from the target scene. It should be noted that the same event type may correspond to one or more different target events.
In the embodiment of the application, a corresponding event backtracking mode can be defined for each event type in advance, so that for each target event appearing in a target scene, the event process of the target event can be backtracked according to the preset backtracking mode corresponding to the event type to which the target event belongs, and the backtracking result of the target event is obtained; the backtracking result includes relevant information of the subject, the object and the attack behavior involved in the target event, and the relevant information may be extracted from the existing data of the target event or obtained by performing data processing based on the existing data. Furthermore, a visual description (visual icon, visual effect and the like) mode can be adopted to display the backtracking result of the subject, the object and the attack behavior in the target event.
In this way, the timeline of the scene visualization process can be arranged based on different event dimensions to achieve the visualization effect. That is, using preset backtracking logic, the data before each target event is backtracked from the end time point of that event, so that a playback arrangement display of multiple events is realized.
In an implementation manner of the embodiment of the present application, the "backtracking an event process of each target event in a target scene by using a preset backtracking manner" in S103 may include:
and backtracking the event process of each target event in the target scene by adopting the playback time period, the acquisition sequence and the acquisition time of different information in the playback process and the frequency or the number of times of event playback defined in the preset backtracking mode.
The "different information" in the above may include: basic information of a main body in a target event, wherein the basic information comprises role description information of each attacker in an attack team and/or account information of each attacker; behavioral characteristics of an attack behavior in the target event; state change information of the object in the target event.
The behavior characteristics of the attack behavior are the behavior characteristics of the subject and the object during their interaction, which can be obtained through feature extraction and feature matching with a pre-built model according to preset extraction rules; the state change information of the object indicates whether and how the state of the object changes, for example being invaded, accessed, defended or attacked.
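As a rough illustration of feature extraction and matching against preset extraction rules (the patent does not specify the model or the rule format), a rule-matching step might look like the following sketch; the rule names and patterns are invented.

```python
# Illustrative behavior-feature matching against preset extraction rules.
# Rule names and patterns are invented; a real deployment would use the
# pre-built model / rule set of the range.
import re
from typing import List

EXTRACTION_RULES = {
    "sql_injection":   re.compile(r"union\s+select", re.IGNORECASE),
    "webshell_upload": re.compile(r"upload.*\.(php|jsp)\b", re.IGNORECASE),
}


def behavior_features(traffic_records: List[str]) -> List[str]:
    """Return the names of all rules matched by the traffic records of one
    attack behavior (the behavior characteristics of that behavior)."""
    matched = []
    for name, pattern in EXTRACTION_RULES.items():
        if any(pattern.search(record) for record in traffic_records):
            matched.append(name)
    return matched
```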
Specifically, in this implementation, the event backtracking modes corresponding to different event types have been predefined, and each backtracking mode defines a playback time period, the acquisition sequence and acquisition time of the different pieces of information in the playback process, and the number or frequency of event playbacks. Therefore, once the target events in the target scene have been determined, the backtracking mode corresponding to the event type of each target event can be determined, and the target event can be played back according to that mode, thereby achieving the visual display effect.
For example, for a certain target event, its preset backtracking mode defines the playback time and content of the subject in the event, the time and content of the attack behavior, the attacked effect and time of the object, and the number of times the event is played back. Assume that the target event is "the subject successfully attacks the object" and ends at a certain time point; on this basis, the data within the defined playback time period before that end point is extracted, and since the number of replays of the target event is defined as 3, the event is replayed three times for the current behavior.
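The following sketch illustrates one possible shape of such a backtracking definition, a replay loop, and a serial replay of multiple events in event order; the window length, information order and replay count are example values, and `fetch_info` and `render` are hypothetical callbacks, not interfaces defined by the patent.

```python
# Illustrative backtracking / replay driver. Each event type carries a
# predefined backtracking definition: how far before the event's end time
# to replay, in which order the different pieces of information are shown,
# and how many times the event is replayed. All values are examples.
BACKTRACK_CONFIG = {
    "successful_attack": {
        "playback_window_s": 600,               # replay the 10 minutes before the event ends
        "info_order": ["subject_basic", "attack_features", "object_state_change"],
        "replay_count": 3,
    },
}


def backtrack_event(event: dict, fetch_info, render) -> None:
    """Replay one target event according to the backtracking definition of
    its event type. `fetch_info(kind, start, end)` returns the extracted
    information of the given kind in the window; `render(kind, info)` is a
    stand-in for the visual display of that information."""
    cfg = BACKTRACK_CONFIG[event["type"]]
    start = event["end_time"] - cfg["playback_window_s"]
    for _ in range(cfg["replay_count"]):        # number of event playbacks
        for kind in cfg["info_order"]:          # acquisition sequence during playback
            info = fetch_info(kind, start, event["end_time"])
            render(kind, info)


def replay_scene(events, fetch_info, render) -> None:
    """Serially replay multiple target events in the order of the events."""
    for event in sorted(events, key=lambda e: e["end_time"]):
        backtrack_event(event, fetch_info, render)
```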
It should be noted that, in the actual visual display, playback may be performed according to the preset number of replays, and the extracted information and/or features are displayed during the playback; if the target scene contains multiple target events, they can be displayed in series according to the order of the events.
It should be further noted that the above example is only illustrative; different backtracking modes may be predefined for different event types, so that a personalized, multi-event-dimension visual display arrangement is performed for each event in the network shooting range scene, thereby achieving the expected display effect.
Through modular management, the embodiment of the application realizes automatic correspondence of the minimized attack and defense scene data, namely through an association management module for the data sources; flexible definition of scene objects and connection dynamics is realized through a visualization-selection/user-defined module; and through an arrangement management module, the visual display is realized by stringing together all contents along the event dimension.
Therefore, with a small investment of resources (shorter time, semi-automation and little manual intervention), the embodiment of the application gives the user a comprehensive and clear understanding of the attack and defense scene, allows the required scene to be produced quickly, realizes the arrangement of visual scenes, and facilitates the further expansion and application of the network shooting range.
In the visual display method of the network shooting range provided by the embodiment of the application, the information of the subject, the object and the attack behavior in the target scene is acquired; visual descriptions of the subject, the object and the attack behavior are determined; and, based on different event dimensions, the event process of each target event in the target scene is backtracked in a preset backtracking mode, so that the backtracking results of the subject, the object and the attack behavior are displayed in the visual description mode. The method is therefore simple and easy to use: the visual scene is disassembled and configured through data cleaning, which reduces the cost of customized display; in addition, the method integrates scene layout with human-computer interaction and realizes playback display based on the event dimension, so the arrangement is clear, the scene content is expressed accurately, and the human-computer interaction is natural.
Referring to fig. 2, a schematic composition diagram of a visual display device of a network shooting range provided in an embodiment of the present application is shown, where the device includes:
the data management unit 210 is configured to acquire information of a subject, an object, and an attack behavior in a target scene, where the target scene is an attack and defense scene in a network shooting range, the subject is an attack team in the target scene, the object is an attacked virtual machine in the target scene, and the attack behavior is an interaction behavior between the subject and the object;
an object management unit 220, configured to determine visual descriptions of the subject, the object, and the attack behavior;
the event arranging unit 230 is configured to trace back an event process of each target event in the target scene in a preset trace-back manner based on different event dimensions, so as to display a trace-back result of the subject, the object, and the attack behavior in the visual description manner.
In an implementation manner of the embodiment of the present application, the information of the main body includes at least one of:
role description information of each attacker in the attack team, account information of each attacker, account state behavior information of each attacker, and video information of each attacker in the target scene.
In an implementation manner of the embodiment of the present application, the information of the object includes:
basic information and/or state information of the attacked virtual machine; the basic information comprises information of a mounted file system and/or an exploit program which are/is arranged in the virtual machine; the state information is an invaded state, an access state, a defense state or an attacked state.
In an implementation manner of the embodiment of the present application, the information of the attack behavior is derived from network traffic data in the target scene, which is capable of proving an interaction relationship between the subject and the object.
In an implementation manner of the embodiment of the present application, the data management unit 210 is specifically configured to:
and based on the data resources related to the target scene, performing field extraction and/or video interception on the data resources to determine the subject, the object and the attack behavior in the target scene and the corresponding information, and associating the subject, the object and the attack behavior with the corresponding information.
In an implementation manner of the embodiment of the present application, when the event orchestration unit 230 backtracks the event process of each target event in the target scene in a preset backtracking manner, it is specifically configured to:
and backtracking the event process of each target event in the target scene by adopting a playback time period, an acquisition sequence and an acquisition time of different information in the playback process, and the frequency or the number of times of event playback defined in a preset backtracking mode.
In an implementation manner of the embodiment of the present application, the different information includes:
basic information of the subject in the target event, wherein the basic information comprises role description information of each attacker in the attack team and/or account information of each attacker;
behavior characteristics of an attack behavior in the target event;
state change information of an object in the target event.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
An embodiment of the present application further provides an electronic device, whose schematic structural diagram is shown in fig. 3. The electronic device 3000 includes at least one processor 3001, a memory 3002 and a bus 3003, and the at least one processor 3001 is electrically connected to the memory 3002; the memory 3002 is configured to store at least one computer-executable instruction, and the processor 3001 is configured to execute the at least one computer-executable instruction so as to perform the steps of any visual display method of a network shooting range provided by any embodiment or optional implementation of the present application.
Further, the processor 3001 may be an FPGA (Field-Programmable Gate Array) or another device with logic processing capability, such as an MCU (Microcontroller Unit) or a CPU (Central Processing Unit).
By applying the embodiment of the application, visual display arrangement based on the event dimension is carried out on an independent scene of the network shooting range. The method is simple and easy to use, realizes the disassembly and configuration of the visual scene through data cleaning, and can reduce the cost of customized display; in addition, it integrates scene layout with human-computer interaction and realizes playback display based on the event dimension, so the arrangement is clear, the scene content is expressed accurately, and the human-computer interaction is natural.
The embodiments of the present application further provide another computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any visual display method of a network shooting range provided by any embodiment or optional implementation of the present application.
The computer-readable storage medium provided by the embodiments of the present application includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards or optical cards. That is, a readable storage medium includes any medium that can store or transfer information in a form readable by a device (e.g., a computer).
By applying the embodiment of the application, visual display arrangement based on the event dimension is carried out on an independent scene of the network shooting range. The method is simple and easy to use, realizes the disassembly and configuration of the visual scene through data cleaning, and can reduce the cost of customized display; in addition, it integrates scene layout with human-computer interaction and realizes playback display based on the event dimension, so the arrangement is clear, the scene content is expressed accurately, and the human-computer interaction is natural.
The above description is only a preferred embodiment of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (8)

1. A visual display method of a network shooting range is characterized by comprising the following steps:
acquiring information of a subject, an object and an attack behavior in a target scene, wherein the target scene is an attack and defense scene in a network shooting range, the subject is an attack team in the target scene, the object is an attacked virtual machine in the target scene, and the attack behavior is an interaction behavior between the subject and the object;
determining a visual description of the subject, the object, and the attack behavior;
based on different event dimensions, backtracking the event process of each target event in the target scene by adopting a preset backtracking mode so as to display the backtracking results of the subject, the object and the attack behavior by adopting the visual description mode;
the event process of each target event in the target scene is backtracked in a preset backtracking mode, and the backtracking method comprises the following steps: backtracking the event process of each target event in the target scene by adopting a playback time period defined in a preset backtracking mode, an acquisition sequence and an acquisition time of different information in the playback process and the frequency of event playback;
the different information includes: basic information of a main body in the target event comprises role description information of each attacker in an attack team and/or account information of each attacker; behavior characteristics of an attack behavior in the target event; state change information of an object in the target event.
2. The method of claim 1, wherein the information of the subject includes at least one of:
the method comprises the following steps of obtaining role description information of each attacker, account state behavior information of each attacker and video information of each attacker in the target scene in the attack team.
3. The method of claim 1, wherein the information of the object comprises:
basic information and/or state information of the attacked virtual machine;
the basic information comprises information of a mounted file system and/or an exploit program which are/is arranged in the virtual machine; the state information is an invaded state, an access state, a defense state or an attacked state.
4. The method of claim 1, wherein the information of the attack behavior is derived from network traffic data in the target scene, which can prove an interaction relationship between the subject and the object.
5. The method of claim 1, wherein the obtaining information of the subject, the object and the attack behavior in the target scene comprises:
and based on the data resources related to the target scene, performing field extraction and/or video interception on the data resources to determine the subject, the object and the attack behavior in the target scene and the corresponding information, and associating the subject, the object and the attack behavior with the corresponding information.
6. A visual display device of a network range is characterized by comprising:
the data management unit is used for acquiring information of a subject, an object and an attack behavior in a target scene, wherein the target scene is an attack and defense scene in a network shooting range, the subject is an attack team in the target scene, the object is an attacked virtual machine in the target scene, and the attack behavior is an interaction behavior between the subject and the object;
an object management unit for determining a visual description of the subject, the object and the attack behavior;
the event arrangement unit is used for backtracking the event process of each target event in the target scene by adopting a preset backtracking mode based on different event dimensions so as to display the backtracking results of the subject, the object and the attack behavior by adopting the visual description mode;
when the event scheduling unit backtracks the event process of each target event in the target scene in a preset backtracking manner, the event scheduling unit is specifically configured to: backtracking the event process of each target event in the target scene by adopting a playback time period defined in a preset backtracking mode, an acquisition sequence and an acquisition time of different information in the playback process and the frequency of event playback;
the different information includes: basic information of a main body in the target event comprises role description information of each attacker in an attack team and/or account information of each attacker; behavioral characteristics of an attack behavior in the target event; state change information of an object in the target event.
7. An electronic device, comprising: a processor, a memory;
the memory for storing a computer program;
the processor is used for executing the visual display method of the network range according to any one of claims 1-5 by calling the computer program.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out a method for visual presentation of a network range according to any one of claims 1 to 5.
CN202210277596.4A 2022-03-21 2022-03-21 Visual display method, device and equipment for network shooting range and readable storage medium Active CN114666239B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210277596.4A CN114666239B (en) 2022-03-21 2022-03-21 Visual display method, device and equipment for network shooting range and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210277596.4A CN114666239B (en) 2022-03-21 2022-03-21 Visual display method, device and equipment for network shooting range and readable storage medium

Publications (2)

Publication Number Publication Date
CN114666239A CN114666239A (en) 2022-06-24
CN114666239B true CN114666239B (en) 2023-01-20

Family

ID=82032336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210277596.4A Active CN114666239B (en) 2022-03-21 2022-03-21 Visual display method, device and equipment for network shooting range and readable storage medium

Country Status (1)

Country Link
CN (1) CN114666239B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115225347B (en) * 2022-06-30 2023-12-22 烽台科技(北京)有限公司 Method and device for monitoring target range resources
CN115037562B (en) * 2022-08-11 2022-11-15 北京网藤科技有限公司 Industrial control network target range construction method and system for safety verification

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110098951A (en) * 2019-03-04 2019-08-06 西安电子科技大学 A kind of network-combination yarn virtual emulation based on virtualization technology and safety evaluation method and system
CN111212064A (en) * 2019-12-31 2020-05-29 北京安码科技有限公司 Method, system, equipment and storage medium for simulating attack behavior of shooting range
CN111935192A (en) * 2020-10-12 2020-11-13 腾讯科技(深圳)有限公司 Network attack event tracing processing method, device, equipment and storage medium
CN112839039A (en) * 2021-01-05 2021-05-25 四川大学 Interactive automatic restoration method for network threat event attack scene
CN113660221A (en) * 2021-07-28 2021-11-16 上海纽盾科技股份有限公司 Joint anti-attack method, device and system combined with game

Also Published As

Publication number Publication date
CN114666239A (en) 2022-06-24


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100094 103, building 6, yard 9, FengHao East Road, Haidian District, Beijing

Patentee after: Yongxin Zhicheng Technology Group Co.,Ltd.

Address before: 100094 103, building 6, yard 9, FengHao East Road, Haidian District, Beijing

Patentee before: BEIJING YONGXIN ZHICHENG TECHNOLOGY CO.,LTD.

CP01 Change in the name or title of a patent holder