CN111580648A - Simulation drilling method and device based on virtual reality - Google Patents

Simulation drilling method and device based on virtual reality

Info

Publication number
CN111580648A
Authority
CN
China
Prior art keywords
scene
virtual
interactive
state
sound information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010334734.9A
Other languages
Chinese (zh)
Inventor
李心刚
顾登明
张亚平
郭凯
李锴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China General Nuclear Power Corp
CGN Power Co Ltd
Daya Bay Nuclear Power Operations and Management Co Ltd
Lingdong Nuclear Power Co Ltd
Guangdong Nuclear Power Joint Venture Co Ltd
Lingao Nuclear Power Co Ltd
Original Assignee
China General Nuclear Power Corp
CGN Power Co Ltd
Daya Bay Nuclear Power Operations and Management Co Ltd
Lingdong Nuclear Power Co Ltd
Guangdong Nuclear Power Joint Venture Co Ltd
Lingao Nuclear Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China General Nuclear Power Corp, CGN Power Co Ltd, Daya Bay Nuclear Power Operations and Management Co Ltd, Lingdong Nuclear Power Co Ltd, Guangdong Nuclear Power Joint Venture Co Ltd, and Lingao Nuclear Power Co Ltd
Priority to CN202010334734.9A
Publication of CN111580648A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/06 Energy or water supply
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 Simulators for teaching or training purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Water Supply & Treatment (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Public Health (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to the technical field of nuclear power station informatization construction and provides a simulation drilling method and device based on virtual reality. The method comprises the following steps: constructing a virtual interactive scene, where the virtual interactive scene comprises interactable objects and scene sound information; determining a state change list for each demonstration interactive object based on acquired operation flow information, where a demonstration interactive object is an interactable object whose state must change according to the operation flow information; updating the virtual interactive scene based on the state change list for each demonstration interactive object and the scene sound information generated during the state changes; and receiving operation data fed back by a user and updating the virtual interactive scene based on the operation data. Because scene sound information is added when the virtual interactive scene is constructed, the virtual interactive scene is closer to the real environment and the effect of the simulated drill is enhanced.

Description

Simulation drilling method and device based on virtual reality
Technical Field
The application belongs to the technical field of nuclear power station informatization construction, and particularly relates to a simulation drilling method and device based on virtual reality.
Background
With the continuous development of nuclear technology, the demand for installation and maintenance of nuclear reactor equipment keeps growing. The operating environment in which this installation and maintenance is performed is exposed to nuclear irradiation over long periods and is highly activated, making it unsuitable for workers to remain in for long. For the personal safety of workers, installation and maintenance work must therefore be completed within a specified time.
However, because the working environment is complex, workers entering it are easily distracted by factors such as alarm sounds, which slows their work. Simulation training is therefore necessary; yet general simulation training only gives workers a rough understanding of the operation process and cannot reproduce a real field environment, so its effect does not meet requirements.
Disclosure of Invention
The embodiments of the application provide a simulation drilling method and device based on virtual reality that can simulate the real working environment for installing and maintaining related equipment so that workers can drill the installation and maintenance workflow. This solves the problems that the real working environment cannot be simulated and the training effect does not meet requirements, improves the authenticity of the simulated field working environment, enhances the demonstration effect, and improves installation and replacement personnel's familiarity with the field working environment and proficiency in the installation and replacement workflow.
In a first aspect, an embodiment of the present application provides a simulation drilling method based on virtual reality, including: constructing a virtual interactive scene; the virtual interaction scene comprises: interactive objects and scene sound information; determining a state change list about each demonstration interactive object based on the acquired operation flow information; the demonstration interactive object is the interactive object which needs to be subjected to state change and is determined according to the operation flow information; updating the virtual interactive scene based on a state change list about each of the demonstration interactive objects and the scene sound information generated in the state change process so as to demonstrate the operation flow; and receiving operation data fed back by a user, and updating the virtual interaction scene based on the operation data.
In a possible implementation manner of the first aspect, the updating the virtual interactive scene based on the operation data includes: randomly selecting at least one piece of target accident information from a plurality of pieces of preset accident information; and updating the virtual interactive scene based on the target accident information and the operation data.
For example, the target accident information may include time cue sound information, i.e., cue sounds about elapsed time that may occur in the target scene at a certain moment or after a certain period; the time cue sound information is played while the virtual interactive scene is updated based on the operation data.
It should be understood that, during the demonstration of the operation flow, at least one piece of target accident information may also be randomly selected from the plurality of pieces of preset accident information, and the virtual interactive scene updated based on the target accident information and the operation data.
In a second aspect, an embodiment of the present application provides a simulation drilling device based on virtual reality, including:
the virtual interactive scene construction module is used for constructing a virtual interactive scene; the virtual interaction scene comprises: interactive objects and scene sound information; the state change list determining module is used for determining a state change list of each demonstration interactive object based on the acquired operation flow information; the demonstration interactive object is the interactive object which needs to be subjected to state change and is determined according to the operation flow information; the operation flow demonstration module is used for updating the virtual interaction scene based on the state change list of each demonstration interaction object and the scene sound information generated in the state change process so as to demonstrate the operation flow; and the user operation module is used for receiving operation data fed back by a user and updating the virtual interaction scene based on the operation data.
In a third aspect, an embodiment of the present application provides a terminal device, including: a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the method of any of the above first aspects when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, including: the computer readable storage medium stores a computer program which, when executed by a processor, implements the method of any of the first aspects described above.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to execute the method of any one of the above first aspects.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Compared with the prior art, the embodiment of the application has the advantages that:
compared with the prior art, the simulation drilling method based on the virtual reality is characterized in that scene sound information is added when the virtual interaction scene is constructed, so that the virtual interaction scene is closer to a complex real environment, the reality of simulation drilling is improved, the simulation training effect is enhanced, and the familiarity of workers to the real environment and the proficiency of the workers to the workflow are improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a flowchart of an implementation of a simulation drilling method according to a first embodiment of the present application;
FIG. 2 is a schematic diagram of an application scenario provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a three-dimensional scan of a room R140 in an application scenario provided by an embodiment of the present application;
FIG. 4 is a schematic view of an environment model of an R140 room in an application scenario provided by an embodiment of the present application;
FIG. 5 is a schematic view of a part model in an application scenario provided by an embodiment of the present application;
fig. 6 is a flowchart of an implementation of the simulation drill method S101 according to the second embodiment of the present application;
fig. 7 is a flowchart of an implementation of the simulation drilling method S1013 provided in the third embodiment of the present application;
fig. 8 is a flowchart of an implementation of the simulation drilling method S104 according to the fourth embodiment of the present application;
fig. 9 is a flowchart of an implementation of the simulation drill method S103 according to the fifth embodiment of the present application;
fig. 10 is a flowchart of an implementation of the simulation drilling method S104 according to the sixth embodiment of the present application;
fig. 11 is a schematic structural diagram of a simulation exercise device according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting the [described condition or event]", or "in response to detecting the [described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
In the embodiments of the application, the main execution body of the flow is a terminal device. Terminal devices include, but are not limited to, servers, computers, smartphones, tablet computers, and other devices capable of executing the simulation drilling method provided by the application. Each of these device types can be connected to a virtual reality (VR) device and send the output data of the virtual interactive scene to the VR device, so that a user wearing the VR device can perform simulation drills. Preferably, the terminal device is a virtual reality interaction device, which can simulate a real environment in a virtual interactive scene. Fig. 1 shows an implementation flowchart of the method provided in the first embodiment of the application, detailed as follows:
in S101, a virtual interactive scene is constructed.
In this embodiment, the virtual interactive scene includes: interactable objects and scene sound information. The interactable object refers to an object whose state can be changed based on a user operation in the virtual interactive scene, and the state may include object properties such as size, shape, and coordinates of the interactable object. For example, in the virtual interactive scenario, a box that can simulate a pushing process (i.e. the state of the coordinates of the interactive object changes) according to the operation data fed back by the user is the interactive object. The scene sound information refers to a set of sound information that may appear in a real scene, and for example, if an alarm exists in the real scene, the scene sound information includes alarm sound information when the alarm is activated, that is, the alarm sound information is scene sound information associated with an activation state of the alarm (an interactive object).
It should be understood that although the virtual interactive scene includes the scene sound information, it does not mean that the scene sound information is always played in the virtual interactive scene. Optionally, in the virtual interaction scene, a certain triggering means is required to play the scene sound information, for example, the alarm sound information is played when the state of an interactive object, i.e., an alarm in the virtual interaction scene, is changed to a starting state.
In this embodiment, the virtual interactive scene may be constructed based on three-dimensional modeling software, preferably 3DMAX. Because the purpose of constructing the virtual interactive scene is to simulate a real target scene, each real object in the target scene (or a model of it) can be measured to obtain the state-related data of each entity object (size, shape, material, position coordinates in the target scene, and so on). The acquired data is imported into 3DMAX, an interactable object corresponding to each entity object is constructed from the data of each model, all sound information that each entity object may generate in the target scene is imported into 3DMAX to generate the scene sound information, and the virtual interactive scene is constructed from the interactable objects and the scene sound information.
In a possible implementation, when collecting all the sound information that each entity object may generate, the state transition corresponding to each type of sound information may be recorded and a sound state lookup table generated. For example, if the entity object is a button, one type of sound information is associated with its state changing from to-be-pressed to pressed. In the virtual interactive scene, the interactable object corresponding to that entity object and the sound information produced while its state changes from to-be-pressed to pressed are stored in the sound state lookup table; then, during demonstration and simulation training, the corresponding sound information can be retrieved from the sound state lookup table whenever the state of an interactable object changes.
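A minimal sketch of such a sound state lookup table follows, assuming the table is keyed by an object's state transition; all identifiers and file paths are illustrative, not taken from the application:

```python
# Sketch of the sound state lookup table described above.
# Keys are (object_id, old_state, new_state); values are sound clips.
# All names here are illustrative assumptions.

sound_state_lookup = {
    ("button_01", "to_be_pressed", "pressed"): "sounds/button_click.wav",
    ("alarm_01", "idle", "activated"): "sounds/alarm_siren.wav",
}

def sound_for_transition(object_id, old_state, new_state):
    """Return the sound clip associated with a state change, if any."""
    return sound_state_lookup.get((object_id, old_state, new_state))

# Usage: look up the clip to play when the button is pressed.
clip = sound_for_transition("button_01", "to_be_pressed", "pressed")
print(clip)  # sounds/button_click.wav
```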
In S102, a state change list with respect to each presentation interaction object is determined based on the acquired operation flow information.
In this embodiment, a demonstration interactive object is an interactable object whose state must change according to the operation flow information. The operation flow information comprises the operation content of each step of the standard operation flow in the target scene and the operation sequence of the steps. Optionally, based on virtual reality editing software, key frames containing the state change information of each demonstration interactive object are generated in the virtual interactive scene, and the operation flow information is generated from all key frames. Specifically, after the virtual interactive scene is constructed, a plurality of key frames about the operation flow are obtained by editing the state of each demonstration interactive object in each key frame and configuring associated scene sound information for those states; all the obtained key frames are identified as the operation flow information.
Optionally, the state change list of each demonstration interactive object is determined based on the operation flow information. Specifically, the state change data of the demonstration interactive objects associated with each step is determined from that step's operation content. For example, if the operation content is connecting part A and part B through connecting piece C, the state change data of part A is being connected with connecting piece C, the state change data of part B is being connected with connecting piece C, and the state change data of connecting piece C is being connected with both part A and part B. The state change data of the demonstration interactive objects is then stored in the state change list in the operation sequence of the steps.
It should be understood that the state change data with the first order in the state change list is the initialization state data, and the initial states of all the interactable objects in the virtual interactive scene are determined according to the initialization state data.
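As an illustration of the list construction above, the following sketch derives an ordered state change list from the part A / part B / connecting piece C example; the tuple layout and field names are assumptions made for illustration:

```python
# Illustrative sketch: deriving a state change list from ordered operation
# steps. The first entry (order 0) is the initialization state data.

operation_flow = [
    # (step order, demonstration object, state change data)
    (0, "all", {"action": "initialize"}),
    (1, "part_A", {"connected_to": "connector_C"}),
    (1, "part_B", {"connected_to": "connector_C"}),
    (1, "connector_C", {"connected_to": ["part_A", "part_B"]}),
]

def build_state_change_list(flow):
    """Store state change data in the list ordered by operation step."""
    return sorted(flow, key=lambda entry: entry[0])  # stable: keeps step order

state_change_list = build_state_change_list(operation_flow)
```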
In S103, the virtual interactive scene is updated based on the state change list regarding each of the presentation interactive objects and the scene sound information generated during the state change.
In this embodiment, the virtual interactive scene is updated according to the state change list of each of the presentation interactive objects and the scene sound information generated during the state change process, so as to present the operation flow. Specifically, according to the order of each state change information in the state change list, the changed state of each demonstration interactive object is determined in sequence based on the state change information; extracting changed sound information associated with the changed state from the scene sound information according to the changed state of the demonstration interactive object, wherein the demonstration interactive object is exemplarily an alarm, the changed state of the demonstration interactive object is an activated state, and the scene sound information stores alarm sound information associated with the activated state of the alarm, namely the alarm sound information is identified as changed sound information; and in the virtual interactive scene, sequentially changing the state of the demonstration interactive object into the changed state, and playing the changed sound information related to the changed state of the demonstration interactive object.
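The demonstration loop of S103 might look like the following sketch, where states are plain strings and the print calls stand in for an engine's state update and audio playback; all names are assumptions:

```python
# Hedged sketch of S103: apply each change in list order and play any
# scene sound associated with the transition.

scene = {"alarm_01": "idle", "part_A": "detached"}
state_change_list = [("alarm_01", "activated"), ("part_A", "installed")]
transition_sounds = {
    ("alarm_01", "idle", "activated"): "sounds/alarm_siren.wav",
}

for object_id, new_state in state_change_list:
    old_state = scene[object_id]
    scene[object_id] = new_state                      # change the object's state
    clip = transition_sounds.get((object_id, old_state, new_state))
    if clip:
        print(f"playing {clip}")                      # stand-in for audio playback
```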
In S104, operation data fed back by the user is received, and the virtual interaction scene is updated based on the operation data.
In this embodiment, the operation data fed back by the user is acquired via a controller, which may illustratively be a handle or a glove. The controller may include motion sensors, such as a gyroscope, an acceleration sensor, and an angular velocity sensor, which acquire the motion trajectory and motion velocity of the part of the body gripping the controller; it may further include a pressure sensor, which acquires the force with which the user grips the controller. The user's operation data is generated from data such as the motion trajectory, motion velocity, and grip force.
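A minimal sketch of operation data assembled from such sensor readings follows; the structure and field names are assumptions for illustration, not the application's format:

```python
# Sketch of operation data built from controller sensor readings.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class OperationData:
    trajectory: List[Tuple[float, float, float]] = field(default_factory=list)  # gripping-hand positions
    velocity: Tuple[float, float, float] = (0.0, 0.0, 0.0)                      # from motion sensors
    grip_force: float = 0.0                                                     # from the pressure sensor

sample = OperationData(trajectory=[(0.0, 1.2, 0.5), (0.1, 1.2, 0.6)],
                       velocity=(0.05, 0.0, 0.1),
                       grip_force=3.2)
```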
In this embodiment, the manner of updating the virtual interaction scene based on the operation data may specifically be: and importing the operation data into the virtual interactive scene to change the state of an interactive object associated with the operation data in the virtual interactive scene, and playing scene sound information generated in the state change process of the interactive object associated with the operation data.
Optionally, as an alternative to S102, demonstration operation data fed back by a skilled worker is received, and the virtual interactive scene is updated based on that demonstration operation data so as to demonstrate the operation flow. The demonstration operation data is the operation data generated while the skilled worker performs the operation flow in the virtual interactive scene, as in S104. A skilled worker is a worker whose familiarity with the target scene and proficiency in the operation flow reach a preset standard. The specific steps of receiving the demonstration operation data and updating the virtual interactive scene based on it are as described for S104 of this embodiment and are not repeated here.
It should be understood that, to simulate the target scene more realistically, association relations corresponding to the target scene may exist between different interactable objects in the virtual interactive scene. Illustratively, the target scene contains a switch device and a lamp: the lamp is bright when the switch device is on and dark when it is off. In the virtual interactive scene simulating this target scene, there are a switch model corresponding to the switch device and a lamp model corresponding to the lamp; during user operation (i.e., S104 above), if the user changes the state of the switch model from on to off, the lamp model changes from bright to dark. Therefore, when updating the virtual interactive scene according to the operation data, the target interactable object the user operates on can be determined from the operation data and its state changed accordingly; then, based on the changed state and the association relations, it is judged whether any associated interactable object's state changes along with that of the target object. If so, the state of the associated interactable object is determined from the association relation and the virtual interactive scene is updated.
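The switch/lamp association described above can be sketched as a simple propagation table; the rule format is an assumption made for illustration:

```python
# Illustrative sketch: when a target object's state changes, propagate the
# change to associated objects through an association table.

scene = {"switch_01": "on", "lamp_01": "bright"}

# (object, new state) -> list of (associated object, implied state)
associations = {
    ("switch_01", "off"): [("lamp_01", "dark")],
    ("switch_01", "on"):  [("lamp_01", "bright")],
}

def apply_operation(scene, target, new_state):
    scene[target] = new_state
    for assoc_obj, assoc_state in associations.get((target, new_state), []):
        scene[assoc_obj] = assoc_state    # update the associated interactable object

apply_operation(scene, "switch_01", "off")
print(scene)  # {'switch_01': 'off', 'lamp_01': 'dark'}
```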
In the embodiment, scene sound information is added when the virtual interactive scene is constructed, so that the virtual interactive scene is closer to a complex real environment, the reality of simulation exercise is improved, the simulation training effect is enhanced, and the familiarity of workers to the real environment and the proficiency of the workers to the workflow are improved.
Fig. 2 shows a schematic diagram of an application scenario provided in an embodiment of the present application, which is detailed as follows:
referring to fig. 2, fig. 2 is a virtual demonstration flow of the installation and replacement process of the out-of-pile neutron dosimeter. The out-of-reactor neutron dosimeter measures the fast neutron fluence sustained by a reactor pressure vessel by adopting a multi-foil activation method, and the installation and replacement work of the out-of-reactor neutron dosimeter is mainly performed in R140 and R440 rooms of a reactor factory, and equipment and components in the two rooms are subjected to neutron and gamma ray irradiation for a long time and are highly activated, so that the environmental dosage level in the rooms is very high even during shutdown. Due to the requirement of personnel dosage control, the installation and replacement work of the neutron dosimeter outside the reactor is generally required to be completed within 30 minutes. In an application scenario of virtual demonstration of installation and replacement of a neutron dosimeter outside a reactor, based on the simulation exercise method provided by this embodiment, three-dimensional laser scanning is performed on an installation and replacement space (i.e., R140 and R440 rooms), referring to fig. 3, fig. 3 is a schematic three-dimensional scanning diagram of the R140 room, preferably, before performing three-dimensional laser scanning on the R140 room, a black-and-white checkerboard diagram with four grids is pasted on a wall of the R140 room as a calibration object (for example, a pasting diagram with reference number 579 at the lower right corner of a picture) for determining the relative position of each solid object in the room, so as to more accurately model the room in the following process. And establishing an environment model for installing and replacing a working scene according to the scanning result, referring to fig. 4, wherein fig. 4 is an environment model schematic diagram of an R140 room, and it can be seen that the precision of the environment model established by three-dimensional point cloud data obtained by three-dimensional laser scanning reaches the millimeter level. Modeling is performed on parts required by the installation and replacement work of the out-of-pile neutron dosimeter, referring to fig. 5, fig. 5 is a schematic view of a part model in an application scene of this embodiment, a three-dimensional part model related to the part is established according to information such as the size of the part in reality, and preferably, an actual photo texture of the part is attached to the part model, so that the part model is more real. And importing the environment model into a new virtual scene, importing the part model into the virtual scene, finally recording scene sound information possibly emitted under a target scene, importing the scene sound information into the virtual scene, and generating the virtual interactive scene. In order to realize simulation exercise, a virtual human body model is required to be constructed in a virtual interactive scene, and operation flow data is manufactured according to the environment model and the part model by taking the virtual human body model as a first visual angle, so that a user can observe the installation and replacement process of a learning demonstration and control the virtual human body model to exercise. 
In addition, in order to meet the interaction requirement of the user and the virtual interaction scene, interactive tools such as a display and a controller which are matched with the virtual interaction scene need to be manufactured, the display is used for enabling the user to observe the installation and replacement process demonstrated in the virtual interaction scene, and the controller is used for enabling the user to practice the installation and replacement process in the virtual interaction scene. Illustratively, the display may be a helmet and the controller may be a handle, the display and controller preferably being the virtual reality device hardware of the HTC VIVE.
Fig. 6 shows a flowchart of an implementation of the method provided in the second embodiment of the present application. Referring to fig. 6, with respect to the embodiment shown in fig. 1, the simulation drill method S101 provided in this embodiment includes S1011 to S1016, which are detailed as follows:
further, the constructing the virtual interactive scene comprises:
in this embodiment, the scene sound information includes environment sound information, environment interaction sound information, and part sound information.
In S1011, an environment model related to the target scene is constructed based on three-dimensional point cloud environment data scanned by the three-dimensional laser scanning module in the target scene.
In this embodiment, a three-dimensional laser scanning module is used for scanning in a target scene to obtain three-dimensional point cloud environment data, and an environment model related to the target scene is constructed. Illustratively, calibration objects (preferably black and white checkerboard images as shown in fig. 3) are attached to the target scene as uniformly as possible, so that each scanning object of the target scene is accurately positioned during the scanning process of the three-dimensional laser scanner, and the reality of simulating the target scene is improved, so that the constructed virtual interactive scene is closer to the real target scene.
In S1012, the environmental sound information of the target scene is collected, and the associated environmental interaction sound information is configured for each changeable state of the environmental model in the target scene.
In this embodiment, the environmental sound of the target scene is collected, and the associated environmental interaction sound is configured for each changeable state of the environmental model in the target scene, for example, the target scene is an installation and replacement working scene of a neutron dosimeter outside a reactor, specifically, an R140 or R440 room of a reactor building is provided, the environmental sound is collected in the target scene, and the environmental sound may be sound generated by operation of other machines in the reactor building; in the target scenario, the environment interaction sound is configured for the environment model, and the environment interaction sound may be a footstep sound when a floor in the environment model is in a treaded state (i.e. one of the above changeable states) or a climbing sound when a ladder in the environment model is in a climbing state (i.e. one of the above changeable states).
In S1013, all existing parts included in the target scene are modeled to obtain a part model corresponding to each existing part, and associated part sound information is configured for each changeable state of the existing part in the target scene.
In this embodiment, all existing parts contained in the target scene are modeled to obtain a part model corresponding to each existing part; specifically, equal-scale modeling is performed based on the real models of all existing parts contained in the target scene. It should be understood that the precision of the modeling software may not meet the modeling requirement for some small-sized parts. In this case, all models in the virtual interactive scene may be scaled up in equal proportion until the modeling precision meets the requirement for the smallest existing part in the target scene. Illustratively, the target scene is the installation and replacement working scene of the out-of-reactor neutron dosimeter, in which a bead chain part with a diameter of about 0.003 m exists; if the modeling precision is lower than 0.01 m, the bead chain part loses its intended physical effect and causes model confusion. All models in the virtual interactive scene can therefore be uniformly enlarged 100 times, i.e., in the virtual interactive scene the size of the bead chain part becomes 0.003 m × 100 = 0.3 m, so no part detail is lost. Because all existing parts in the virtual interactive scene are enlarged in the same proportion, the size relations among different parts are unchanged, i.e., the change is invisible to the user. This solves the problem that the bead chain part is too small to meet the modeling requirement.
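The arithmetic of the uniform scale-up can be checked with a short sketch using the bead chain numbers from the text; variable names are illustrative:

```python
# Worked example of the uniform scale-up described above.

min_part_size = 0.003   # bead chain diameter, metres
precision = 0.01        # modeling software precision, metres
scale = 100             # uniform factor applied to every model in the scene

scaled = min_part_size * scale
print(scaled)               # 0.3 m, as in the example
assert scaled > precision   # part detail is no longer lost

# Every model is enlarged by the same factor, so ratios between part sizes
# are unchanged and the change is invisible to the user.
```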
In this embodiment, associated part sound information is configured for each changeable state of the existing parts in the target scene. For example, the target scene is the installation and replacement working scene of the out-of-reactor neutron dosimeter, which emits an alarm sound under certain conditions; in the replacement working scene, for instance, the out-of-reactor neutron dosimeter emits an alarm sound when in a detached state (i.e., one of the changeable states).
In S1014, a virtual mannequin is configured for the user based on the first person perspective.
In this embodiment, to enable simulation drills, a virtual human body model needs to be built in the virtual interactive scene, and to improve the user's immersion it is configured from the first-person perspective, so that the user can observe the demonstrated workflow and control the virtual human body model to drill. Illustratively, the virtual human body model typically has hands (the primary objects controlled by user operations) and feet (i.e., a standing point beneath the camera used to observe the virtual interactive scene, which allows the user to move the camera later).
In S1015, the environment model, the part model, and the virtual human body model are identified as the interactable objects in the virtual interaction scene.
In this embodiment, in order to improve the reality, in the virtual interactive scene, the environment model, the part model, and the virtual human body model may all change states based on the interaction of the user with the virtual interactive scene, so the environment model, the part model, and the virtual human body model are identified as the interactable objects in the virtual interactive scene.
In S1016, a virtual interactive scene is constructed based on the interactable object, the environmental sound information, the environmental interaction sound information, and the part sound information.
In this embodiment, the interactable objects, the environmental sound information, the environment interaction sound information, and the part sound information are imported into an empty newly created virtual scene to construct the virtual interactive scene. It should be understood that the environmental sound information is preferably played in a loop in the virtual interactive scene to increase its realism, while the environment interaction sound information and the part sound information are played only under their corresponding trigger conditions, likewise to increase realism. Illustratively, the target scene is the installation and replacement working scene of the out-of-reactor neutron dosimeter, which emits an alarm sound when in an alarm state; therefore, when the corresponding part model in the virtual interactive scene is in the alarm state, the part sound information associated with that model's alarm state is played in the virtual interactive scene, and this part sound information is the same as or similar to the alarm sound the real out-of-reactor neutron dosimeter emits in the alarm state.
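A hedged sketch of this playback policy, with a looping environmental sound and trigger-conditioned interaction and part sounds; the function and file names are stand-ins, not the application's API:

```python
# Sketch of S1016's audio policy: ambient sound loops, the rest is triggered.

ambient = "sounds/reactor_building_hum.wav"     # looped environmental sound
triggered = {
    ("floor_01", "treaded"): "sounds/footstep.wav",            # environment interaction sound
    ("dosimeter_01", "alarm"): "sounds/dosimeter_alarm.wav",   # part sound
}

def start_scene_audio():
    print(f"looping {ambient}")                 # stand-in for a looping audio channel

def on_state_change(object_id, new_state):
    clip = triggered.get((object_id, new_state))
    if clip:
        print(f"playing {clip}")                # one-shot playback on trigger

start_scene_audio()
on_state_change("dosimeter_01", "alarm")
```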
In this embodiment, the virtual interactive scene is constructed by the method provided by this embodiment, specifically, the precision of models that need to be constructed for simulating the target scene is improved by the three-dimensional laser scanning technology, scene sound information is configured for these models, and the virtual interactive scene is constructed based on these models and the scene sound information, so that the reality of the virtual interactive scene can be improved, and the simulation effect of the virtual interactive scene on the target scene is enhanced, so that the virtual interactive scene is closer to the target scene.
Fig. 7 shows a flowchart of an implementation of the simulation drilling method according to the third embodiment of the present application. Referring to fig. 7, in contrast to the embodiment shown in fig. 6, the simulation drill method S1013 provided in this embodiment includes S701, which is specifically detailed as follows:
further, the modeling of all existing parts included in the target scene to obtain a part model corresponding to each existing part includes:
in this embodiment, the existing part includes a connector; the connecting piece is used for connecting any two other existing parts except the connecting piece.
In S701, if the connector and the other existing component are connected, a material attribute of an articulation joint for connecting the connector and the other existing component is set as a rigid body.
In this embodiment, if the connecting member is in a connected state with another existing part, there is a portion for connecting with another existing part, that is, an articulation joint, on the connecting member, and a material attribute of the articulation joint is set to be a rigid body, so as to prevent model confusion in a virtual interactive scene.
In this embodiment, setting the material attribute of the articulation joint as a rigid body prevents clipping (the model penetration phenomenon) when the connecting piece is connected with other existing parts, and increases the realism of the virtual interactive scene.
Fig. 8 shows a flowchart of an implementation of the simulation drilling method according to the fourth embodiment of the present application. Referring to fig. 8, with respect to the embodiment shown in fig. 6, the simulation drill method S104 provided in this embodiment includes S801 to S803, which are detailed as follows:
further, the updating the virtual interaction scenario based on the operation data includes:
in S801, the state of each target interaction object is determined according to the operation data.
In this embodiment, the target interactive object is the interactive object corresponding to the operation data;
in this embodiment, based on the operation data, the state of each target interaction object is determined, that is, the state of each target interaction object that is changed after being affected by the user operation after the user operation is determined, which reflects the latest state of each target interaction object in the virtual interaction scene after the user operation. For example, a user controls the virtual human body model to move the part a to a specific location in a virtual interaction scene through the controller, and then determines that the target interaction object is the virtual human body model and the part a based on the obtained operation data, and then the state of the target interaction object is: the virtual human body model grabs a part A, and the part A moves to the specific location under the action of the virtual human body model (the central coordinate of the part A is the central coordinate of the specific location). It should be understood that the state of the target interaction object at this time also includes the orientation direction of the part a, the material property, and whether the part is in a related state with other parts, which are not described herein again.
Preferably, the simulation practicing method S801 provided in this embodiment includes S8011 to S8012. Further, the determining the state of each target interaction object according to the operation data includes:
in S8011, the state of the dynamic interaction object is determined according to the operation data.
In this embodiment, the dynamic interaction object includes the virtual human body model. The state of the dynamic interaction object is determined according to the operation data, for example, a user moves the part a to a specific place in a virtual interaction scene through a controller, firstly, a virtual human body model in the virtual interaction scene needs to be controlled, and then, the part a is influenced according to the state of the virtual human body model. Therefore, when the operation data is acquired, the state of the virtual human body model (i.e., the dynamic interaction object) is determined according to the operation data, specifically, the state of the hand of the virtual human body model is determined as grasping the part a, then the state of the foot of the virtual human body model is determined as a position displaced to the vicinity of the specific position, and finally the state of the hand is determined as displacing the part a to the specific position and releasing the part a.
In S8012, the state of the static interactive object in the virtual interactive scene is determined according to the state of the dynamic interactive object.
In this embodiment, the static interactive object includes the environment model and/or the part model corresponding to the state of the dynamic interactive object. According to the state of the dynamic interaction object, determining the state of a static interaction object in the virtual interaction scene, illustratively, a user controls the virtual human body model to move a part A to a specific place in the virtual interaction scene through a controller, and then the static interaction object is the part A; if the virtual human body model is in a state that the part A is displaced to the specific place and is released, the part A is determined to be in a state of being located at the specific place and not in a state of being grabbed by the virtual human body model.
In this embodiment, the state of the static interactive object is determined by the state of the dynamic interactive object, so that the user can control the virtual human body model through the controller, and the states of other interactive objects in the virtual interactive scene are affected according to the action of the virtual human body model in the virtual interactive scene, so that the whole virtual interactive process (i.e. updating the virtual interactive scene based on the operation data) is more realistic.
In S802, target sound information associated with the target interaction object is extracted from the scene sound information.
In this embodiment, the target sound information includes the environment interaction sound information and/or the part sound information associated with the state of the target interaction object, and it is extracted from the scene sound information. Illustratively, the user controls the virtual human body model through the controller to install part A at a specific position in the virtual interactive scene: the virtual human body model first grabs part A, then moves onto iron plate D (preferably, the user can transfer the virtual human body model onto iron plate D through a transfer module on the controller), the specific position being above iron plate D, and finally installs part A at the specific position and releases it. In this case, the target interaction objects are the virtual human body model, part A, and iron plate D; the target sound information includes a footstep sound (i.e., the above environment interaction sound) associated with the virtual human body model moving onto iron plate D, that is, with iron plate D being in a treaded state, and also includes a fitting sound associated with part A being mounted at the specific position (i.e., the above part sound; the onomatopoeia may be a "click").
In S803, the virtual interactive scene is updated according to the state of each target interaction object and the target sound information.
In this embodiment, the virtual interactive scene is updated based on the state of each target interactive object and the target sound information, and illustratively, the state of each target interactive object in the virtual interactive scene is changed according to the determined state of each target interactive object; and playing the target sound information in the state change process of the target interactive object in the virtual interactive scene according to the target sound information associated with the state of the target interactive object.
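Steps S801 to S803 can be sketched together as follows: the dynamic object (the virtual human body model) is resolved first, static object states follow from it, and the associated target sound information is then played; all names are illustrative assumptions:

```python
# Minimal sketch of S801-S803 as one update routine.

def update_from_operation(scene, sounds, operation):
    # S8011: the dynamic interaction object (the virtual mannequin) first
    scene["mannequin"] = operation["mannequin_state"]
    # S8012: static interactable objects follow from the mannequin's state
    for object_id, new_state in operation["static_changes"].items():
        scene[object_id] = new_state
        # S802/S803: play any sound associated with the new state
        clip = sounds.get((object_id, new_state))
        if clip:
            print(f"playing {clip}")

scene = {"mannequin": "idle", "part_A": "held", "iron_plate_D": "clear"}
sounds = {("iron_plate_D", "treaded"): "sounds/footstep_on_iron.wav",
          ("part_A", "installed"): "sounds/click.wav"}
update_from_operation(scene, sounds, {
    "mannequin_state": "standing_on_plate",
    "static_changes": {"iron_plate_D": "treaded", "part_A": "installed"},
})
```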
In this embodiment, the scene sound information is added in the entire virtual interaction process (i.e., the virtual interaction scene is updated based on the operation data), so that the virtual interaction scene is as close to the real environment as possible, and the user obtains feedback (state change of the target object and target sound information) corresponding to the operation data in the virtual interaction scene, thereby improving the reality of the virtual interaction process.
Fig. 9 shows a flowchart of an implementation of the method provided in the fifth embodiment of the present application. Referring to fig. 9, with respect to the embodiment shown in fig. 1, the simulation drill method S103 provided in this embodiment includes S901, which is detailed as follows:
further, the updating the virtual interactive scene based on the state change list about each of the presentation interactive objects and the scene sound information generated during the state change includes:
in this embodiment, the state change list includes state change information of a plurality of the presentation interaction objects and timestamps corresponding to the state change information; the state change information is stored in the state change list in association with a time stamp corresponding to the state change information.
In S901, sequentially changing the state of each of the presentation interactive objects in the virtual interactive scene according to all the state change information about each of the presentation interactive objects and the timestamp corresponding to the state change information, and playing the scene sound information generated during the state change.
In this embodiment, the state of each demonstration interactive object is changed in sequence in the virtual interactive scene according to all the state change information of each demonstration interactive object and the timestamps corresponding to that state change information, and the scene sound information generated during the state changes is played, so as to demonstrate the operation flow. Illustratively, the operation flow information is that the virtual human body model installs part A at a specific position above iron plate D, and the state change list includes at least three pieces of state change information. State change information one: the virtual human body model grabs part A; here the demonstration interactive objects are the virtual human body model and part A; in the virtual interactive scene, the state of part A is changed to grabbed by the virtual human body model, the state of the virtual human body model is changed to grabbing part A, and the scene sound information for part A being grabbed is played (if configured in advance). State change information two: the virtual human body model moves onto iron plate D; here the demonstration interactive objects are the virtual human body model and iron plate D; in the virtual interactive scene, the state of iron plate D is changed to treaded by the virtual human body model, the state of the virtual human body model is changed to standing on iron plate D, and the scene sound information for iron plate D being treaded is played (if configured in advance). State change information three: the virtual human body model installs part A at the specific position and releases it; here the demonstration interactive objects are the virtual human body model and part A; in the virtual interactive scene, the state of part A is changed to installed at the specific position and not grabbed, the state of the virtual human body model is changed to not grabbing, and the fitting sound associated with part A being installed at the specific position is played (if configured in advance). The state of each demonstration interactive object is changed in the virtual interactive scene in sequence according to the timestamp of each piece of state change information, and the scene sound information generated during the state change is played, so as to demonstrate the operation flow.
In this embodiment, as described above, if the state change information is stored in the state change list in association with the corresponding timestamp, the virtual interactive scene is updated in the presentation process (i.e., the virtual interactive scene is updated based on the state change list about each of the presentation interactive objects and the scene sound information generated in the state change process) based on the timestamp and the state change information corresponding to the timestamp, i.e., the operation flow can be completely presented in the virtual interactive scene, so that the user can observe the operation flow in the virtual interactive scene.
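A short sketch of timestamp-ordered replay, under the assumption that each entry stores its timestamp alongside the state change information:

```python
# Sketch of S901: entries stored with timestamps and replayed in order.

state_change_list = [
    (2.0, "iron_plate_D", "treaded"),      # (timestamp seconds, object, new state)
    (0.5, "part_A", "grabbed"),
    (3.5, "part_A", "installed"),
]

for timestamp, object_id, new_state in sorted(state_change_list):
    print(f"t={timestamp:>4}s: {object_id} -> {new_state}")
    # ...change the object's state and play the associated scene sound here
```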
Fig. 10 shows a flowchart of an implementation of the method provided in the sixth embodiment of the present application. Referring to fig. 10, in comparison with any of the above embodiments, the simulation drill method S104 provided in this embodiment includes S1041 to S1042, which are detailed as follows:
further, the updating the virtual interaction scenario based on the operation data includes:
in S1041, at least one target unexpected information is randomly selected from the plurality of preset unexpected information.
In the embodiment, a plurality of preset accident information is preset, and the preset accident information stores accident information which may occur in the target scene, for example, in the target scene, a sound of iron striking is occasionally emitted, and in order to improve the reality of the virtual interactive scene and improve the familiarity of the staff with the target scene, an accident factor needs to be added in the drilling process. And randomly selecting at least one target accident information from the preset accident information. The random selection is based on a random algorithm in the prior art, and is not described herein again.
In S1042, the virtual interaction scenario is updated based on the target accident information and based on the operation data.
In this embodiment, the virtual interactive scene is updated according to the target accident information and the operation data; that is, in the process of updating the virtual interactive scene according to the operation data, if the state of an interactable object in the virtual interactive scene meets a preset trigger state related to the target accident information, the virtual interactive scene is updated according to the target accident information. Illustratively, in the target scene, when part A is being mounted at a specific position above iron plate D, a worker may drop part A onto iron plate D because it suddenly slips from the hand; in this case the target accident information includes part A being in a not-grabbed state, the virtual human body model being in a not-grabbing state, iron plate D being in a collision state, and the accident sound information produced when part A collides with iron plate D. Illustratively, the operation data is generated by the user controlling the virtual human body model to grab part A and pass over iron plate D; during the update of the virtual interactive scene according to this operation data, when part A is grabbed by the virtual human body model and located above iron plate D, the virtual interactive scene is updated according to the target accident information. Specifically, in the virtual interactive scene, the state of part A is changed from grabbed by the virtual human body model to not grabbed, the state of the virtual human body model is changed from grabbing part A to not grabbing it, the state of iron plate D is changed to colliding with part A, and the accident sound information generated while iron plate D changes to the collision state is played. The specific steps for updating the virtual interactive scene according to the operation data are as described in any of the above embodiments and are not repeated here.
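The random accident mechanism of S1041 and S1042 might be sketched as follows; the accident record format and all names are assumptions for illustration:

```python
# Hedged sketch of S1041-S1042: pick target accident information at random,
# then fire it once the scene matches its preset trigger state.

import random

preset_accidents = [
    {"name": "part_A_dropped",
     "trigger": {"part_A": "held_above_iron_plate_D"},
     "effects": {"part_A": "dropped", "iron_plate_D": "collided"},
     "sound": "sounds/metal_clang.wav"},
    {"name": "stray_hammering",      # occasional iron-striking sound, no trigger
     "trigger": {}, "effects": {},
     "sound": "sounds/distant_iron_strike.wav"},
]

target = random.choice(preset_accidents)       # S1041: random selection

def maybe_fire(scene, accident):
    """S1042: apply the accident once its trigger state is met."""
    if all(scene.get(k) == v for k, v in accident["trigger"].items()):
        scene.update(accident["effects"])
        print(f"playing {accident['sound']}")

scene = {"part_A": "held_above_iron_plate_D", "iron_plate_D": "clear"}
maybe_fire(scene, target)
```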
It should be understood that, in this embodiment, the approach of randomly selecting at least one piece of target accident information from a plurality of pieces of preset accident information and updating the virtual interactive scene accordingly may also be applied to the demonstration of the operation flow described above.
In this embodiment, randomly selecting at least one piece of target accident information from a plurality of pieces of preset accident information and updating the virtual interactive scene accordingly adds accident factors to the user's drill. This increases the realism of the virtual interactive scene, strengthens the effect of the drill, improves the staff's reaction capability and psychological resilience, and reduces the working time lost to accident factors during real work.
Fig. 11 shows a schematic structural diagram of a simulation drill device provided in an embodiment of the present application, corresponding to the method described in the foregoing embodiments; for convenience of description, only the portions related to this embodiment are shown.
Referring to fig. 11, the simulation drill device includes:
the virtual interactive scene construction module, used to construct a virtual interactive scene, the virtual interactive scene including interactable objects and scene sound information;
the state change list determining module, used to determine a state change list for each demonstration interactive object based on the acquired operation flow information, a demonstration interactive object being an interactable object whose state needs to change as determined from the operation flow information;
the operation flow demonstration module, used to update the virtual interactive scene based on the state change list of each demonstration interactive object and the scene sound information generated in the state change process, so as to demonstrate the operation flow; and
the user operation module, used to receive operation data fed back by a user and to update the virtual interactive scene based on the operation data.
Optionally, the scene sound information includes environment sound information, environment interaction sound information, and part sound information, and the virtual interactive scene construction module includes:
the environment model building module, used to construct an environment model of the target scene based on three-dimensional point cloud environment data obtained by scanning the target scene with a three-dimensional laser scanning module;
the environment sound acquisition module, used to collect the environment sound information of the target scene and to configure associated environment interaction sound information for each changeable state of the environment model in the target scene;
the part acquisition module, used to model all existing parts contained in the target scene to obtain the part models corresponding to the existing parts, and to configure associated part sound information for each changeable state of the existing parts in the target scene;
the virtual human body model module, used to configure a virtual human body model for the user based on a first-person perspective;
the interactable object identification module, used to identify the environment model, the part models, and the virtual human body model as the interactable objects in the virtual interactive scene; and
the virtual interactive scene integration module, used to construct the virtual interactive scene based on the interactable objects, the environment sound information, the environment interaction sound information, and the part sound information.
A rough sketch of this assembly step is given after this list.
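The sketch below assembles a scene object from an environment placeholder, part models with per-state sounds, and a first-person avatar. All class and field names are invented for illustration, and the real point-cloud meshing step is reduced to a named placeholder.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class InteractableObject:
    name: str
    state: str = "initial"
    # Sound configured for each changeable state, e.g. {"grabbed": "pickup.wav"}.
    state_sounds: Dict[str, str] = field(default_factory=dict)

@dataclass
class VirtualScene:
    ambient_sounds: List[str]               # environment sound information
    objects: Dict[str, InteractableObject]  # all interactable objects

def build_scene(point_cloud_file: str,
                ambient_sounds: List[str],
                part_sounds: Dict[str, Dict[str, str]]) -> VirtualScene:
    """Assemble a virtual interactive scene from its ingredients.

    In a real pipeline the point cloud would be meshed into an environment
    model; here that step is reduced to creating a placeholder object.
    """
    objects = {"environment": InteractableObject("environment")}
    for part, sounds in part_sounds.items():
        objects[part] = InteractableObject(part, state_sounds=sounds)
    objects["virtual_human"] = InteractableObject("virtual_human")  # first-person avatar
    return VirtualScene(ambient_sounds, objects)

scene = build_scene("workshop.pcd",
                    ["hvac_hum.wav"],
                    {"part_A": {"grabbed": "pickup.wav", "dropped": "impact.wav"}})
```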
Optionally, the existing parts include a connector, the connector being used to connect any two existing parts other than itself. The part acquisition module includes a connector judging module, used to judge whether the connector and the other existing parts are in a connected state and, if they are, to set the material attribute of the hinge joint connecting the connector and the other existing parts to a rigid body. A sketch of this joint configuration follows.
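A tiny sketch of the connector rule, with the joint modelled as a plain record (the field names are invented for illustration):

```python
def configure_connector_joint(connector: str, parts: tuple, connected: bool) -> dict:
    """Describe the hinge joint between a connector and the two parts it joins.

    When the connector and the parts are already in a connected state, the
    joint's material attribute is set to rigid so the assembly moves as one
    body during the drill; otherwise it stays articulated.
    """
    return {
        "connector": connector,
        "parts": parts,
        "joint_type": "hinge",
        "material": "rigid_body" if connected else "articulated",
    }

print(configure_connector_joint("bolt_3", ("bracket", "panel"), connected=True))
```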
Optionally, the user operation module includes:
the target interactive object state determining module, used to determine the state of each target interactive object according to the operation data, a target interactive object being an interactable object corresponding to the operation data;
the target sound information extraction module, used to extract from the scene sound information the target sound information associated with the target interactive object, the target sound information including the environment interaction sound information and/or the part sound information related to the state of the target interactive object; and
the virtual interactive scene interaction module, used to update the virtual interactive scene according to the state of each target interactive object and the target sound information.
A sketch of the sound extraction step is given after this list.
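The sound extraction step might look like the sketch below, with the scene sound information held in two lookup tables; both table layouts are assumptions made for illustration.

```python
from typing import Dict, List, Tuple

def extract_target_sounds(scene_sounds: dict, target_object: str, new_state: str) -> List[str]:
    """Collect the environment interaction and/or part sounds tied to the
    state the target interactive object has just entered."""
    sounds: List[str] = []
    env: Dict[Tuple[str, str], str] = scene_sounds.get("environment_interaction", {})
    parts: Dict[str, Dict[str, str]] = scene_sounds.get("parts", {})
    if (target_object, new_state) in env:
        sounds.append(env[(target_object, new_state)])
    if new_state in parts.get(target_object, {}):
        sounds.append(parts[target_object][new_state])
    return sounds

scene_sounds = {
    "environment_interaction": {("door", "opened"): "door_open.wav"},
    "parts": {"part_A": {"dropped": "impact.wav"}},
}
print(extract_target_sounds(scene_sounds, "part_A", "dropped"))  # ['impact.wav']
```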
Optionally, the target interactive objects include static interactive objects and dynamic interactive objects, and the target interactive object state determining module includes:
the dynamic interactive object module, used to determine the state of a dynamic interactive object according to the operation data, the dynamic interactive objects including the virtual human body model; and
the static interactive object module, used to determine the state of a static interactive object in the virtual interactive scene according to the state of the dynamic interactive object, the static interactive objects including the environment model and/or the part model corresponding to the state of the dynamic interactive object.
A sketch of this two-stage update follows the list.
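The two-stage update — the dynamic object first, then the static objects it affects — could be sketched as follows, again with invented names and a deliberately simplified scene layout.

```python
def update_interactive_objects(scene: dict, operation_data: dict) -> None:
    """Set the dynamic object (the virtual human model) from the operation
    data, then derive the states of the static objects it interacts with."""
    human = scene["virtual_human"]
    human["pose"] = operation_data["pose"]            # dynamic state follows the user
    human["holding"] = operation_data.get("holding")  # which part, if any, is held

    for name, obj in scene.items():
        if name == "virtual_human":
            continue
        # A static object's state is derived from the dynamic object's state,
        # e.g. a part becomes 'grabbed' once the model is holding it.
        obj["state"] = "grabbed" if human["holding"] == name else "idle"

scene = {"virtual_human": {}, "part_A": {}, "plate_D": {}}
update_interactive_objects(scene, {"pose": "reach", "holding": "part_A"})
print(scene["part_A"]["state"])  # grabbed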
Optionally, the state change list includes a plurality of pieces of state change information for each demonstration interactive object and the timestamp corresponding to each piece. The operation flow demonstration module is further configured to change the state of each demonstration interactive object in the virtual interactive scene in sequence, according to all of its state change information and the corresponding timestamps, and to play the scene sound information generated in the state change process, so as to demonstrate the operation flow.
Optionally, the user operation module further includes an accident information selecting module, used to randomly select at least one piece of target accident information from a plurality of pieces of preset accident information; the user operation module is further configured to update the virtual interactive scene based on both the target accident information and the operation data.
It should be noted that the information interaction between the above apparatuses, their execution processes, and other details are based on the same conception as the method embodiments of the present application; for their specific functions and technical effects, reference may be made to the method embodiment section, which is not repeated here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division of functional units and modules described above is illustrated. In practical applications, the functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or of a software functional unit. In addition, the specific names of the functional units and modules are only for ease of distinguishing them from one another and do not limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Fig. 12 shows a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 12, the terminal device 12 of this embodiment includes: at least one processor 120 (only one shown in fig. 12), a memory 121, and a computer program 122 stored in the memory 121 and executable on the at least one processor 120, the processor 120 implementing the steps of any of the above-described method embodiments when executing the computer program 122.
The terminal device 12 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. Any of these devices may be connected to a virtual reality (VR) device and transmit the output virtual interactive scene data to the VR device, so that the user can wear the VR device and carry out the simulation drill. The terminal device may include, but is not limited to, the processor 120 and the memory 121. Those skilled in the art will appreciate that fig. 12 is merely an example of the terminal device 12 and does not constitute a limitation on it; the terminal device may include more or fewer components than shown, combine certain components, or use different components, such as input/output devices and network access devices.
The processor 120 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 121 may, in some embodiments, be an internal storage unit of the terminal device 12, such as a hard disk or internal memory of the terminal device 12. In other embodiments, the memory 121 may be an external storage device of the terminal device 12, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the terminal device 12. Further, the memory 121 may include both an internal storage unit and an external storage device of the terminal device 12. The memory 121 is used to store an operating system, application programs, a boot loader, data, and other programs, such as the program code of the computer program; it may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps in the above method embodiments.
The embodiments of the present application further provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to implement the steps in the above method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random-access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, a computer-readable medium may not be an electrical carrier signal or a telecommunications signal.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A simulation drilling method based on virtual reality is characterized by comprising the following steps:
constructing a virtual interactive scene; the virtual interaction scene comprises: interactive objects and scene sound information;
determining a state change list about each demonstration interactive object based on the acquired operation flow information; the demonstration interactive object is the interactive object which needs to be subjected to state change and is determined according to the operation flow information;
updating the virtual interactive scene based on a state change list about each of the demonstration interactive objects and the scene sound information generated in the state change process so as to demonstrate the operation flow;
and receiving operation data fed back by a user, and updating the virtual interaction scene based on the operation data.
2. The simulation drill method of claim 1, wherein the scene sound information includes environmental sound information, environmental interaction sound information, and part sound information; the constructing of the virtual interactive scene comprises the following steps:
constructing an environment model related to a target scene based on three-dimensional point cloud environment data obtained by scanning the target scene with a three-dimensional laser scanning module;
collecting the environmental sound information of the target scene, and configuring the associated environmental interaction sound information for each changeable state of the environmental model in the target scene;
modeling all existing parts contained in the target scene to obtain part models corresponding to the existing parts, and configuring associated part sound information for the existing parts in each changeable state in the target scene;
configuring a virtual human body model for the user based on a first-person perspective;
identifying the environment model, the part model and the virtual human body model as the interactive objects in the virtual interactive scene;
and constructing a virtual interaction scene based on the interactive object, the environment sound information, the environment interaction sound information and the part sound information.
3. The simulation drill method of claim 2, wherein the existing part includes a connector; the connecting piece is used for connecting any two other existing parts except the connecting piece; the modeling of all existing parts contained in the target scene to obtain part models corresponding to all existing parts comprises the following steps:
and if the connecting piece and the other existing parts are in a connecting state, setting the material attribute of the hinge joint for connecting the connecting piece and the other existing parts as a rigid body.
4. The simulation drill method of claim 2, wherein the updating the virtual interactive scene based on the operational data comprises:
determining the state of each target interaction object according to the operation data; the target interactive object is the interactive object corresponding to the operation data;
extracting target sound information associated with the target interaction object from the scene sound information; the target sound information comprises the environment interaction sound information and/or the part sound information related to the state of the target interaction object;
and updating the virtual interactive scene according to the state of each target interactive object and the target sound information.
5. The simulation drill method of claim 4, wherein the target interactive objects include static interactive objects and dynamic interactive objects; the determining the state of each target interaction object according to the operation data comprises:
determining the state of the dynamic interaction object according to the operation data; the dynamic interaction object comprises the virtual human body model;
determining the state of a static interactive object in the virtual interactive scene according to the state of the dynamic interactive object; the static interaction object comprises the environment model and/or the part model corresponding to the state of the dynamic interaction object.
6. The simulation drill method as claimed in claim 1, wherein the state change list includes a plurality of state change information about each of the presentation interaction objects and a time stamp corresponding to the state change information; the updating the virtual interactive scene based on the state change list about each of the presentation interactive objects and the scene sound information generated during the state change includes:
sequentially changing the state of each demonstration interactive object in the virtual interactive scene according to the state change information of each demonstration interactive object and the timestamp corresponding to the state change information; and
and playing the scene sound information generated in the state change process so as to demonstrate the operation flow.
7. The simulation drill method of any of claims 1-6, wherein the updating the virtual interaction scenario based on the operational data comprises:
randomly selecting at least one piece of target accident information from a plurality of pieces of preset accident information;
updating the virtual interaction scene based on the target accident information and based on the operation data.
8. A simulation drill device based on virtual reality, characterized by comprising:
the virtual interactive scene construction module is used for constructing a virtual interactive scene; the virtual interaction scene comprises: interactive objects and scene sound information;
the state change list determining module is used for determining a state change list of each demonstration interactive object based on the acquired operation flow information; the demonstration interactive object is the interactive object which needs to be subjected to state change and is determined according to the operation flow information;
the operation flow demonstration module is used for updating the virtual interaction scene based on the state change list of each demonstration interaction object and the scene sound information generated in the state change process so as to demonstrate the operation flow;
and the user operation module is used for receiving operation data fed back by a user and updating the virtual interaction scene based on the operation data.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202010334734.9A 2020-04-24 2020-04-24 Simulation drilling method and device based on virtual reality Pending CN111580648A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010334734.9A CN111580648A (en) 2020-04-24 2020-04-24 Simulation drilling method and device based on virtual reality

Publications (1)

Publication Number Publication Date
CN111580648A true CN111580648A (en) 2020-08-25

Family

ID=72120631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010334734.9A Pending CN111580648A (en) 2020-04-24 2020-04-24 Simulation drilling method and device based on virtual reality

Country Status (1)

Country Link
CN (1) CN111580648A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106601062A (en) * 2016-11-22 2017-04-26 山东科技大学 Interactive method for simulating mine disaster escape training
CN108958459A (en) * 2017-05-19 2018-12-07 深圳市掌网科技股份有限公司 Display methods and system based on virtual location
CN106940941A (en) * 2017-05-22 2017-07-11 广东电网有限责任公司教育培训评价中心 Substation Operating virtual scene update method, device and simulation operations training system
CN110335359A (en) * 2019-04-22 2019-10-15 国家电网有限公司 Distribution board firing accident emergency drilling analogy method based on virtual reality technology

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113741684A (en) * 2021-07-26 2021-12-03 南方电网深圳数字电网研究院有限公司 Transformer substation inspection method based on virtual reality and electronic equipment
CN113821104A (en) * 2021-09-17 2021-12-21 武汉虹信技术服务有限责任公司 Visual interactive system based on holographic projection
CN115100914A (en) * 2022-06-21 2022-09-23 岭澳核电有限公司 Method, system and computer equipment for simulating primary loop hydrostatic test of nuclear power station
CN115100914B (en) * 2022-06-21 2024-01-30 岭澳核电有限公司 Method, system and computer equipment for simulating primary circuit water pressure test of nuclear power station

Similar Documents

Publication Publication Date Title
Hilfert et al. Low-cost virtual reality environment for engineering and construction
CN111580648A (en) Simulation drilling method and device based on virtual reality
CN102930753B (en) Gas station virtual training system and application
KR100735676B1 (en) Operating system for model house with virtual reality and method thereof
CN108227921A (en) A kind of digital Zeng Houyi ancient Chinese chime with 12 bells interactive system based on immersive VR equipment
CN106530887B (en) Fire scene simulating escape method and device
CN105374251A (en) Mine virtual reality training system based on immersion type input and output equipment
CN110794968B (en) Emergency drilling interaction system and method based on scene construction
Jacobsen et al. Active personalized construction safety training using run-time data collection in physical and virtual reality work environments
US20110109628A1 (en) Method for producing an effect on virtual objects
KR20120045744A (en) An apparatus and method for authoring experience-based learning content
CN106683193B (en) Design method and design device of three-dimensional model
CN113559518B (en) Interaction detection method and device for virtual model, electronic equipment and storage medium
CN108711327A (en) Protection simulation training platform construction method based on VR technologies
CN108805766B (en) AR somatosensory immersive teaching system and method
WO2015029654A1 (en) Computer-implemented operator training system and method of controlling the system
CN115690375B (en) Building model modification interaction method, system and terminal based on virtual reality technology
CN100414506C (en) Virtual human movement simulation frame
CN115374591A (en) Method, system, device and computer readable storage medium for scene rehearsal
Mihaľov et al. Potential of low cost motion sensors compared to programming environments
CN109166388A (en) A kind of virtual training and experiential method towards seismic safety education
CN110942519B (en) Computer assembly virtual experiment system and implementation method thereof
CN112132962A (en) Virtual reality-based urban rail vehicle maintenance operation process research method
US20230162458A1 (en) Information processing apparatus, information processing method, and program
KR20220061309A (en) Apparatus and method for experiencing augmented reality-based screen sports

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination