CN110610547A - Cabin training method and system based on virtual reality and storage medium


Info

Publication number: CN110610547A; granted as CN110610547B
Application number: CN201910879257.1A
Authority: CN (China)
Other languages: Chinese (zh)
Legal status: Granted; active
Prior art keywords: data, trainer, training, cabin, virtual reality
Inventors: 师润乔, 刘爽, 许秋子
Original assignee: Shenzhen Ruili Visual Multimedia Technology Co., Ltd.
Current assignee: Ruilishi Multimedia Technology Beijing Co., Ltd.
Filing and priority date: 2019-09-18
Publication date: 2019-12-24 (CN110610547A); grant published 2024-02-13 (CN110610547B)

Classifications

    • G06F3/013: Eye tracking input arrangements for interaction between user and computer
    • G06F3/014: Hand-worn input/output arrangements, e.g. data gloves
    • G06Q50/20: ICT specially adapted for education
    • G06T19/006: Mixed reality (manipulating 3D models or images for computer graphics)
    • G09B9/00: Simulators for teaching or training purposes
    • G09B9/04: Simulators for teaching control of land vehicles
    • G09B9/08: Simulators for teaching control of aircraft, e.g. Link trainer
    • G09B9/165: Condition of cabin, cockpit or pilot's accessories (ambient or aircraft conditions simulated or indicated by instrument or alarm)


Abstract

The invention discloses a virtual reality-based cabin training method, which comprises the following steps: acquiring two-dimensional image data of marker points and motion data of a trainer's hands during practical training; preprocessing the marker-point two-dimensional image data to obtain marker-point two-dimensional coordinate data, and computing from that coordinate data the point-cloud coordinates and orientations in the three-dimensional capture space; obtaining spatial position data corresponding to rigid-body motions from the point-cloud coordinates and orientations; determining the hand position in the virtual cabin scene from the spatial position data, and determining the finger posture in the virtual cabin scene from the motion data of the trainer's hands; and determining and responding to the virtual button operation corresponding to the finger position and posture in the virtual cabin scene according to a preset mapping relation. The invention also discloses a virtual reality-based cabin training system and a storage medium. The method improves the immersion and effectiveness of virtual reality cabin training.

Description

Cabin training method and system based on virtual reality and storage medium
Technical Field
The invention relates to the technical field of virtual reality, in particular to a cabin training method and system based on virtual reality and a storage medium.
Background
Existing pilot and driver training is conducted mainly in physically simulated cockpits that mimic the actual operation of an aircraft or vehicle. This mode has several drawbacks. A simulated cockpit can usually reproduce only one model and cannot support training across multiple models: because different aircraft types differ in cabin-door opening procedures and landing-gear structures, simulating another type requires configuring another physical simulated cockpit, which multiplies the training cost. In some aviation training, such as simulated evacuation on water or maintenance of major aircraft components, each session is very expensive because the equipment cannot be restored afterwards. Scenarios such as fire, attack, or emergency forced landing cannot be reproduced in a physical training cabin at all. Although many VR/AR alternatives to physical training have appeared (for example, simulating a vehicle cabin with a virtual scene displayed on a head-mounted display combined with inertial motion capture gloves), these virtual reality approaches still suffer from insufficient immersion, high cost, and slow system response.
Disclosure of Invention
The invention mainly aims to provide a virtual reality-based cabin training method, system, and storage medium that address the technical problem of insufficient immersion in virtual reality cabin training.
To achieve the above object, the present invention provides a virtual reality-based cabin training method, which comprises the following steps:
a plurality of motion capture cameras in a capture space simultaneously and continuously photograph the movements of a trainer who wears reflective marker points during practical training, producing synchronized marker-point two-dimensional image data, while motion data of the trainer's hands is acquired through inertial motion capture gloves worn by the trainer, wherein the trainer performs simulated driving training with physical objects including at least one of a seat, a joystick, a throttle, and a rudder;
preprocessing the marker-point two-dimensional image data to obtain marker-point two-dimensional coordinate data, and computing from that coordinate data, using multi-view computer vision, the point-cloud coordinates and orientations in the three-dimensional capture space;
identifying the rigid-body structures bound to different parts of the trainer from the point-cloud coordinates and orientations, and solving the position and orientation of each rigid-body structure in the capture space to obtain spatial position data corresponding to the trainer's rigid-body motions during training;
determining the trainer's hand position in the virtual cabin scene from the spatial position data corresponding to the rigid-body motions, and determining the trainer's finger position and posture in the virtual cabin scene from the motion data provided by the inertial motion capture gloves;
and determining, according to a preset mapping relation between the trainer's motion capture data and the virtual cabin scene, the virtual button operation corresponding to the trainer's finger position and posture in the virtual cabin scene, and responding accordingly.
Optionally, the motion data provided by the inertial motion capture glove comprises: real-time angular velocity data for each finger joint.
Optionally, infrared narrow band-pass filtering is used to remove redundant background information from the captured image data, and a field-programmable gate array (FPGA) preprocesses the captured marker-point image information.
Optionally, the various types of data are computed in a heterogeneous CPU, GPU, and APU processing mode, the data including at least: the marker-point two-dimensional image data, the motion data, the marker-point two-dimensional coordinate data, the point-cloud coordinates and orientations in the three-dimensional capture space, and the spatial position data.
Further, to achieve the above object, the present invention also provides a virtual reality-based cabin training system, comprising a motion capture server side and a content presentation side.
The motion capture server side comprises at least the following components:
the motion capture cameras, which acquire image data of the trainer during practical training and use infrared narrow band-pass filtering to remove redundant background information from the captured image data;
the motion capture data processing server, which comprises a computer, corresponding input and output devices (including but not limited to a display, a keyboard, and a mouse), and motion capture data analysis software running on the computer; the software processes the motion capture data transmitted by the cameras, and the display shows the software's running state;
the data switch, which exchanges data between server-side and client-side components, among client-side components, and among server-side components;
the calibration rod, which is used to calibrate the motion capture cameras and obtain the relative positions of the cameras within the capture space.
The content presentation side comprises at least the following components:
the virtual environment rendering and synchronization server, which comprises a computer and corresponding input and output devices (including but not limited to a display, a keyboard, and a mouse) and which renders the virtual scene of the virtual reality cabin and synchronizes the virtual environment data to multiple virtual reality head-mounted display (HMD) clients so that several people can train at the same time; its display shows a God's-eye view of the trainer's situation; the virtual reality HMD host, which comprises a computer and corresponding input and output devices and which renders the control keys and out-of-cabin scenery in the virtual cabin scene and transmits them to the virtual reality HMD for display; multiple HMD hosts can be added to the cabin training system for simultaneous multi-person training;
the virtual reality HMD, which is connected to the HMD host and displays to the trainer the virtual cabin scene rendered by the host;
the inertial motion capture gloves, which collect motion data of the trainer's hands;
and the simulated training objects, comprising at least one of a seat, a joystick, a throttle, and a rudder, used to simulate a cockpit.
Optionally, the capture space, which is formed by the plurality of motion capture cameras surrounding the content presentation side, may be either large or small.
Optionally, rigid-body structures carrying a plurality of reflective marker points are bound to the virtual reality HMD and to the inertial motion capture gloves.
Optionally, the motion capture cameras are specifically configured to:
continuously photograph the movements of the trainer wearing reflective marker points during practical training, generate marker-point two-dimensional image data synchronized with the other motion capture cameras, preprocess that image data into marker-point two-dimensional coordinate data, and send the coordinate data to the motion capture data processing server through the data switch.
Optionally, the motion capture data processing server is specifically configured to:
receive the marker-point two-dimensional coordinate data sent by the motion capture cameras, and compute from it, using multi-view computer vision, the point-cloud coordinates and orientations in the three-dimensional capture space;
identify the rigid-body structures bound to different parts of the trainer from the point-cloud coordinates and orientations, solve the position and orientation of each rigid-body structure in the capture space to obtain spatial position data corresponding to the trainer's rigid bodies during practical training, and send the spatial position data to the virtual environment rendering and synchronization server.
The virtual environment rendering and synchronization server is specifically configured to:
receive the spatial position data corresponding to the trainer's rigid-body motions sent by the motion capture data processing server and the hand motion data sent by the inertial motion capture gloves;
determine the trainer's hand position in the virtual cabin scene from the spatial position data corresponding to the rigid-body motions, and determine the trainer's finger position and posture in the virtual cabin scene from the motion data provided by the inertial motion capture gloves;
and determine, according to the preset mapping relation between the trainer's motion capture data and the virtual cabin scene, the virtual button operation corresponding to the trainer's finger position and posture in the virtual cabin scene, and respond accordingly.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium storing a virtual reality-based cabin training program which, when executed by a processor, implements the steps of the virtual reality-based cabin training method described in any of the above.
The invention uses a plurality of motion capture cameras to build a capture space for cabin training. The approach suits both large and small spaces, the training space is flexible, and it can be freely scaled from small-scale single-person training to large-scale multi-person training. The motion capture cameras use advanced high-resolution image sensors combined with high-power infrared strobe light sources, greatly extending the capture range. Redundant background information is removed with infrared narrow band-pass filtering, and the captured marker-point image information is preprocessed on an FPGA, so each camera can quickly and accurately output clean two-dimensional coordinates of the captured marker points. This reduces the processing and computation load on the server, greatly lowering system latency while greatly improving motion capture precision. In addition, the invention further improves the system's data-processing capability by adopting a heterogeneous CPU + GPU + APU processing mode. The virtual scene is lifelike, the controls feel real, and the immersion and effectiveness of virtual reality cabin training are improved.
Drawings
FIG. 1 is a schematic flowchart of an embodiment of the virtual reality-based cabin training method of the present invention;
FIG. 2 is a schematic diagram of a training scenario of an embodiment of the virtual reality-based cabin training system of the present invention;
FIG. 3 is a functional module diagram of an embodiment of the virtual reality-based cabin training system of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides a cabin training method based on virtual reality.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a virtual reality-based cabin training method of the present invention. In this embodiment, the virtual reality-based cabin training method includes the following steps:
Step S10: a plurality of motion capture cameras in the capture space simultaneously and continuously photograph the movements of the trainer, who wears reflective marker points during training, to obtain synchronized marker-point two-dimensional image data, while motion data of the trainer's hands is obtained through the inertial motion capture gloves worn by the trainer;
In this embodiment, the trainer trains in a pre-built capture space. Referring to fig. 2, a plurality of motion capture cameras 10 are installed in the capture space; the cameras 10 surround the training position of the trainer 20 and continuously photograph the various movements the trainer 20 makes while training with the reflective markers on.
In this embodiment, the motion capture cameras actively capture the trainer's movements, generating two-dimensional image data of the trainer wearing the reflective marker points. The captured body parts are not limited; they may be, for example, the trainer's head and hands, or the head, body, and hands.
In addition, to capture hand movements more accurately during training, the trainer wears inertial motion capture gloves that provide motion data of the hands.
Optionally, the motion data of the trainer's hands comprises three types of inertial measurements, from which the trainer's hand posture during training can be computed.
Step S20: preprocess the marker-point two-dimensional image data to obtain marker-point two-dimensional coordinate data, and compute from that data, using multi-view computer vision, the point-cloud coordinates and orientations in the three-dimensional capture space;
In this embodiment, after the marker-point two-dimensional image data generated by photographing the trainer's movements is obtained from the motion capture cameras, it is preprocessed: for example, the key points (i.e., the reflective marker points) in the images collected by all cameras at the same instant are identified first, and then the coordinates of the reflective marker points within each image are computed, yielding two-dimensional coordinate data for all the reflective marker points.
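This per-camera preprocessing can be pictured as bright-blob centroid extraction. The sketch below is a minimal software stand-in for the camera-side stage (performed on an FPGA in this embodiment), assuming an already-filtered infrared frame in which the reflective markers are the only bright regions; the threshold value and function names are illustrative assumptions, not details from the patent.

```python
import numpy as np
from scipy import ndimage

def extract_marker_centroids(frame: np.ndarray, threshold: int = 200) -> np.ndarray:
    """Return an (N, 2) array of (x, y) centroids of bright marker blobs."""
    mask = frame >= threshold                 # markers remain as bright blobs after narrow-band filtering
    labels, n = ndimage.label(mask)           # connected-component labelling
    if n == 0:
        return np.empty((0, 2))
    # center_of_mass yields (row, col) pairs; flip them to (x, y) image coordinates
    centers = ndimage.center_of_mass(mask, labels, index=range(1, n + 1))
    return np.array([(col, row) for row, col in centers])
```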
In this embodiment, recognizing the trainer's movements requires converting the two-dimensional coordinate data into three-dimensional coordinates. The image key points acquired by the multiple motion capture cameras at the same instant are therefore first matched across views to identify each reflective marker point, and the marker-point two-dimensional coordinate data is then processed with multi-view computer vision: from the matching relationship between the two-dimensional point clouds in the images and the relative positions and orientations of the cameras, the coordinates and orientations of the point cloud in the three-dimensional capture space are computed, i.e., the point-cloud coordinate and orientation corresponding to each reflective marker point.
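As a concrete illustration of this multi-view reconstruction, the following is a minimal sketch of standard direct-linear-transform (DLT) triangulation, assuming each calibrated camera is described by a 3x4 projection matrix and has observed the same matched marker at pixel (u, v); the patent does not publish its solver, so this textbook method stands in for it.

```python
import numpy as np

def triangulate_marker(projections, observations):
    """projections: list of 3x4 camera matrices; observations: list of matched (u, v) pixels."""
    rows = []
    for P, (u, v) in zip(projections, observations):
        # each view adds two linear constraints on the homogeneous 3D point X:
        #   u * (P[2] @ X) - (P[0] @ X) = 0   and   v * (P[2] @ X) - (P[1] @ X) = 0
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                     # null-space vector = homogeneous solution
    return X[:3] / X[3]            # dehomogenize to (x, y, z) in the capture space
```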
Step S30: identify the rigid-body structures bound to different parts of the trainer from the point-cloud coordinates and orientations, and solve the position and orientation of each rigid-body structure in the capture space to obtain spatial position data corresponding to the trainer's rigid-body motions during training;
In this embodiment, the marker-point two-dimensional coordinate data carries rigid-body names or identification numbers along with the rigid-body (coordinate) data. From the computed point-cloud coordinates and orientations of the reflective marker points in the three-dimensional capture space, the rigid-body structures bound to different parts of the trainer can therefore be identified and their positions and orientations in the capture space solved. The trainer's motion trajectory within the capture space can then be determined, locating the trainer's movements in the capture space and yielding the spatial position data corresponding to the trainer's rigid-body motions during training.
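One standard way to solve a rigid body's position and orientation from its reconstructed markers is the Kabsch (SVD) alignment sketched below. It assumes the marker layout of each rigid-body structure is known in a local template frame and has already been matched to triangulated points; it stands in for the patent's unpublished solver.

```python
import numpy as np

def solve_rigid_body_pose(template: np.ndarray, observed: np.ndarray):
    """template, observed: matched (N, 3) marker positions. Returns rotation R and translation t."""
    ct, co = template.mean(axis=0), observed.mean(axis=0)
    H = (template - ct).T @ (observed - co)   # 3x3 cross-covariance of the centered point sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against an improper (reflected) solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # rotation mapping the template frame to capture space
    t = co - R @ ct                           # translation of the rigid body in the capture space
    return R, t
```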
Step S40: determine the trainer's hand position in the virtual cabin scene from the spatial position data corresponding to the trainer's rigid-body motions, and determine the trainer's finger position and posture in the virtual cabin scene from the motion data provided by the inertial motion capture gloves;
In this embodiment, once the spatial position data corresponding to the trainer's rigid-body motions (including finger operations) is obtained, the position of the trainer's fingers in the virtual cabin scene can be computed. Because a rigid-body structure with reflective marker points is fitted to the trainer's hand or glove, the finger position can be located by solving the spatial position data corresponding to the rigid-body motion during training.
In addition, this embodiment uses the hand motion data collected by the gloves (including real-time angular velocity data of each finger joint) to obtain the trainer's finger posture in the virtual cabin scene. Combining inertial and optical tracking in this way further improves finger-positioning precision.
Step S50: determine, according to the preset mapping relation between the trainer's motion capture data and the virtual cabin scene, the virtual button operation corresponding to the trainer's finger position and posture in the virtual cabin scene, and respond accordingly.
In this embodiment, virtual reality technology maps the trainer's hand movements into the virtual cabin scene. Through the HMD, the trainer sees the virtual cabin and the movements of the virtual hands within it, and performs simulated training by adjusting those movements.
A mapping relation between the trainer's motion capture data (covering the trainer's various movements) and the virtual cabin scene is preset. Once the trainer's finger position and posture are computed, this mapping determines the virtual button operation that the training movement corresponds to in the virtual cabin scene, and the corresponding response, such as accelerating, decelerating, or turning, is carried out.
As shown in fig. 2, in this embodiment, to give the trainer better training immersion and effect, a physically simulated cockpit is adopted, comprising the seat 30, the joystick 40, and the throttle 50, all of which are physical objects; operating real objects gives the trainer a much stronger sense of actually being in the cockpit.
In addition, to improve motion capture precision, this embodiment preferably removes redundant background information from the captured image data with infrared narrow band-pass filtering and preprocesses the captured marker-point image information on a field-programmable gate array (FPGA).
The invention uses a plurality of motion capture cameras to build a capture space for cabin training. The approach suits both large and small spaces, the training space is flexible, and it can be freely scaled from small-scale single-person training to large-scale multi-person training. The motion capture cameras use advanced high-resolution image sensors combined with high-power infrared strobe light sources, greatly extending the capture range. Redundant background information is removed with infrared narrow band-pass filtering, and the captured marker-point image information is preprocessed on an FPGA, so each motion capture camera can quickly and accurately output clean two-dimensional coordinates of the captured marker points. This reduces the processing and computation load on the server, greatly lowering system latency while greatly improving motion capture precision. In addition, the invention further improves the system's data-processing capability by adopting a heterogeneous CPU + GPU + APU processing mode. The virtual scene is lifelike, the controls feel real, and the immersion and effectiveness of virtual reality cabin training are improved.
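To make the heterogeneous-processing idea concrete, the sketch below shows one plausible division of labor, with bulk data-parallel marker arithmetic dispatched to the GPU (CuPy used as a stand-in) and control flow kept on the CPU. The patent names a CPU + GPU + APU mode but does not publish its scheduling, so this split is purely an assumption.

```python
import numpy as np

try:
    import cupy as cp      # GPU array library used here as a stand-in for the GPU side
    xp = cp
except ImportError:        # fall back to CPU-only processing when no GPU is available
    xp = np

def batch_marker_distances(points_a, points_b):
    """Pairwise distances between two (N, 3) marker point clouds, on GPU when possible."""
    a = xp.asarray(points_a)[:, None, :]
    b = xp.asarray(points_b)[None, :, :]
    d = xp.linalg.norm(a - b, axis=-1)            # data-parallel work suited to the GPU
    return cp.asnumpy(d) if xp is not np else d   # copy the result back to host memory
```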
The invention provides a cabin training system based on virtual reality.
Referring to fig. 3, fig. 3 is a functional module schematic diagram of an embodiment of the virtual reality-based cabin training system of the present invention. In this embodiment, the virtual reality-based cabin training system includes:
(1) the motion capture server side at least comprises the following components:
the motion capture camera 101, which acquires image data of the trainer during practical training and uses infrared narrow band-pass filtering to remove redundant background information from the captured images; optionally, in a specific embodiment, the motion capture server side further comprises three-dimensional pan-tilt mounts, with heavy-duty clamps and matching fixing hardware, for fixing each motion capture camera at its installation position;
the motion capture data processing server 102, which comprises a computer and corresponding input and output devices, the input and output devices including but not limited to a display, a keyboard, and a mouse; the server 102 further runs motion capture data analysis software that processes the motion capture data transmitted by the cameras, and the display shows the software's running state.
The motion capture data processing server in this embodiment thus consists of a computer plus the motion capture data analysis software; "input and output devices" covers any equipment through which the computer can be controlled, whether a remote device (such as a cloud server or other remote-control equipment) or a local display, keyboard, and mouse.
the data switch 103, which exchanges data between server-side and client-side components, among client-side components, and among server-side components;
the calibration rod 104, which is used to calibrate the motion capture cameras and obtain the relative positions of the cameras within the capture space.
In this embodiment, positioning may use passive optical tracking: the motion capture cameras 101 acquire the infrared image data reflected by the reflective marker points bound to each part of the trainer's body and transmit it to the motion capture data processing server 102 for processing. Alternatively, active optical tracking may be used: the cameras 101 capture the infrared light emitted by LEDs on active optical rigid bodies (marker points). Active tracking does not depend on reflected light, continuously and stably outputs high-precision positioning data, and achieves a longer capture distance. Moreover, the stable and reliable active optical rigid bodies can be fixed directly to the surfaces of objects such as HMDs and VR props.
(2) The content presentation side comprises at least the following components:
the virtual environment rendering and synchronization server 201, which comprises a computer and corresponding input and output devices (including but not limited to a display, a keyboard, and a mouse); the display shows a God's-eye view of the trainers' training situation, and the server 201 renders the virtual scene of the virtual reality cabin and synchronizes the virtual environment data to multiple virtual reality HMDs so that several people can train at the same time.
In this embodiment, during cabin training the synchronization lets trainers wearing HMDs see each other's operating state, turning a single-machine state into multi-person interaction: for example, several trainers can see where the others are in the virtual scene and whether they are about to attack, and others in the same virtual scene can use this information to avoid collisions, attacks, and so on, which facilitates training for attack, emergency landing, and similar scenarios.
the virtual reality HMD host 202, which comprises a computer and corresponding input and output devices (again including but not limited to a display, a keyboard, and a mouse) and which renders the control keys and out-of-cabin scenery in the virtual cabin scene and transmits them to the virtual reality HMD for display; multiple HMD hosts can be added to the cabin training system for simultaneous multi-person training.
In this embodiment, the HMD host likewise comprises a computer and corresponding input and output devices, including a display that can show the virtual picture the trainer sees; the HMD host may also be built into the virtual reality helmet, in which case the helmet comprises both the HMD host and the HMD.
the virtual reality HMD 203, which is connected to the HMD host 202 and displays to the trainer the virtual cabin scene rendered by the host 202;
the inertial motion capture gloves 204, which collect motion data of the trainer's hands;
the simulated training object 205, comprising at least one of a seat, a joystick, a throttle, and a rudder, used to simulate a cockpit.
In this embodiment, the trainer wears inertial motion capture gloves 204 with rigid-body structures bound to them, and a virtual reality HMD 203 with a rigid-body structure bound to it; each rigid-body structure is configured with a plurality of reflective balls serving as reflective marker points.
The cabin practical training system of this embodiment is built around an optical spatial-positioning motion capture system, supplemented by customized immersive scene content, and supports single-person or multi-person virtual reality applications across large spaces and different scenes. The virtual scene is lifelike, and the simulated training objects 205 replicate real products, such as the seat, joystick, throttle, and rudder, at 1:1 scale, so the controls and body sensations are realistic, the immersion is strong, and the training effect is better.
In this embodiment, the capture space is formed by the plurality of motion capture cameras 10 surrounding the simulated training objects. As shown in fig. 2, the trainer 20 sits in the training seat 30 and performs simulated driving training by operating the joystick 40, the throttle 50, and so on. During training, the cameras 10 simultaneously and continuously photograph the trainer's movements and synchronously map them into the virtual cockpit scene, combining and interlinking the virtual and the real. The required training space is flexible: it can be built indoors or outdoors and freely expanded from a small space to a large one, and an outdoor large-space training scene can satisfy simultaneous online training of many people.
Specifically, the cockpit training system of this embodiment uses the optical spatial-positioning motion capture system as its core: multiple high-frame-rate industrial cameras installed above the space capture the two-dimensional positions of the optical marker points on the target objects and output them to algorithm software, which then computes the targets' three-dimensional positions and motion postures in the space. The system is the bridge between virtual and real and the entrance for human-computer interaction.
In addition, this embodiment can compute and output the targets' three-dimensional spatial position data in real time from the synchronized two-dimensional data captured at high speed by the multiple optical motion capture cameras. By identifying the optical marker-point structures bound at different positions on a moving object, the position and orientation of each target object in the space are solved, its motion trajectory determined, and the result imported synchronously in real time into 3D software or a VR real-time engine, achieving the combination and interaction of virtual and real.
Further, in a specific embodiment, the motion capture camera 101 is specifically configured to:
continuously photograph the movements of the trainer wearing reflective marker points during training, generate marker-point two-dimensional image data synchronized with the other motion capture cameras, preprocess that image data into marker-point two-dimensional coordinate data, and send the coordinate data to the motion capture data processing server 102 through the data switch 103.
further, in an embodiment, the kinetic data processing server 102 is specifically configured to:
receive the marker-point two-dimensional coordinate data sent by the motion capture camera 101, and compute from it, using multi-view computer vision, the point-cloud coordinates and orientations in the three-dimensional capture space;
identify the rigid-body structures bound to different parts of the trainer from the point-cloud coordinates and orientations, solve the positions and orientations of the rigid-body structures in the capture space to obtain spatial position data corresponding to the trainer's rigid-body motions during training, and send the spatial position data to the virtual environment rendering and synchronization server 201.
Further, in an embodiment, the virtual environment rendering and synchronization server 201 is specifically configured to:
receive the spatial position data corresponding to the trainer's rigid-body motions sent by the motion capture data processing server 102 and the hand motion data sent by the inertial motion capture gloves 204;
determine the trainer's finger position in the virtual cabin scene from the spatial position data corresponding to the rigid-body motions, and determine the trainer's finger posture in the virtual cabin scene from the hand motion data;
and determine, according to the preset mapping relation between the trainer's motion capture data and the virtual cabin scene, the virtual button operation corresponding to the trainer's finger position and posture in the virtual cabin scene, and respond accordingly.
Since this embodiment is substantially the same as the embodiments of the virtual reality-based cabin training method of the present invention described above, the description of the virtual reality-based cabin training system is not repeated here.
The invention uses a plurality of motion capture cameras to build a capture space for cabin training. The approach suits both large and small spaces, the training space is flexible, and it can be freely scaled from small-scale single-person training to large-scale multi-person training. The motion capture cameras use advanced high-resolution image sensors combined with high-power infrared strobe light sources, greatly extending the capture range. Redundant background information is removed with infrared narrow band-pass filtering, and the captured marker-point image information is preprocessed on an FPGA, so each motion capture camera can quickly and accurately output clean two-dimensional coordinates of the captured marker points. This reduces the processing and computation load on the server, greatly lowering system latency while greatly improving motion capture precision. In addition, the invention further improves the system's data-processing capability by adopting a heterogeneous CPU + GPU + APU processing mode. The virtual scene is lifelike, the controls feel real, and the immersion and effectiveness of virtual reality cabin training are improved.
The present invention also provides a non-volatile computer-readable storage medium.
The computer-readable storage medium of this embodiment stores a virtual reality-based cabin training program which, when executed by a processor, implements the steps of the virtual reality-based cabin training method described in any of the embodiments above. Since the program's content is substantially the same as those method embodiments, it is not described in detail here.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM), and includes several instructions for enabling a terminal (which may be a computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
The present invention has been described with reference to the accompanying drawings, but it is not limited to the above embodiments, which are illustrative rather than restrictive. Those skilled in the art can make various changes without departing from the spirit and scope of the invention as defined by the appended claims, and all changes that fall within the meaning and range of equivalency of the specification, drawings, and claims are intended to be embraced therein.

Claims (10)

1. A virtual reality-based cabin training method, characterized by comprising the following steps:
a plurality of motion capture cameras in a capture space simultaneously and continuously photograph the movements of a trainer who wears reflective marker points during practical training, producing synchronized marker-point two-dimensional image data, while motion data of the trainer's hands is acquired through inertial motion capture gloves worn by the trainer, wherein the trainer performs simulated driving training with physical objects including at least one of a seat, a joystick, a throttle, and a rudder;
preprocessing the marker-point two-dimensional image data to obtain marker-point two-dimensional coordinate data, and computing from that coordinate data, using multi-view computer vision, the point-cloud coordinates and orientations in the three-dimensional capture space;
identifying the rigid-body structures bound to different parts of the trainer from the point-cloud coordinates and orientations, and solving the position and orientation of each rigid-body structure in the capture space to obtain spatial position data corresponding to the trainer's rigid bodies during training;
determining the trainer's hand position in the virtual cabin scene from the spatial position data corresponding to the trainer's rigid-body motions, and determining the trainer's finger position and posture in the virtual cabin scene from the motion data provided by the inertial motion capture gloves;
and determining, according to a preset mapping relation between the trainer's motion capture data and the virtual cabin scene, the virtual button operation corresponding to the trainer's finger position and posture in the virtual cabin scene, and responding accordingly.
2. The virtual reality-based cabin training method of claim 1, wherein the motion data provided by the inertial motion capture gloves comprises: real-time angular velocity data of each finger joint.
3. The virtual reality-based cabin training method of claim 1, wherein infrared narrow band-pass filtering is used to remove redundant background information from the captured image data, and a field-programmable gate array (FPGA) preprocesses the captured marker-point image information.
4. The virtual reality-based cabin training method of any one of claims 1-3, wherein various types of data are computed in a heterogeneous CPU, GPU, and APU processing mode, the data comprising at least: the marker-point two-dimensional image data, the motion data, the marker-point two-dimensional coordinate data, the point-cloud coordinates and orientations in the three-dimensional capture space, and the spatial position data.
5. A virtual reality-based cabin training system, characterized by comprising:
a motion capture server side and a content presentation side;
the motion capture server side comprises at least the following components:
the motion capture cameras, which acquire image data of the trainer during practical training and use infrared narrow band-pass filtering to remove redundant background information from the captured image data;
the motion capture data processing server, which comprises a computer, corresponding input and output devices (including but not limited to a display, a keyboard, and a mouse), and motion capture data analysis software running on the computer, the software processing the motion capture data transmitted by the cameras and the display showing the software's running state;
the data switch, which exchanges data between server-side and client-side components, among client-side components, and among server-side components;
the calibration rod, which is used to calibrate the motion capture cameras and obtain the relative positions of the cameras within the capture space;
the content presentation side comprises at least the following components:
the virtual environment rendering and synchronization server, which comprises a computer and corresponding input and output devices (including but not limited to a display, a keyboard, and a mouse, the display showing a God's-eye view of the trainer's situation) and which renders the virtual scene of the virtual reality cabin and synchronizes the virtual environment data to multiple virtual reality head-mounted displays (HMDs) so that several people can train at the same time;
the virtual reality HMD host, which comprises a computer and corresponding input and output devices and which renders the control keys and out-of-cabin scenery in the virtual cabin scene and transmits them to the virtual reality HMD for display;
the virtual reality HMD, which is connected to the HMD host and displays to the trainer the virtual cabin scene rendered by the host;
the inertial motion capture gloves, which collect motion data of the trainer's hands;
and the simulated training objects, comprising at least one of a seat, a joystick, a throttle, and a rudder, used to simulate a cockpit.
6. The virtual reality-based cabin training system of claim 5, wherein the capture space is formed by the plurality of motion capture cameras surrounding the content presentation side and is either a large space or a small space.
7. The virtual reality-based cabin training system of claim 5, wherein rigid-body structures are bound to the virtual reality HMD and the inertial motion capture gloves, the rigid-body structures being configured with a plurality of reflective marker points.
8. The virtual reality-based cabin training system of any one of claims 5-7, wherein the motion capture cameras are specifically configured to:
continuously photograph the movements of the trainer wearing reflective marker points during practical training, generate marker-point two-dimensional image data synchronized with the other motion capture cameras, preprocess that image data into marker-point two-dimensional coordinate data, and send the coordinate data to the motion capture data processing server through the data switch.
9. The virtual reality-based cabin training system of claim 8, wherein the motion capture data processing server is specifically configured to:
receive the marker-point two-dimensional coordinate data sent by the motion capture cameras, and compute from it, using multi-view computer vision, the point-cloud coordinates and orientations in the three-dimensional capture space;
identify the rigid-body structures bound to different parts of the trainer from the point-cloud coordinates and orientations, solve the position and orientation of each rigid-body structure in the capture space to obtain spatial position data corresponding to the trainer's rigid bodies during training, and send the spatial position data to the virtual environment rendering and synchronization server;
and wherein the virtual environment rendering and synchronization server is specifically configured to:
receive the spatial position data corresponding to the trainer's rigid-body motions sent by the motion capture data processing server and the hand motion data sent by the inertial motion capture gloves;
determine the trainer's hand position in the virtual cabin scene from the spatial position data corresponding to the rigid-body motions, and determine the trainer's finger position and posture in the virtual cabin scene from the motion data provided by the inertial motion capture gloves;
and determine, according to the preset mapping relation between the trainer's motion capture data and the virtual cabin scene, the virtual button operation corresponding to the trainer's finger position and posture in the virtual cabin scene, and respond accordingly.
10. A computer-readable storage medium, wherein a virtual reality-based cabin training program is stored on the computer-readable storage medium and, when executed by a processor, implements the steps of the virtual reality-based cabin training method of any one of claims 1-4.
CN201910879257.1A (filed 2019-09-18; priority date 2019-09-18): Cabin practical training method, system and storage medium based on virtual reality. Active; granted as CN110610547B.

Priority Application (1)

Application Number | Priority Date | Filing Date | Title
CN201910879257.1A | 2019-09-18 | 2019-09-18 | Cabin practical training method, system and storage medium based on virtual reality

Publications (2)

Publication Number | Publication Date
CN110610547A | 2019-12-24
CN110610547B | 2024-02-13

Family ID: 68891558
Family application: CN201910879257.1A (CN), active; granted as CN110610547B

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111314484A * 2020-03-06 2020-06-19 Wang Chunhua Virtual reality data synchronization method and device and virtual reality server
CN111338481A * 2020-02-28 2020-06-26 Wuhan Haocun Technology Co., Ltd. Data interaction system and method based on whole-body motion capture
CN112308983A * 2020-10-30 2021-02-02 Beijing Virtual Point Technology Co., Ltd. Virtual scene arrangement method and device, electronic equipment and storage medium
CN112781589A * 2021-01-05 2021-05-11 Beijing Noitom Technology Co., Ltd. Position tracking equipment and method based on optical data and inertial data
CN113192382A * 2021-03-19 2021-07-30 Xuzhou Jiuding Electromechanical General Factory Vehicle mobility simulation system and method based on immersive human-computer interaction
CN113398578A * 2021-06-03 2021-09-17 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Game data processing method, system, device, electronic equipment and storage medium
CN113552950A * 2021-08-06 2021-10-26 Shanghai Xuanwu Technology Co., Ltd. Virtual-real interaction method for a virtual cockpit
CN114360312A * 2021-12-17 2022-04-15 Jiangxi Hongdu Aviation Industry Group Co., Ltd. Ground service maintenance training system and method based on augmented reality technology
CN114840079A * 2022-04-27 2022-08-02 Southwest Jiaotong University High-speed rail driving action simulation virtual-real interaction method based on gesture recognition
WO2022166264A1 * 2021-02-04 2022-08-11 Sany Automobile Hoisting Machinery Co., Ltd. Simulation training system, method and apparatus for work machine, and electronic device
CN116661600A * 2023-06-02 2023-08-29 Nankai University Multi-person collaborative surgical virtual training system based on multi-view behavior identification

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105374251A * 2015-11-12 2016-03-02 China University of Mining and Technology (Beijing) Mine virtual reality training system based on immersive input and output equipment
US20160225188A1 * 2015-01-16 2016-08-04 VRstudios, Inc. Virtual-reality presentation volume within which human participants freely move while experiencing a virtual environment
CN106710362A * 2016-11-30 2017-05-24 AVIC East China Optoelectronics (Shanghai) Co., Ltd. Flight training method implemented by using virtual reality equipment
CN107221223A * 2017-06-01 2017-09-29 Beihang University Virtual reality aircraft cockpit system with force/haptic feedback
CN107341832A * 2017-04-27 2017-11-10 Beijing Dehuo New Media Technology Co., Ltd. Multi-view switching camera system and method based on an infrared positioning system
US9868449B1 * 2014-05-30 2018-01-16 Leap Motion, Inc. Recognizing in-air gestures of a control object to control a vehicular control system
CN109313484A * 2017-08-25 2019-02-05 Shenzhen Realis Multimedia Technology Co., Ltd. Virtual reality interactive system, method and computer storage medium
US20190228263A1 * 2018-01-19 2019-07-25 Seiko Epson Corporation Training assistance using synthetic images

Similar Documents

Publication Publication Date Title
CN110610547B (en) Cabin practical training method, system and storage medium based on virtual reality
CN107221223B (en) Virtual reality cockpit system with force/tactile feedback
US9892563B2 (en) System and method for generating a mixed reality environment
CN106527177B Multi-functional one-stop remote operation control design and simulation system and method
Higuchi et al. Flying head: a head motion synchronization mechanism for unmanned aerial vehicle control
KR101480994B1 Method and system for generating augmented reality with a display of a motor vehicle
US20170025031A1 (en) Method and apparatus for testing a device for use in an aircraft
US20140160162A1 (en) Surface projection device for augmented reality
KR101671320B1 Immersive virtual training system and method in which multiple trainees, each with a recognized individual virtual training space, join a shared virtual workspace through multiple access for collective and organizational cooperative training
WO2015180497A1 (en) Motion collection and feedback method and system based on stereoscopic vision
CN104699247A (en) Virtual reality interactive system and method based on machine vision
JP2013061937A (en) Combined stereo camera and stereo display interaction
CN104133378A (en) Real-time simulation platform for airport activity area monitoring guidance system
KR102188313B1 (en) Multi-model flight training simulator using VR
CN103543827A Implementation method for an immersive outdoor activity interaction platform based on a single camera
EP3913478A1 (en) Systems and methods for facilitating shared rendering
CN113744585B Fire accident emergency handling drill system and handling method
Viertler et al. Requirements and design challenges in rotorcraft flight simulations for research applications
KR20150007023A (en) Vehicle simulation system and method to control thereof
KR101076263B1 (en) Tangible Simulator Based Large-scale Interactive Game System And Method Thereof
CN105632271A (en) Ground simulation training system for low-speed wind tunnel model flight experiment
CN110108159B (en) Simulation system and method for large-space multi-person interaction
GB2535729A (en) Immersive vehicle simulator apparatus and method
CN103680248A (en) Ship cabin virtual reality simulation system
EP3262624A1 (en) Immersive vehicle simulator apparatus and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240109

Address after: 909-175, 9th Floor, Building 17, No. 30 Shixing Street, Shijingshan District, Beijing, 100043 (Cluster Registration)

Applicant after: Ruilishi Multimedia Technology (Beijing) Co.,Ltd.

Address before: Room 9-12, 10th floor, block B, building 7, Shenzhen Bay science and technology ecological park, 1819 Shahe West Road, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN REALIS MULTIMEDIA TECHNOLOGY Co.,Ltd.

GR01 Patent grant