CN111191322B - Virtual maintainability simulation method based on depth perception gesture recognition

Info

Publication number
CN111191322B
Authority
CN
China
Prior art keywords
virtual
gesture
model
interaction
simulation
Prior art date
Legal status
Active
Application number
CN201911255981.3A
Other languages
Chinese (zh)
Other versions
CN111191322A (en)
Inventor
刘飞洋
花斌
李荣强
杨旭东
李力莎
Current Assignee
AVIC Chengdu Aircraft Design and Research Institute
Original Assignee
AVIC Chengdu Aircraft Design and Research Institute
Priority date: 2019-12-10
Filing date: 2019-12-10
Application filed by AVIC Chengdu Aircraft Design and Research Institute
Priority to CN201911255981.3A
Publication of CN111191322A (2020-05-22)
Application granted
Publication of CN111191322B (2022-05-17)
Legal status: Active

Landscapes

  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a virtual maintainability simulation method based on depth-perception gesture recognition, comprising the following steps: 1) hardware environment construction; 2) data preparation; 3) functional layer implementation; 4) interaction layer implementation. The method adopts non-wearable depth-perception gesture recognition interaction equipment, so that users can perform natural, intuitive and high-precision gesture interaction that conforms to the natural usage habits of the human body, and so that the method can be used by a wide range of people without discrimination. When the method is used for maintainability simulation of each aircraft compartment during the aircraft design stage, simulation operations can be performed directly on the design digital mock-up in a human-in-the-loop manner within the virtual reality environment, which greatly simplifies the maintainability simulation process and improves efficiency. Meanwhile, by defining operation steps and operation gestures, the problems of high interaction randomness and low interaction precision in human-in-the-loop real-time maintainability simulation are solved, and its practicability and accuracy are improved.

Description

Virtual maintainability simulation method based on depth perception gesture recognition
Technical Field
The invention belongs to the technical field of aircraft maintenance and relates to a virtual maintainability simulation method based on depth perception gesture recognition.
Background
In the field of aircraft design, digital design technology and VR (virtual reality) technology have matured and are being applied ever more deeply and widely to virtual maintainability simulation in the design stage. In virtual maintainability simulation, a designer performs simulation, verification and analysis of maintenance processes and maintenance actions on a digital prototype model during design, verifies the visibility and accessibility of the maintenance object, the maintenance channel and so on, and optimizes the overall design and maintenance scheme of the aircraft product through continuous iteration. Based on a virtual reality environment, a designer can interact directly with the aircraft digital prototype from a first-person perspective in a human-in-the-loop simulation environment, simulate the maintenance operation process, and analyze maintenance indexes such as accessibility and visibility, improving the efficiency and accuracy of maintenance design and simulation.
Current virtual maintainability simulation environments fall into three main types: multi-channel immersive, desktop, and helmet-based. Different hardware systems have different hardware environments, interaction devices and human-computer interaction modes. The typical interactive device of a multi-channel immersive virtual reality environment is the Flystick handle of an ART optical tracking system; although its immersive realism is good, the software and hardware are expensive and the interactive functions of the Flystick handle are limited. Desktop interactive equipment is usually a keyboard and mouse, with which intuitive three-dimensional interaction is difficult. Helmet-based systems usually use a handle paired with the helmet, or a wearable motion-capture suite bound to the operator's body; such external equipment improves the accuracy and stability of body and gesture motion recognition, but masks the natural expression of gestures, and because wearable devices sit on the body with considerable placement variability and do not fit all body types, the match between the natural hand and the virtual hand is poor and motions become distorted. Moreover, human biological characteristics mean that the hand's trajectory in the interaction space cannot be standardized: during virtual maintainability simulation or training, gesture actions are generated in real time and may collide with the maintenance model at any position and angle, so many existing human-computer interaction technologies struggle to understand the gesture input intent and to judge whether a gesture meets the maintenance operation requirements needed to complete the interaction.
Disclosure of Invention
The purpose of the invention is to free human-computer interaction from the constraints of wearable motion-capture equipment during virtual maintainability simulation of an aircraft, to enable natural, intuitive, high-precision gesture interaction that conforms to the natural usage habits of the human body, and to serve a wide range of users without discrimination. To this end, a virtual maintainability simulation method based on depth-perception gesture recognition is provided, built on the existing helmet-based virtual reality environment.
The technical solution of the invention is as follows: a virtual maintainability simulation method based on depth perception gesture recognition comprises the following steps:
1) hardware environment set-up
The hardware environment layer adopts non-wearable gesture recognition interaction equipment based on depth perception, combined with a VR helmet, to construct a human-in-the-loop virtual simulation environment.
2) Data preparation
Data preparation covers the prototype model, tool models, scene models and the maintenance task flow required to complete the virtual scene arrangement.
The prototype model includes the operation models specified by the operation items of the maintenance task flow.
3) Functional layer implementation
a) Gesture definition: a gesture definition states whether the way an operator grabs/picks an operation model in the human-in-the-loop virtual maintainability simulation system meets the operation requirement, and provides the basis on which the system interprets gesture input intent. An operation succeeds only when the operator operates with the specified gesture pose and angle; otherwise it fails.
b) Gesture judgment: when the operation model is dismounted or mounted, the virtual gesture is judged; when the gesture meets the predefined gesture requirements, binding of the relative position relation is allowed; otherwise no binding is performed.
c) Operation model identification: determine whether the operation model is consistent with the operation model specified in the maintenance task flow operation item, and support transfer of the operation from the virtual hand to the operation tool and then to the operation model, or from the virtual hand directly to the operation model.
d) Maintenance flow control: in the human-in-the-loop free operation environment, logically control the operator's operation steps according to the maintenance task flow.
4) Interaction layer implementation
a) Depth-perception gesture recognition interaction: develop a customized gesture controller to process, in real time, the hand motion data frames acquired by the depth-recognition gesture interaction equipment; call the gesture controller class to start recognition of virtual gestures; and, together with the interaction event trigger class, send event messages that call event handlers to complete processing of interaction events, drive the operation model's motion, and drive the UI to display information.
b) UI interaction control: develop an interaction event trigger; use rigid-body triggers and collision detection to judge the interaction of the virtual hand model with the operation model and with the multi-level UI interface; and trigger the responses of the operation model and the UI interface by event delegation.
c) Information display: develop UI controls to display the model structure tree and the maintenance flow tree.
Compared with the typical interactive equipment of traditional virtual reality environments, the non-wearable depth-perception gesture recognition interaction equipment adopted by the method frees users from the constraints of wearable motion-capture equipment, enables natural, intuitive, high-precision gesture interaction conforming to the natural usage habits of the human body, can be used by a wide range of people without discrimination, and achieves a working precision of 0.01 mm in virtual maintainability simulation. It overcomes the inability of traditional gesture interaction to regularize the hand's trajectory in the interaction space, which is determined by human biological characteristics, as well as the unintuitive, inconvenient pattern of developing specific controls that take gestures as input and emit events or messages as output. By introducing specific predefined gestures oriented to virtual maintainability simulation and acquiring the virtual hand's joint data and spatial position in real time, the method recognizes whether the current gesture action meets the predefined gesture requirements; only when the conditions are met can the maintenance operation be executed, which greatly improves the naturalness and accuracy of gesture recognition interaction. Meanwhile, by combining maintenance flow information, operation object recognition is achieved within the gesture interaction, a corresponding gesture-operation tool-operation model association is established, and multi-stage operation transfer of virtual hand-operation tool-operation object is realized.
Using this technology for maintainability simulation of each aircraft compartment during the aircraft design stage, a designer can perform simulation operations directly on the design digital mock-up in a human-in-the-loop manner in the virtual reality environment, which greatly simplifies the simulation process and improves maintainability simulation efficiency. Meanwhile, by defining operation steps and operation gestures, the problems of high interaction randomness and low interaction precision in human-in-the-loop real-time simulation are solved, and the practicability and accuracy of such simulation are improved.
Drawings
FIG. 1 is a schematic diagram illustrating a virtual maintainability simulation method according to the present invention;
FIG. 2 is a schematic diagram of a human-in-loop virtual simulation environment;
FIG. 3 is a GenerateModelInfo scene structure tree;
FIG. 4 is a diagram illustrating predefined positions of a gesture model, a tool and an interaction model in a scene;
FIG. 5 shows the parameter settings in the GenerateModelInfoController script;
FIG. 6 is a maintainability simulation process data file;
FIG. 7 is a flow chart of gesture determination;
FIG. 8 is a diagram of depth-perception gesture recognition judgment and operation effects according to an embodiment of the present invention.
Detailed Description
The following further describes the embodiments of the present invention with reference to the drawings.
A virtual maintainability simulation method based on depth-perception gesture recognition is shown in FIG. 1 and comprises the following steps:
1) hardware environment set-up
According to the technical scheme, the hardware environment layer adopts non-wearable gesture recognition interaction equipment based on depth perception, combined with a VR (virtual reality) helmet, to construct the human-in-the-loop virtual simulation environment shown in FIG. 2. The depth-perception gesture recognition interaction equipment works on the stereoscopic vision principle: a controller fitted with two cameras can locate the coordinates of an object in three-dimensional space much as human eyes do. The target is captured simultaneously by the two cameras, its parallax is computed in real time, and its spatial position is obtained. The VR helmet provides a fully enclosed VR visual environment that immerses the wearer in the digital space; the depth-perception gesture recognition equipment is bound to the front of the VR helmet so that, as the operator's head moves, the hand motion data in front of the operator's eyes is acquired in real time.
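The following is a minimal, purely illustrative sketch of the depth-from-parallax computation described above, written in C# for consistency with the Unity3D platform used later in this document. It assumes an idealized rectified camera pair; the class, method and parameter names (focalPx, baselineM, cx, cy) are assumptions for illustration, not the interface of any actual device.

    using UnityEngine;

    // Illustrative only: depth from stereo parallax for a rectified
    // two-camera controller. focalPx is the focal length in pixels,
    // baselineM the camera separation in metres, (cx, cy) the principal
    // point. All names are assumptions, not a real device API.
    public static class StereoDepth
    {
        // Position of a hand feature seen at (uL, v) in the left image
        // and (uR, v) in the right image, in the left camera frame.
        public static Vector3? Triangulate(float focalPx, float baselineM,
                                           float uL, float uR, float v,
                                           float cx, float cy)
        {
            float disparity = uL - uR;        // parallax in pixels
            if (disparity <= 0f) return null; // unmatched or at infinity

            float z = focalPx * baselineM / disparity; // depth from parallax
            float x = (uL - cx) * z / focalPx;         // back-project to metres
            float y = (v - cy) * z / focalPx;
            return new Vector3(x, y, z);
        }
    }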
2) Data preparation
Data preparation covers the prototype model, tool models, scene models and the maintenance flow required to complete the virtual scene arrangement. The prototype model data comes from the product data management (PDM) system, ensuring that the maintainability simulation prototype data is homologous with the design data; through lightweight processing and format conversion it becomes a single FBX-format file with a complete BOM structure. Materials from the material library are matched to specific model components, material information is read, material rendering of the model is completed, and the rendered model effect is baked.
3) Functional layer implementation
a) Gesture definition: a gesture definition states whether the way an operator grabs/picks an operation model object in the human-in-the-loop virtual maintainability simulation system meets the operation requirement, and is the basis on which the system interprets gesture input intent. An operation succeeds only when the operator operates with the specified gesture pose and angle; otherwise it fails. Gestures must therefore be planned and defined reasonably, and the correct gesture parameter ranges for maintaining the dismounted/mounted model (including single/two-hand operation, hand spatial position, key joint bending values and the like) are defined in the Unity3D development platform; see the data-structure sketch after this list.
b) Gesture judgment: when the operation model is dismounted or mounted, the virtual gesture is judged. When the gesture meets the predefined gesture requirements, binding of the relative position relation is allowed (the relative positions of the virtual hand, the virtual tool and the dismounted operation model are fixed, and after binding the operation model and the virtual tool move with the virtual hand's position); when the gesture does not meet the requirements, no binding can be performed.
c) Operation object identification: determine whether the operation model is consistent with the model specified in the maintenance task flow operation item, and support both operation transfer relations, 'virtual hand-operation tool-operation object' and 'virtual hand-operation object'.
d) Maintenance flow control: in the human-in-the-loop free operation environment, logically control the operator's operation steps according to the maintenance task flow.
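By way of illustration only, a predefined gesture of the kind described in a) could be represented as below. This is a minimal C# sketch under the assumption of a simple parameter schema (hand count, a position tolerance, a bend range per key joint); none of these names come from the patent itself.

    using System;
    using UnityEngine;

    // Minimal sketch of a predefined maintenance gesture: number of hands,
    // a tolerance sphere around the required hand position, and a permitted
    // bend range (degrees) for each key joint. Names are illustrative.
    [Serializable]
    public class GestureDefinition
    {
        public bool twoHanded;                // single- or two-hand operation
        public Vector3 requiredPosition;      // hand position in model space
        public float positionTolerance = 0.05f;
        public Vector2[] jointBendRanges;     // x = min, y = max per key joint

        public bool Matches(Vector3 handPos, float[] jointBends)
        {
            if (Vector3.Distance(handPos, requiredPosition) > positionTolerance)
                return false;                 // hand not at the required spot
            for (int i = 0; i < jointBendRanges.Length; i++)
                if (jointBends[i] < jointBendRanges[i].x ||
                    jointBends[i] > jointBendRanges[i].y)
                    return false;             // a key joint is out of range
            return true;                      // gesture meets the definition
        }
    }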
4) Interaction layer implementation
a) Depth-perception gesture recognition interaction: develop a customized gesture controller to process, in real time, the hand motion data frames (including the data of every hand joint and spatial position information) acquired by the depth-recognition gesture interaction equipment; call the gesture controller classes to start recognizing virtual hand gestures such as circle drawing, swiping, linear motion, clicking, touch-screen, holding, grabbing and twisting; and, together with the interaction event trigger classes, send event messages that call event handlers to complete processing of interaction events, drive operation model motion, display UI information, and so on.
b) UI interaction control: develop an interaction event trigger; use rigid-body triggers and collision detection to judge the interaction of the virtual hand model with the operation model and the multi-level UI interface; and trigger the responses of the operation object and the UI interface by event delegation, as sketched after this list.
c) Information display: develop UGUI controls to display information such as the model structure tree and the maintenance flow tree.
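As a hedged sketch of the trigger-plus-delegation pattern in b), the fragment below shows one way a rigid-body trigger on the virtual hand could raise a C# event that both the operation model response and the UI subscribe to. The class names, the tag "OperationModel" and the subscriber are assumptions for illustration.

    using System;
    using UnityEngine;

    // A trigger collider on the virtual hand raises an event when it
    // touches an operation model; subscribers handle it (event delegation).
    public class InteractionEventTrigger : MonoBehaviour
    {
        public static event Action<GameObject> ModelTouched;

        void OnTriggerEnter(Collider other)
        {
            // Requires a kinematic Rigidbody on the hand and colliders on models.
            if (other.CompareTag("OperationModel"))
                ModelTouched?.Invoke(other.gameObject); // delegate dispatch
        }
    }

    // Example subscriber: a UI element that reports the touched model.
    public class ModelInfoPanel : MonoBehaviour
    {
        void OnEnable()  { InteractionEventTrigger.ModelTouched += Show; }
        void OnDisable() { InteractionEventTrigger.ModelTouched -= Show; }
        void Show(GameObject model) { Debug.Log("Touched: " + model.name); }
    }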
Example:
(1) virtual scene construction
1) Virtual maintenance operation three-dimensional scene
Import the processed three-dimensional scene model into the Unity main scene; adjust the scene material effects; add lighting and shadows, and a skybox or pre-baked environment map as needed; and adjust the position of the three-dimensional scene model and the size, position and orientation of auxiliary models (such as the operation platform) to ensure the models initialize and display at the correct positions in the scene.
2) Virtual maintenance three-dimensional interface
The three-dimensional UI interactive interface is designed in graphics software such as Photoshop, with a uniform style and a clear, easy-to-understand functional layout. The interface is imported into Unity and interactive function components are added, such as Button, Slider, Toggle and other event-triggering components.
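The wiring sketch below is illustrative only: it attaches listeners to the standard UGUI Button and Toggle components named above. The field names and handler bodies are assumptions, not the project's actual interface code.

    using UnityEngine;
    using UnityEngine.UI;

    // Minimal wiring of the imported three-dimensional UI using standard
    // UGUI event-triggering components. Names are illustrative.
    public class MaintenanceUIBinder : MonoBehaviour
    {
        public Button nextStepButton;
        public Toggle structureTreeToggle;

        void Start()
        {
            nextStepButton.onClick.AddListener(
                () => Debug.Log("Advance to next maintenance step"));
            structureTreeToggle.onValueChanged.AddListener(
                shown => Debug.Log("Structure tree visible: " + shown));
        }
    }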
(2) Aircraft prototype model processing
External aircraft model data in FBX format is loaded dynamically when the virtual maintainability simulation software runs, and the original BOM structure of the prototype model is displayed at runtime. Empty nodes should be avoided as far as possible in the prototype's BOM structure, which must be standardized as follows:
1) check the prototype's model structure in CATIA software to ensure every node is named to specification and no model node has an empty name;
2) convert the model format: import the original model into GPure and export it in FBX format, or export an OBJ-format model from CATIA, prepare the model maps and bake the rendering in 3DsMax, then export FBX;
3) if the training subject model consists of several sub-models, the sub-models must be merged in CATIA, GPure or 3DsMax, and finally a single FBX model file is exported.
(3) Generating the maintainability simulation flow data file
The maintainability simulation flow performs matching and recognition against the gesture and tool prefab models of each step, thereby achieving high-precision gesture interaction. A gesture and tool prefab definition library is therefore built into the maintainability simulation flow, and gestures and tools are placed at the corresponding positions and angles of the operation models as constraints and limits. The specific steps are:
1) creating a gesture and tool model in 3DsMax according to maintainability simulation requirements;
2) import the gesture and tool models into the Unity GenerateModelInfo scene, initialize their size, position and angle, and create prefabs;
3) import the processed prototype model into the Unity GenerateModelInfo scene and create the corresponding maintainability simulation steps according to the simulation requirements; after a step is created, select a virtual gesture model or tool into the three-dimensional scene, predefine the angle and position of the gesture or tool and the intermediate and final motion positions of the interactive model for that step, and record the parameters.
The GenerateModelInfo scene structure tree is shown in FIG. 3, and the predefined positions of the gesture model, tool model and operation model in the scene are shown in FIG. 4.
4) the virtual hand usually adopts a right-handed coordinate system, and parameters such as measurement units differ from the Unity defaults, so initialization parameters such as the virtual hand coordinate system and camera position must be adjusted to ensure that the virtual hand and the prototype model are recognized at the correct scale, and that gesture behavior triggers the correct action response of the virtual prototype;
5) after the positions and angles of the gestures or tools for every maintainability simulation step, and the intermediate and final positions of interactive model motion, have been adjusted and defined, set the parameters corresponding to each step in the GenerateModelInfoController script as shown in FIG. 5; running the script automatically generates the complete maintainability simulation flow data XML file shown in FIG. 6.
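A minimal sketch of such an export step is given below. The schema (step name, tool, gesture pose, final model position) is an assumption for illustration, not the actual file format shown in FIG. 6.

    using System.Collections.Generic;
    using System.IO;
    using System.Xml.Serialization;
    using UnityEngine;

    // Illustrative export of the recorded per-step parameters to an XML
    // flow data file. The schema is an assumption, not the FIG. 6 format.
    public class SimulationStep
    {
        public string stepName;
        public string toolName;
        public Vector3 gesturePosition;    // predefined hand position
        public Vector3 gestureAngles;      // predefined hand angles (deg)
        public Vector3 modelFinalPosition; // end pose of the operation model
    }

    public static class FlowDataExporter
    {
        public static void Export(List<SimulationStep> steps, string path)
        {
            var serializer = new XmlSerializer(typeof(List<SimulationStep>));
            using (var writer = new StreamWriter(path))
                serializer.Serialize(writer, steps); // writes the flow data XML
        }
    }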
(4) Development of depth-perception gesture recognition, interaction and judgment functions
Add the virtual hand model prefab to the Unity main scene, add a gesture controller to the virtual hand, and receive the motion data frames transmitted by the sensor in real time:
1) develop a customized controller to process the motion data frames and call the controller class to start recognizing gesture action changes. When a collision between the virtual hand and the prototype model is detected and the virtual hand closes into a fist, the event message sent by the interaction event trigger class is processed, and the prototype model is picked up and moves and rotates with the virtual hand; when open-hand data is recognized, the pickup of the prototype model is released.
2) develop a gesture interaction event trigger and judge the interaction between the virtual hand and the UI using rigid-body triggers and ray detection; when a ray detects collision information, the gesture interaction event trigger immediately fires an event, sends the event message, and calls the event handler to complete processing of the interaction event.
3) develop a gesture judgment controller. When a prototype model is dismounted or mounted, it reads the correct gesture's numeric ranges recorded in the maintainability simulation flow data XML file (including single/two hands, hand spatial position, angle, key joint bending values and the like) and judges the operator's gesture. When the gesture meets the requirements, binding of the relative position relation is allowed (the relative positions of the virtual hand, virtual tool and operation model are fixed, and after binding the operation model and virtual tool move with the virtual hand) and the maintenance operation can be executed; when the gesture does not meet the requirements, no binding is performed. The gesture judgment flow is shown in FIG. 7, and the depth-perception gesture recognition judgment and operation effects of this embodiment of the invention are shown in FIG. 8.
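Reusing the GestureDefinition sketch above, the fragment below illustrates the binding step just described: when the operator's gesture satisfies the recorded ranges, the tool and operation model are parented to the virtual hand so their relative positions stay fixed, and they are released again on an open-hand gesture. Class and method names are assumptions, not the patent's implementation.

    using UnityEngine;

    // Illustrative binding controller: parent tool and model to the hand
    // when the gesture matches, release them when the hand opens.
    public class GestureBindingController : MonoBehaviour
    {
        public Transform virtualHand;

        public bool TryBind(GestureDefinition def, Vector3 handPos,
                            float[] jointBends, Transform tool, Transform model)
        {
            if (!def.Matches(handPos, jointBends))
                return false;                  // gesture out of range: no binding

            tool.SetParent(virtualHand, true); // keep current world pose
            model.SetParent(tool, true);       // model follows tool follows hand
            return true;                       // maintenance operation may proceed
        }

        public void Release(Transform tool, Transform model)
        {
            model.SetParent(null, true);       // drop the pickup on open hand
            tool.SetParent(null, true);
        }
    }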

Claims (8)

1. A virtual maintainability simulation method based on depth perception gesture recognition is characterized by comprising the following steps:
1) hardware environment set-up
The hardware environment layer adopts non-wearable gesture recognition interactive equipment based on depth perception and is matched with a VR helmet to construct a virtual simulation environment of a human in a loop;
2) data preparation
data preparation comprises the prototype model, tool models, scene models and the maintenance task flow required to complete the virtual scene arrangement;
the prototype model comprises the operation models specified by the maintenance task flow operation items;
3) functional layer implementation
a) gesture definition: the gesture definition states whether the way an operator grabs/picks an operation model in the human-in-the-loop virtual maintainability simulation system meets the operation requirement, and provides the basis for the system to effectively understand gesture input intent; the operation succeeds only when the operator operates with the specified gesture pose and angle, and otherwise fails;
b) gesture judgment: when the operation model is dismounted or mounted, the virtual gesture is judged, and when the gesture meets the gesture definition requirements, binding of the relative position relation is allowed; otherwise, no binding can be performed;
c) operation model identification: determining whether the operation model is consistent with the operation model specified in the maintenance task flow operation item, and satisfying the two operation transfer relations 'virtual human hand-operation tool-operation model' and 'virtual human hand-operation model';
d) maintenance flow control: in the human-in-the-loop free operation environment, logically controlling the operator's operation steps according to the maintenance task flow;
4) interaction layer implementation
a) depth-perception gesture recognition interaction: developing a customized gesture controller to process, in real time, the hand motion data frames acquired by the depth-recognition gesture interaction equipment, calling the gesture controller class to start recognition of the virtual gesture, sending event messages together with the interaction event trigger class to call the event handler to complete processing of interaction events, driving the operation model to move, and driving the UI to display information;
b) UI interaction control: developing an interaction event trigger, judging the interaction of the virtual hand model with the operation model and the multi-level UI interface using rigid-body triggers and collision detection, and triggering the responses of the operation model and the UI interface by event delegation;
c) information display: developing UI controls to display the information of the prototype's model structure tree and maintenance flow tree.
2. The virtual maintainability simulation method according to claim 1, characterized in that: the non-wearable gesture recognition interaction equipment adopts the stereoscopic vision principle and is provided with a twin-camera controller that can locate the coordinates of an object in three-dimensional space.
3. The virtual maintainability simulation method according to claim 1, characterized in that: the prototype model data comes from a product data management system, ensuring that the maintainability simulation prototype data is homologous with the design data, and is turned into a single FBX-format file with a complete BOM structure through lightweight processing and format conversion.
4. The virtual maintainability simulation method according to claim 1, characterized in that: the correct gesture parameter ranges for maintaining the dismounted/mounted model are defined in a development platform, the gesture parameters including single/two-hand operation, hand spatial position and key joint bending values.
5. The virtual maintainability simulation method according to claim 1, characterized in that: in b) of step 3), the binding of the relative position relation specifically means: the relative positions of the virtual hand, the virtual tool and the operation model are fixed, and after binding the operation model and the virtual tool move with the virtual hand's position.
6. The virtual maintainability simulation method according to claim 1, characterized in that: in a) of step 4), the hand motion data frames include the joint data and spatial position information of the hand.
7. The virtual maintainability simulation method according to claim 6, characterized in that: in a) of step 4), the recognized gesture content is: circle drawing, swiping, linear motion, clicking, touch-screen, grasping, grabbing and twisting of the virtual hand.
8. The virtual maintainability simulation method according to claim 4, characterized in that: the development platform is the Unity3D development platform.
CN201911255981.3A (priority date 2019-12-10, filing date 2019-12-10) - Virtual maintainability simulation method based on depth perception gesture recognition - Active - CN111191322B

Applications Claiming Priority (1)

Application Number: CN201911255981.3A
Priority/Filing Date: 2019-12-10
Title: Virtual maintainability simulation method based on depth perception gesture recognition

Publications (2)

Publication Number Publication Date
CN111191322A 2020-05-22
CN111191322B 2022-05-17

Family

ID=70710962

Family Applications (1)

Application Number: CN201911255981.3A (Active) - CN111191322B - Virtual maintainability simulation method based on depth perception gesture recognition

Country Status (1)

Country: CN - CN111191322B

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112329246A (en) * 2020-11-10 2021-02-05 上海精密计量测试研究所 Virtual verification method and system for maintainability design of solar cell array of space station
CN112634464A (en) * 2020-12-21 2021-04-09 中国航空工业集团公司沈阳飞机设计研究所 Maintenance work flow design display method
CN114690884A (en) * 2020-12-28 2022-07-01 中国科学院沈阳自动化研究所 Ship equipment arrangement visual demonstration system based on AR glasses
CN113223182B (en) * 2021-04-28 2024-05-14 深圳市思麦云科技有限公司 Learning terminal applied to automobile industry based on MR (magnetic resonance) glasses technology
CN113419636B (en) * 2021-08-23 2021-11-30 北京航空航天大学 Gesture recognition method and tool automatic matching method in virtual maintenance
CN114387836B (en) * 2021-12-15 2024-03-22 上海交通大学医学院附属第九人民医院 Virtual operation simulation method and device, electronic equipment and storage medium
CN114237403A (en) * 2021-12-27 2022-03-25 郑州捷安高科股份有限公司 Operation gesture detection processing method, equipment and medium based on VR (virtual reality) interactive equipment


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102663182A (en) * 2012-03-30 2012-09-12 南京航空航天大学 Intelligent virtual maintenance training system for large equipment
CN103020332A (en) * 2012-10-22 2013-04-03 南京航空航天大学 Intelligent virtual maintenance training system for civil aircraft
CN103761667A (en) * 2014-01-09 2014-04-30 贵州宝森科技有限公司 Virtual reality e-commerce platform system and application method thereof
CN107357427A (en) * 2017-07-03 2017-11-17 南京江南博睿高新技术研究院有限公司 A kind of gesture identification control method for virtual reality device
CN108287483A (en) * 2018-01-17 2018-07-17 北京航空航天大学 A kind of immersion Virtual Maintenance Simulation method and system towards Product maintenance verification
WO2019164056A1 (en) * 2018-02-23 2019-08-29 (주)프론티스 Server, method and wearable device for supporting maintenance of military equipment on basis of binary search tree in augmented reality, virtual reality, or mixed reality based general object recognition

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Rapid Construction of Virtual Maintenance Training System Based on Unity3D; Bai Weibing et al.; 2018 IEEE International Conference of Safety Produce Informatization; 2019-04-15; pp. 158-160 *
Research on virtual simulation of ship block assembly based on VR technology (in Chinese); Hu Anchao; China Master's Theses Full-text Database, Engineering Science and Technology II; 2019-01-15; pp. C036-190 *
Research on a virtual-reality-based maintenance system for mechanical products (in Chinese); Song Guomin et al.; Modular Machine Tool & Automatic Manufacturing Technique; 2012-08-20; No. 8; pp. 43-46 *
Design and implementation of behavior trees for a weapon equipment virtual maintenance training system (in Chinese); Xu Wensheng et al.; Journal of System Simulation; 2018-07-30; Vol. 30, No. 7; pp. 2722-2728 *

Also Published As

Publication number Publication date
CN111191322A 2020-05-22

Similar Documents

Publication Publication Date Title
CN111191322B (en) Virtual maintainability simulation method based on depth perception gesture recognition
US20240168602A1 (en) Throwable interface for augmented reality and virtual reality environments
CN107430437B (en) System and method for creating a real grabbing experience in a virtual reality/augmented reality environment
Nizam et al. A review of multimodal interaction technique in augmented reality environment
Wang et al. Augmented reality aided interactive manual assembly design
CN103258078A (en) Human-computer interaction virtual assembly system fusing Kinect equipment and Delmia environment
CN104156068A (en) Virtual maintenance interaction operation method based on virtual hand interaction feature layer model
Zaldívar-Colado et al. A mixed reality for virtual assembly
O'Hagan et al. Visual gesture interfaces for virtual environments
CN112181132A (en) Model evaluation method and system based on ray interaction task in virtual environment
CN112380735A (en) Cabin engineering virtual assessment device
CN113419622A (en) Submarine operation instruction control system interaction method and device based on gesture operation
Wang et al. Assembly design and evaluation based on bare-hand interaction in an augmented reality environment
Xiong et al. A framework for interactive assembly task simulationin virtual environment
Yuan et al. The virtual interaction panel: an easy control tool in augmented reality systems
CN114020978B (en) Park digital roaming display method and system based on multi-source information fusion
Yin et al. An empirical study of an MR-enhanced kinematic prototyping approach for articulated products
CN115268646A (en) Man-machine collaborative construction process sensing system, device, analysis method and medium
CN109643182A (en) Information processing method and device, cloud processing equipment and computer program product
CN115239636A (en) Assembly detection method based on augmented reality technology
CN112947238B (en) Industrial robot real-time control system based on VR technique
CN113918013A (en) Gesture directional interaction system and method based on AR glasses
CN114971219A (en) Multi-view-angle human factor dynamic evaluation method and system based on augmented reality
Boudoin et al. Towards multimodal human-robot interaction in large scale virtual environment
Varga et al. Survey and investigation of hand motion processing technologies for compliance with shape conceptualization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant