CN110365666B - Multi-terminal fusion cooperative command system based on augmented reality in the military field


Info

Publication number
CN110365666B
CN110365666B (application CN201910586319.XA; published as CN110365666A)
Authority
CN
China
Prior art keywords
target
battlefield
human
instruction
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910586319.XA
Other languages
Chinese (zh)
Other versions
CN110365666A (en)
Inventor
栾明君
洪岩
栾凯
卞强
宁阳
陈艳
孟德地
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 15 Research Institute
Original Assignee
CETC 15 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 15 Research Institute
Priority to CN201910586319.XA
Publication of CN110365666A
Application granted
Publication of CN110365666B
Legal status: Active
Anticipated expiration


Classifications

    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06T 19/006 Mixed reality
    • H04L 67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L 67/131 Protocols for games, networked simulations or virtual reality
    • H04N 13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 13/398 Synchronisation thereof; Control thereof
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment


Abstract

The invention discloses a multi-terminal fusion cooperative command system based on augmented reality for the military field. The system comprises ten parts: user rights management, battlefield information source access, a command-and-control system integration framework, external content loading, virtual battlefield space modeling, virtual battlefield space display, cooperative interaction shared support, human-human image collaboration, human-target collaboration, and human-target interaction. Within the same virtual battlefield space, commanders in different locations can carry out command activities such as information collection and critical-situation assessment, greatly improving cooperative command efficiency.

Description

Multi-terminal fusion cooperative command system based on augmented reality in the military field
Technical Field
The invention relates to the field of computer software, and in particular to a multi-terminal fusion cooperative command system based on augmented reality for the military field.
Background
Augmented reality is a technology for seamlessly integrating real-world and virtual-world information. Entity information that is ordinarily difficult to experience within a given region of space and time in the real world (visual information, sound, and so on) is simulated and overlaid by computer technology, so that virtual information is applied to the real world and perceived by human senses, yielding a sensory experience beyond reality. In augmented reality, the real environment and virtual objects are superimposed in the same picture or space in real time and displayed simultaneously; the two kinds of information complement one another and are seamlessly integrated.
At present, the cooperative command mode of our forces is constrained by time, space, and distance. Joint command and joint control between command centers at all levels are carried out through traditional collaborative means such as electronic whiteboards, audio/video conferences, and collaborative plotting based on a geographic information system. As a result, the collaboration process is difficult to carry out in real time, and commanders perceive the battlefield situation through a mouse and keyboard, so interactivity is poor.
In terms of augmented reality equipment, enterprises at home and abroad represented by Microsoft, Magic Leap, Intel (RealSense), Hewlett-Packard, and HTC have released glasses devices. A camera on the device can recognize the user's gestures, providing gesture-based scene interaction, and target positioning is realized through gaze (focus) tracking.
In terms of portrait fusion, vendors represented by Occipital (Structure) and Owl View provide solutions for projecting portraits into a virtual scene, but at present most can only simulate portraits; few achieve a lifelike holographic three-dimensional projection of a real person, even though such projection markedly improves the experience of multi-person collaboration.
In terms of target positioning, most solutions currently on the market rely on gaze tracking (e.g., Microsoft HoloLens). This positioning mode is inflexible, places high demands on head position and inclination, and causes strong fatigue during long operation, so it is unsuited to complex service scenarios. In terms of gesture and voice interaction, the main technical difficulty is the low recognition accuracy of user gestures: most solutions can recognize only a few simple gestures (Microsoft HoloLens, for example, recognizes only two).
In terms of multi-person collaboration, most research focuses on how to achieve clear portrait projection in an augmented reality scene; collaborative interaction among multiple people within the scene remains a largely blank field with no mature solution at present.
Measured against the mainstream augmented reality interaction solutions on the market and the published augmented reality application papers and patents, there is a gap with respect to the demands of remote cooperative command in the military field. Specifically, such applications must be tightly combined with the battlefield environment: panoramic display of the battlefield, rapid positioning and control of enemy and friendly targets, rapid switching of battlefield scenes, rapid transfer of information among commanders at all levels, accurate capture of commanders' expressions and actions, real-time access to various battlefield information sources with effective display, and operations such as situation plotting and combat assessment based on the battlefield environment. They also impose strict requirements on response speed, target positioning precision, real-time information transmission, and other indexes. No current augmented-reality application meets the requirements of cooperative operations in the military field.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a multi-terminal fusion cooperative command system based on augmented reality. It focuses on core cooperative interaction requirements such as joint command and joint control, and provides a command environment for commanders at every level: on the basis of importing various battlefield information resources into the same virtual battlefield space, multiple commanders in different locations can efficiently carry out basic human-human collaboration and human-target interaction, completing command activities such as situation plotting, simulation rehearsal, information gathering, and critical-situation assessment, which greatly improves cooperative command efficiency.
In order to solve the technical problem, the invention adopts the following technical scheme. The augmented-reality-based multi-terminal fusion collaborative command system for the military field comprises ten parts: user rights management, battlefield information source access, a command-and-control (C2) system integration framework, external content loading, virtual battlefield space modeling, virtual battlefield space display, cooperative interaction shared support, human-human image collaboration, human-target collaboration, and human-target interaction.
User rights management comprises room management, personnel management, rights management, and collaboration invitation. Personnel management maintains the information of every user who may join the virtual scene and participate in collaboration; through room management an initiator can create a collaboration space within a virtual scene; through collaboration invitation the initiator selects personnel to join the collaboration space; and through rights management the collaboration rights of each person in that space are set. Before entering the virtual battlefield environment a commander must log in and be verified by fingerprint and password; only a verified user can enter the virtual battlefield space and use the related functions.
Battlefield information source access provides, in an active or passive mode, guidance and access support for land, sea, air, and space information sources in both fixed and tactical environments. Information sources obtained in real time are processed and analyzed into battlefield situation information that supports cooperative command, and are then either bound to a target or displayed centrally in an information-source window.
The C2 system integration framework provides integration of C2 systems, supporting Web-style C2 applications as well as RPC, gRPC, WebService, and Restful C2 function services, so that, combined with the human-target interaction function, a commander can operate a C2 system inside the virtual battlefield space with system feedback refreshed in real time.
External content loading opens a content window at a designated position in the virtual battlefield space, periodically polls the files in a local storage space, and loads and refreshes them in the content window; combined with the human-target interaction capability, page turning and click operations can be performed on the content. Contents in multiple formats can be loaded and displayed, including video streams.
Virtual battlefield space modeling combines a geographic information system with modeling technology to build models of targets, terrain, vegetation, and trees through a modeling process; after loading and running on augmented reality equipment, a global virtual battlefield environment is generated that realistically restores the battlefield, and the battlefield environment and the motion states of enemy and friendly targets can be viewed from different angles.
Virtual battlefield space display provides multiple browsing modes, with the feedback of every mode presented to the commander through the augmented reality equipment, so that the battlefield space can conveniently be viewed from different angles.
Cooperative interaction shared support provides clock synchronization, coordinate calculation, instruction collaboration, and state collaboration. Clock synchronization provides a common time base for collaboration among users and between users and content; coordinate calculation provides the basic algorithms for positioning the holographic three-dimensional portraits in the virtual space; instruction collaboration distributes the operation instructions converted from user gestures to every glasses device; and state collaboration distributes changed target states in the virtual scene to every glasses device.
Human-human image collaboration provides human feature extraction, multi-path portrait fusion, voice noise reduction, and audio-image synchronization. Human feature extraction collects portraits from multiple angles through a group of depth cameras; multi-path portrait fusion extracts the relevant image parameters from the portraits collected at different angles and splices them into complete holographic three-dimensional portrait parameters; voice noise reduction denoises the captured speech; and audio-image synchronization synchronizes the portrait and the speech collected at the same moment, distributing both to all glasses devices under a unified time reference so that everyone can both see the other participants and hear their voices.
Human-target collaboration provides target state feedback, acquisition, distribution, and synchronization. In target state feedback the target model reacts to an interaction instruction, changing its position, shape, color, state, and content accordingly; target state acquisition captures the change and extracts the specific changed parameter values; target state distribution takes the initiator's copy as the reference and distributes the change to everyone's augmented reality equipment; target state synchronization compares the state delivered to a commander's equipment with the local state change and, if they are inconsistent, has the collaborative client deployed on the equipment reconcile them, ensuring that the displayed target state is consistent across all equipment.
Human-target interaction provides gesture feature extraction, gesture motion analysis, interaction instruction conversion, pointed-target positioning, gesture instruction acquisition, voice instruction acquisition, synchronous instruction distribution, and instruction action callback. Gesture feature extraction senses the direction and relative position of the commander's current gesture from the auxiliary device; gesture motion analysis extracts the coherent gesture motion over a period of time; interaction instruction conversion analyzes that motion, compares it with predefined gestures, and converts it into an operation instruction; pointed-target positioning determines the coordinates and position of the pointed target from the gesture direction and relative position data combined with the coordinate system of the augmented reality scene; gesture and voice instruction acquisition trigger the instruction interface call when a user issues an instruction by gesture or voice, obtaining the instruction currently converted from gesture or speech recognition; synchronous instruction distribution delivers the interaction instruction to the other augmented reality equipment; and instruction action callback establishes an interaction channel with the operated target after the instruction is received and triggers the target's callback method, so that the target executes a feedback action.
In the above scheme, preferably, the collaboration rights include whether the user can access the virtual battlefield space, whether the user can see and collaborate with a designated user's image in that space, and which targets in the virtual battlefield space the user can see and interact with.
In the above scheme, preferably, battlefield information source access includes guiding in the various battlefield information sources in active service, which can be displayed in multiple ways: automatically attached to battlefield targets, displayed centrally by theme, or freely called up as specific items.
In the above scheme, preferably, the command-and-control system integration framework provides integration of Web-style C2 applications and of RPC, gRPC, WebService, and Restful C2 function services; a commander can select, click, double-click, drag, and pull down within the integrated C2 system in the virtual battlefield space, and the framework refreshes the system feedback in real time.
In the above scheme, preferably, external content loading supports contents in multiple formats, including external video, pictures, text, PPT, Word, and Excel, and basic paging and click operations can be performed on the loaded and displayed contents.
In the above scheme, preferably, virtual battlefield space modeling combines a geographic information system with modeling technology, and the created global virtual battlefield environment realistically restores the actual battlefield environment.
In the above scheme, preferably, the virtual battlefield space display provides multiple browsing modes, including eagle-eye view, roaming, cutting in and out of specific mission areas, and target enlargement and restoration.
In the above scheme, preferably, human-target interaction senses the gesture with which a user points at a target and, combining the coordinate-system settings with the gesture direction, accurately positions targets, content display windows, and integrated C2 system windows in the virtual battlefield environment.
In the above scheme, preferably, human-target interaction establishes a command channel between gesture/voice instructions and scene targets; on the basis of accurate target positioning, it converts the user's gesture into a control instruction, sends it to the target the user has selected, and triggers the target's feedback action.
In the above scheme, preferably, human-target interaction ensures that a target feedback action triggered by a control instruction is synchronously perceived by all commanders in the virtual battlefield environment.
In the above scheme, preferably, human-human image collaboration collects portraits from multiple angles through a group of depth cameras, extracts the relevant image parameters from the portraits collected at different angles, and splices and combines them into complete holographic three-dimensional portrait parameters.
In the above scheme, preferably, human-human image collaboration lets two or more commanders in different locations project holographic portraits into the virtual battlefield command environment; by wearing the terminal display device each person sees the complete command environment, and each person's facial features, actions, posture, and voice are synchronously perceived by the others.
In the above scheme, preferably, human-human image collaboration synchronously acquires, under the same time reference, the body images and speech audio of all users in the virtual scene and synchronously distributes them through different transmission channels to the glasses and audio playback devices worn by every user, achieving synchronized presentation of each portrait's actions, expressions, and voice.
Aiming at the low efficiency, poor real-time performance, lack of direct communication, and heavy dependence on mouse and keyboard of the traditional remote cooperative command mode in the military field, the invention provides a multi-terminal fusion cooperative command system based on augmented reality with the following characteristics:
(1) Virtual battlefield environment construction based on geographic information
A virtual battlefield model is constructed from various geographic information data, covering geographic elements such as land and ocean, desert and grassland, islands and reefs, mountains and rivers, and trees and shrubs; the deployment of enemy, friendly, and supporting forces in the battlefield space; and important targets such as artillery, tanks, vehicles, ships, and aircraft. After commanders at each level put on the augmented reality equipment, they can connect and enter the virtual battlefield environment once authenticated.
(2) Fast battlefield scene switching
Facing the macroscopic battlefield environment, the virtual battlefield provides multiple scene-switching means such as eagle-eye view, full-map roaming, whole-battlefield overview, cutting into a specific area, and focusing on a specific target; a geographic information system window can also be opened inside the virtual battlefield environment, letting a commander examine the battlefield by combining several means.
(3) Access to various battlefield information sources
The system supports access to all kinds of battlefield resources: the various battlefield information sources in active service can be taken in and displayed in multiple ways, automatically attached to battlefield targets, shown centrally by theme, or freely called up as specific items, satisfying the command needs of commanders at every level and greatly improving cooperative command efficiency.
(4) Integration of various common command functions
The system supports the integration of various command functions: operational command functions such as situation plotting, comprehensive assessment, operation planning, and document transmission can be integrated on the basis of the virtual battlefield environment; texts, videos, images, and system interfaces such as intelligence compilations, battlefield reconnaissance, combat documents, and combat plans can be comprehensively displayed; and battlefield imagery data can be accessed directly and displayed in combination with the virtual battlefield environment.
(5) Accurate target positioning in the virtual battlefield
Auxiliary sensing equipment perceives the gesture with which the user points at a target; combining the coordinate-system settings with the gesture direction, the pointed target (a ship, aircraft, tank, garrison, and so on), window display contents (loaded videos, texts, images), and the integrated command system interfaces are accurately positioned.
(6) Interaction with targets in the virtual battlefield environment
A command channel between gesture/voice instructions and scene targets is established. On the basis of accurate target positioning, the user's operation of the auxiliary sensing equipment and specific spoken words are converted into control instructions that trigger the target's feedback actions: calling up a target menu, selecting menu items, and single-clicking, double-clicking, or otherwise interacting with a target or a designated point of the virtual scene. The user can thus interact with every kind of target in the virtual scene and trigger the target's own control properties, and all feedback effects triggered by target control are synchronously perceived by the commanders in the virtual battlefield command environment.
(7) Action collaboration of multiple users in a virtual scene
Two or more commanders in different locations can project holographic portraits into the virtual battlefield command environment. By wearing the terminal display device each person sees the complete command environment, and each person's facial features, actions, posture, and voice are synchronously perceived by the others; the experience is like cooperative command around a sand table in the same room.
(8) Voice collaboration of multiple users in virtual scenarios
The body images and speech audio of all users in the virtual scene are acquired synchronously under the same time reference; the captured audio is compressed and filtered and, together with the coordinate-mapped holographic stereo portraits, is synchronously distributed through different transmission channels to the glasses and audio playback devices worn by every user, achieving synchronized presentation of each portrait's actions, expressions, and voice. Every user in the virtual scene can talk with the others and hear them, as if communicating face to face in the real world.
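To make the synchronization concrete: under a shared time reference, each portrait frame can be paired with the audio frame whose capture timestamp is closest, dropping pairs whose gap exceeds a tolerance. The Python sketch below is illustrative only; the frame format and the 20 ms tolerance are assumptions, not taken from the patent.

    def pair_audio_and_image(audio_frames, image_frames, tolerance=0.020):
        """Pair (timestamp, payload) audio and portrait frames captured
        under the same time reference; drop pairs whose timestamp gap
        exceeds the tolerance (seconds)."""
        paired = []
        for ts_img, img in image_frames:
            ts_aud, aud = min(audio_frames, key=lambda f: abs(f[0] - ts_img))
            if abs(ts_aud - ts_img) <= tolerance:
                paired.append((ts_img, img, aud))  # ship together to all glasses
        return paired

    audio = [(0.000, "a0"), (0.020, "a1"), (0.040, "a2")]
    video = [(0.001, "v0"), (0.034, "v1")]
    print(pair_audio_and_image(audio, video))
    # [(0.001, 'v0', 'a0'), (0.034, 'v1', 'a2')]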
The invention has the following beneficial effects:
a) A major step forward in the comprehensive application of human-object interaction, with practical value for popularization. By combining interaction means such as gestures and voice, commanders at each level can accurately position, issue instructions to, and receive interaction feedback from every target in the augmented reality scene (models, buttons, menus, loaded external documents, videos, system interfaces, and so on), applicable to the vast majority of joint-operation cooperative command scenarios;
b) A more accurate target positioning means, an innovation in augmented reality collaborative interaction. The precise gesture-sensing technology based on auxiliary sensing equipment adopted in this patent supports pointing and positioning over a wider field of view and a longer sight range, and accurately positions smaller targets;
c) Multi-person collaborative interaction based on holographic three-dimensional real portraits, filling a blank in augmented reality collaborative applications. After the images and voices of multiple users in different locations are captured, body feature data, depth data, color data, and voice data are extracted and put through spatial coordinate conversion and positioning, audio-video synchronization, virtual-real scene fusion, and other processing, then projected into the glasses, so that the users in the virtual scene can see and hear one another and collaborate within the scene;
d) On the basis of the virtual battlefield space, integration of command-and-control information system function interfaces, loading and display of external content resources, and interconnection of the various communication channels of the combat environment are provided, while commanders are supported in controlling the integrated content inside the virtual battlefield environment. This offers commanders at each level a new way of using C2 systems and greatly relieves the problem that commanders perceive the battlefield situation through poorly interactive mouse-and-keyboard operation.
Drawings
FIG. 1 is a design diagram of the multi-person cooperative interaction model based on the virtual battlefield.
Fig. 2 is a functional component structure diagram of the cooperative interaction system of the present invention.
FIG. 3 is a diagram of the relationship and invocation logic within the collaborative interaction system according to the present invention.
FIG. 4 is a flow chart of the collaborative interaction system implementation of the present invention.
Detailed Description
The technical solution of the present invention will be described in detail with reference to the accompanying drawings from the aspects of model design and composition structure.
(1) Model design
The model design of the patent is shown in figure 1, and comprises 11 parts of contents, such as a user gesture model, a virtual battlefield authority model, a user portrait model, virtual battlefield and portrait fusion, a multi-user cooperation model, an interaction instruction model, a battlefield dynamic target model, a battlefield static target model, a battlefield key reference object model, a battlefield landform model, a target feedback content model and the like.
The user gesture model defines user gestures, including gesture shape, gesture features, and gesture outline; it marks the pointing direction of the user's gesture, i.e., combining the virtual-scene projection reference object with the current user position and obtaining, through coordinate conversion, the orientation and pointing of the gesture in the current virtual battlefield environment; and it defines the mapping between gestures and interaction instructions, so that the user's intent is inferred from the degree of match of the gesture and the corresponding predefined interaction instruction is triggered and distributed to the designated target.
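By way of illustration only, the gesture-to-instruction mapping can be treated as template matching: a recognized gesture feature vector is scored against predefined gesture templates, and when the best match clears a threshold, the instruction bound to that template is dispatched. The following Python sketch is a minimal, hypothetical rendering of that idea; the patent does not prescribe feature representations, template names, or thresholds.

    from dataclasses import dataclass

    @dataclass
    class GestureTemplate:
        name: str          # e.g. "single-click" (illustrative)
        features: list     # reference feature vector for the gesture
        instruction: str   # interaction instruction bound to this gesture

    def match_score(observed, reference):
        # Inverse-distance similarity between two feature vectors.
        d = sum((a - b) ** 2 for a, b in zip(observed, reference)) ** 0.5
        return 1.0 / (1.0 + d)

    def gesture_to_instruction(observed, templates, threshold=0.8):
        """Return the instruction of the best-matching template, or None
        when no template matches well enough (user intent unclear)."""
        best = max(templates, key=lambda t: match_score(observed, t.features))
        return best.instruction if match_score(observed, best.features) >= threshold else None

    templates = [
        GestureTemplate("single-click", [1.0, 0.0, 0.0], "SELECT_TARGET"),
        GestureTemplate("double-click", [1.0, 1.0, 0.0], "OPEN_TARGET_MENU"),
        GestureTemplate("drag",         [0.0, 1.0, 1.0], "MOVE_TARGET"),
    ]
    print(gesture_to_instruction([0.98, 0.05, 0.0], templates))  # SELECT_TARGET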
The virtual battlefield authority model defines a user's authority to enter the virtual battlefield space along several dimensions: command level (Central Military Commission, theater, service, or command organs at each level within the services), business domain (operations, intelligence, logistics, political work, and so on), duty level (commander, army leader, division leader, regiment leader, company leader, and so on), and specific personnel selection.
The user portrait model defines the description parameters of the holographic stereo portrait, including the portrait outline, facial image features, hand image features, front and back body image features, and motion capture of the figure.
Virtual battlefield and portrait fusion defines how the holographic stereo portraits collected from several remote commanders are projected into the virtual battlefield environment: the orientation of each holographic portrait is determined from parameters such as the person's standing position relative to the capture cameras, and the projection positions of the portraits are allocated according to the number of people in the environment so that the portraits face one another.
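As an illustrative sketch of the projection-position allocation (the patent does not specify an algorithm): portraits can be distributed evenly on a circle around the shared scene, each rotated to face the center so that participants appear face to face. All parameters below are hypothetical.

    import math

    def portrait_poses(n_people, radius=2.0):
        """Distribute n holographic portraits evenly on a circle around the
        virtual battlefield and turn each toward the circle's center."""
        poses = []
        for i in range(n_people):
            angle = 2 * math.pi * i / n_people
            x, z = radius * math.cos(angle), radius * math.sin(angle)
            yaw = math.degrees(math.atan2(-x, -z))  # look toward the origin
            poses.append({"position": (round(x, 2), 0.0, round(z, 2)),
                          "yaw_deg": round(yaw, 1)})
        return poses

    for pose in portrait_poses(3):
        print(pose)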
The multi-user collaboration model defines the mode of collaboration between two or more commanders in different locations: each commander synchronously sees the virtual battlefield command environment, though from a different viewing angle because of their different positions; each commander sees the holographic portraits of the others and perceives, through the portraits, the continuous changes of their actions, facial features, postures, and positions, collaborating with them face to face; the commanders hear one another, each commander's voice being transmitted in synchrony with his current actions, facial features, and posture; the whole human-machine interaction process between any commander and a target is seen synchronously by all commanders; and two or more commanders can collaborate on and operate a target in series, the whole interaction being visible to everyone in synchrony.
The interaction instruction model defines how a commander interacts with targets in the virtual scene. A commander can move around the virtual battlefield command environment and observe it from different viewing angles; moving away from or toward the environment, the commander sees it change with distance; using an auxiliary device such as a ring or wand, the commander selects a target (pointing at it, with the target's response indicating selection) and, combined with voice instructions or operations on the auxiliary device, calls up the target's interaction menu, selects and clicks menu items, or selects, clicks, double-clicks, drags, and pulls down on a command-system function page in a window; in the content loading window the commander sees loaded local videos, plain text, and pictures, and can stop, pause, play, and mute the videos; and in the battlefield information-source display window the commander sees content such as target detection information, processed intelligence, and uploaded and issued documents.
The battlefield dynamic target model defines all interactive models in the virtual battlefield environment, including each model's appearance style, rendering color, lighting effects, the instruction set it can respond to, and its actions after responding to an instruction (movement, change of inclination, change of displayed content, change of color, and so on).
The battlefield static target model defines all models in the virtual battlefield environment that have no interaction capability, and the battlefield landform model defines all models related to the battlefield terrain; both include the model's appearance style, rendering color, and lighting effects.
The target feedback content model defines how, after a target receives an interaction instruction and its state changes, the target model is reconstructed and fed back to the commanders.
(2) Composition structure
The system composition of this patent is shown in FIG. 2 and comprises ten parts: user rights management, battlefield information source access, a command-and-control system integration framework, external content loading, virtual battlefield space modeling, virtual battlefield space display, cooperative interaction shared support, human-human image collaboration, human-target collaboration, and human-target interaction.
User rights management comprises virtual meeting-room management, personnel management, rights management, and collaboration invitation. Personnel management maintains the information of every user who may join the virtual scene and participate in collaboration; through room management an initiator creates a collaboration space within a virtual scene; through collaboration invitation the initiator selects personnel to join the collaboration space; and through rights management the collaboration rights of each person in that space are set, including whose portraits a person can see and communicate with, which objects in the virtual scene a person can see, and which object interactions a person may perform.
Battlefield information source access provides guidance and access support for land, sea, air, and space information sources in fixed and tactical environments; the information sources obtained in real time are processed and analyzed into battlefield situation information that supports cooperative command.
The command-and-control system integration framework provides interface-level integration of C2 systems, supporting the integration of desktop client application interfaces and Web interfaces; combined with the human-target interaction function, a commander can select, click, double-click, drag, and pull down on a C2 system in the virtual battlefield space, with system feedback refreshed in real time.
External content loading opens a content window at a designated position in the virtual battlefield space, providing loading and display of external video, pictures, text, PPT, Word, Excel, and other formats, as well as video streams.
Virtual battlefield space modeling realistically restores the battlefield environment by combining a geographic information system with modeling technology; compared with an ordinary geographic information system it offers strong immersion and a high degree of real-scene restoration, and the battlefield environment and the motion states of enemy and friendly targets can be viewed from different angles.
Virtual battlefield space display provides browsing modes such as eagle-eye view, roaming, cutting in and out of a specific mission area, and target enlargement/reduction, letting commanders view the battlefield space from different angles.
Cooperative interaction shared support provides clock synchronization, coordinate calculation, instruction collaboration, and state collaboration. Clock synchronization provides a common time base for collaboration among users and between users and content; coordinate calculation provides the basic algorithms for positioning the holographic three-dimensional portraits in the virtual space; instruction collaboration distributes the operation instructions converted from user gestures to every glasses device; and state collaboration distributes changed target states in the virtual scene to every glasses device.
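The clock synchronization function can be pictured with a Cristian/NTP-style offset estimate, sketched below in Python under the assumption of a request/response exchange with a collaboration server; the patent does not specify the synchronization protocol, and send_request here is a hypothetical callable.

    import time

    def estimate_clock_offset(send_request, now=time.time):
        """Estimate the offset between this device's clock and the server's,
        assuming the server's timestamp was taken halfway through the round
        trip. Returns the correction to add to the local clock."""
        t0 = now()
        server_time = send_request()   # server returns its clock reading
        t1 = now()
        return server_time + (t1 - t0) / 2.0 - t1

    # Hypothetical usage with a fake server running 0.5 s ahead of us:
    offset = estimate_clock_offset(lambda: time.time() + 0.5)
    print(f"correct local clock by {offset:+.3f} s")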
Human-human image collaboration provides human feature extraction, multi-path portrait fusion, voice noise reduction, and audio-image synchronization. Human feature extraction collects portraits from multiple angles through a group of depth cameras; multi-path portrait fusion extracts feature values, colors, postures, body types, and other image parameters from the portraits collected at different angles and splices them into complete portrait parameters; voice noise reduction denoises the speech; and audio-image synchronization synchronizes the portrait and speech collected at the same moment and distributes them to all glasses devices under a unified time reference, so that everyone sees the others and hears their voices at the same time.
Human-target collaboration provides target state feedback, target state acquisition, target state distribution, and target state synchronization. Target state feedback is the target model reacting to an interaction instruction, its position, shape, color, state, and content changing accordingly; target state acquisition captures the state change and extracts the specific changed parameter values; target state distribution takes the initiator's copy as the reference and distributes the change to everyone's augmented reality equipment; and target state synchronization compares the delivered state against the local state change and, if they are inconsistent, has the collaborative client deployed on the equipment reconcile them, ensuring consistent target state display on all equipment.
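One way to realize this distribute-then-verify behavior is to take a stable digest of the initiator's state as the reference copy and have each receiving client reconcile when its digest disagrees. The Python sketch below is illustrative; the device and state structures are hypothetical.

    import hashlib, json

    def state_digest(state):
        """Stable digest of a target state, used to verify consistency."""
        return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

    class TargetStateSync:
        def __init__(self, devices):
            self.devices = devices  # collaborative clients on each headset

        def distribute(self, target_id, new_state):
            """Broadcast a state change with the initiator's copy as the
            reference; receivers reconcile if their local digest differs."""
            reference = state_digest(new_state)
            for dev in self.devices:
                local = dev.local_states.get(target_id)
                if local is None or state_digest(local) != reference:
                    dev.local_states[target_id] = dict(new_state)  # reconcile

    class FakeDevice:
        def __init__(self):
            self.local_states = {}

    devices = [FakeDevice(), FakeDevice()]
    TargetStateSync(devices).distribute("tank-07", {"pos": [10, 0, 4], "selected": True})
    print(devices[1].local_states["tank-07"])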
Human-target interaction provides gesture feature extraction, gesture motion analysis, interaction instruction conversion, pointed-target positioning, gesture instruction acquisition, voice instruction acquisition, synchronous instruction distribution, and instruction action callback. Gesture feature extraction senses the direction and relative position of the commander's current gesture from the auxiliary device; gesture motion analysis extracts the coherent gesture motion over a period of time from the auxiliary device; interaction instruction conversion analyzes the coherent motion, compares it with predefined gestures, and converts it into an operation instruction; pointed-target positioning determines the coordinates and position of the pointed target from the gesture direction and relative-position data combined with the coordinate system of the augmented reality scene; gesture/voice instruction acquisition triggers the instruction interface call when the user issues an instruction by gesture or voice, obtaining the instruction converted from gesture or speech recognition; synchronous instruction distribution delivers the interaction instruction to the other augmented reality equipment; and instruction action callback establishes an interaction channel with the operated target after the instruction is received and triggers the target's callback method, so that the target executes a feedback action.
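Pointed-target positioning amounts to casting a ray from the gesture origin along the sensed pointing direction (after both have been mapped into the scene's coordinate system) and picking the nearest target inside a narrow cone around that ray. A minimal, illustrative sketch with hypothetical target data:

    def locate_pointed_target(origin, direction, targets, min_cos=0.995):
        """Return the nearest target lying within a narrow cone around the
        pointing ray; origin and direction are in scene coordinates."""
        def unit(v):
            m = sum(c * c for c in v) ** 0.5
            return tuple(c / m for c in v)

        direction = unit(direction)
        best, best_dist = None, float("inf")
        for name, pos in targets.items():
            offset = tuple(p - o for p, o in zip(pos, origin))
            dist = sum(c * c for c in offset) ** 0.5
            if dist == 0.0:
                continue
            cos_angle = sum(d * t for d, t in zip(direction, unit(offset)))
            if cos_angle >= min_cos and dist < best_dist:
                best, best_dist = name, dist
        return best

    targets = {"ship-01": (0.0, 0.0, 5.0), "tank-07": (2.0, 0.0, 5.0)}
    print(locate_pointed_target((0, 0, 0), (0, 0, 1), targets))  # ship-01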
The applicable environment, internal relationships and call logic, and implementation flow of this patent are further described below:
(1) Requirements for augmented reality devices
[Table of augmented reality device requirements, shown in the original as images GDA0002200710310000161 and GDA0002200710310000171.]
(2) Internal relationship and call logic
The internal relationships and call logic of this patent are shown in FIG. 3 and are introduced around user rights management, battlefield information source access, the command-and-control system integration framework, external content loading, virtual battlefield space modeling, virtual battlefield space display, cooperative interaction shared support, human-human image collaboration, human-target collaboration, and human-target interaction.
a) User rights management
Before accessing the virtual battlefield environment, a commander must log in and be verified by fingerprint, password, and similar means; only a verified user can enter the virtual battlefield space and use the related functions.
b) Battlefield information source access
An interface is provided to the outside for acquiring land, sea, air, and space information sources in an active or passive mode. After processing and arrangement the sources are fed into the virtual battlefield environment, where they are either bound to a target or displayed centrally in an information-source window.
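This bind-or-display routing can be sketched in a few lines of Python; the report fields and window object below are hypothetical:

    def route_intel(report, battlefield_targets, source_window):
        """After processing and analysis, bind a real-time report to the
        battlefield target it refers to, or fall back to centralized
        display in the information-source window."""
        target_id = report.get("target_id")
        if target_id in battlefield_targets:
            battlefield_targets[target_id].setdefault("intel", []).append(report)
        else:
            source_window.append(report)  # centralized display

    targets = {"ship-01": {}}
    window = []
    route_intel({"target_id": "ship-01", "text": "radar contact"}, targets, window)
    route_intel({"target_id": None, "text": "weather update"}, targets, window)
    print(targets["ship-01"]["intel"], window)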
c) Command-and-control system integration framework
The framework provides service-integration functions such as service registration, addressing, and proxied access; it can integrate the functions of command-and-control systems in active service, provides a system display window for presenting C2 function pages, and, combined with the human-target interaction capability, realizes clicking, double-clicking, dragging, and checking operations on a system function interface.
d) External content loading
A local storage space receives external video, text, picture, PPT, Word, Excel, and other files; the files in that space are polled periodically and loaded and refreshed in the content window of the virtual battlefield environment, and page-turning, clicking, and other operations on the content are realized in combination with the human-target interaction capability.
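A minimal sketch of the periodic-polling behavior, assuming the content window is refreshed via a callback; the directory layout, interval, and callback are all illustrative:

    import os, time

    def poll_content_dir(path, known, refresh_window, interval=2.0, cycles=1):
        """Scan the local storage space; any file that is new or modified
        since the last scan is (re)loaded into the content window."""
        for _ in range(cycles):
            for name in sorted(os.listdir(path)):
                full = os.path.join(path, name)
                mtime = os.path.getmtime(full)
                if known.get(name) != mtime:
                    known[name] = mtime
                    refresh_window(full)  # e.g. re-render a document page
            time.sleep(interval)

    # Hypothetical usage: scan the current directory once, print what loads.
    poll_content_dir(".", known={}, refresh_window=print, interval=0.0)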
e) Virtual battlefield space modeling
Models of targets, terrain, vegetation, trees, and so on are built through a modeling process, and a global virtual battlefield environment is generated after loading and running on the augmented reality equipment.
f) Virtual battlefield space representation
Roaming and eagle-eye viewing of the global virtual battlefield environment are realized; local models can be enlarged, reduced, cut into, and cut out of; and the feedback of every operation is presented to the commander through the augmented reality equipment.
g) Cooperative interactive shared support
Instruction transmission channels are constructed between the gesture auxiliary equipment and the dynamic targets in the virtual battlefield environment, opening the interaction channel between people and targets; a unified clock is provided to keep collaborative content synchronized and in step among multiple people; conversion between real-world coordinates and virtual-battlefield coordinates is provided, supplying position services for multi-person collaboration and human-target interaction; and a state collaboration service ensures that a target state is delivered simultaneously to the augmented reality equipment of every commander in the virtual battlefield space.
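The real-world to virtual-battlefield coordinate conversion can be modeled as a rigid transform plus uniform scale, i.e. a 4x4 homogeneous matrix. The sketch below (using NumPy, with illustrative parameters) is one conventional way to realize it, not the patent's specific formulation.

    import numpy as np

    def world_to_battlefield(scale, yaw_rad, translation):
        """4x4 homogeneous transform: rotate about the vertical axis,
        scale uniformly, then translate into battlefield coordinates."""
        c, s = np.cos(yaw_rad), np.sin(yaw_rad)
        m = np.array([[  c, 0.0,   s, 0.0],
                      [0.0, 1.0, 0.0, 0.0],
                      [ -s, 0.0,   c, 0.0],
                      [0.0, 0.0, 0.0, 1.0]])
        m[:3, :3] *= scale
        m[:3, 3] = translation
        return m

    def transform(m, point):
        p = np.append(np.asarray(point, dtype=float), 1.0)
        return (m @ p)[:3]

    m = world_to_battlefield(scale=0.01, yaw_rad=np.pi / 2, translation=[5, 0, 5])
    print(transform(m, [100.0, 0.0, 0.0]))  # -> [5. 0. 4.]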
h) Human-human image collaboration
The function interacts with the holographic portrait capture equipment, renders a holographic three-dimensional portrait, further processes the captured portrait and sound, sends them through a streaming media server, and, after processing by the cooperative interaction shared support function, delivers them over the network and communication service links to the other commanders' augmented reality equipment, realizing holographic-portrait collaboration among multiple devices.
i) Human-target interaction
The user's voice is collected through the audio capture device and the user's gesture through the gesture auxiliary device; after analysis the selected target is determined and a target control instruction is formed; the instruction is sent to the designated target through the cooperative interaction shared support function, and the target triggers its feedback action on receipt.
j) Human-target collaboration
The target's receipt of an instruction and the feedback changes of its position, shape, color, and other characteristics are synchronously distributed to the augmented reality equipment of all commanders through the cooperative interaction shared support function.
(3) Implementation process
The implementation flow of this patent is shown in FIG. 4. The whole collaborative interaction process based on the virtual battlefield environment is as follows:
a) the various models of the virtual battlefield environment are constructed; once built, they are loaded and run on the augmented reality equipment to generate the virtual battlefield environment;
b) a commander must log in and be verified before wearing the augmented reality equipment, entering the virtual battlefield environment once verification passes;
c) after verification, the body image data collected by the portrait capture equipment also enters the virtual battlefield environment; after feature processing, depth processing, color extraction, portrait modeling, and other steps, a holographic three-dimensional portrait is formed and fused with the virtual battlefield environment;
e) through the human-human image collaboration function, changes in each commander's expression, actions, position, and limbs are synchronized to the other commanders' augmented reality equipment and the holographic portrait display is refreshed, so that everyone perceives the others' image changes;
f) in the virtual battlefield environment a commander can use roaming and the eagle-eye view to survey the whole battlefield, or use functions such as cutting into a specific mission area and enlarging or rotating a target to examine an area or target at close range;
g) wearing the gesture auxiliary equipment, a commander can point at a target and form a specific gesture in the virtual battlefield environment; the signals perceived by the auxiliary equipment are passed to the human-target interaction function, which analyzes the gesture, obtains the mapped operation instruction, and sends it to the pointed target; alternatively, the commander speaks a voice instruction while pointing at the target, the voice signal collected by the audio capture equipment is passed to the human-target interaction function, and the matched operation instruction is sent to the pointed target (a sketch of this voice matching follows the list below);
h) on receiving the operation instruction, the target triggers its internal instruction feedback code, changing its appearance, position, state, and so on;
i) the human-target interaction function collects the target's change information and synchronously distributes it to every commander's augmented reality equipment in the virtual battlefield environment, refreshing the target in the virtual battlefield space so that everyone sees the change;
j) after the various information sources are connected to the virtual battlefield environment and processed, they are bound to specific targets or displayed centrally in an information-source window, refreshed along with human-target interaction;
k) the various command-and-control system functions are integrated into a function window and operated along with human-target interaction;
l) the various external content resources, after processing, are bound to specific targets or displayed centrally in an information-source window, refreshed along with human-target interaction.
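For the voice branch of step g) above, the transcript-to-instruction matching can be sketched with fuzzy string matching; the command table below is entirely hypothetical.

    import difflib

    VOICE_COMMANDS = {
        "open menu":  "OPEN_TARGET_MENU",
        "zoom in":    "ENLARGE_TARGET",
        "play video": "PLAY",
        "mute":       "MUTE",
    }

    def voice_to_instruction(transcript, commands=VOICE_COMMANDS, cutoff=0.75):
        """Map a recognized transcript to the closest predefined voice
        command; the mapped instruction is then sent to the pointed target."""
        hit = difflib.get_close_matches(transcript.lower(), list(commands), n=1, cutoff=cutoff)
        return commands[hit[0]] if hit else None

    print(voice_to_instruction("open menu"))      # OPEN_TARGET_MENU
    print(voice_to_instruction("open the menu"))  # fuzzy match still resolves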

Claims (8)

1. A multi-terminal fusion cooperative command system based on augmented reality in the military field, characterized by comprising ten parts: a user rights management module, a battlefield information source access module, a command and control system integration framework module, an external content loading module, a virtual battlefield space modeling module, a virtual battlefield space display module, a collaborative interaction shared support module, a human-human image collaboration module, a human-target collaboration module and a human-target interaction module; wherein

the user rights management module comprises room management, personnel management, rights management and collaboration invitation functions: personnel management maintains the information of all users participating in collaboration and of users to be added to the virtual scene; through room management, an initiator can create a collaborative space in the virtual scene; through collaboration invitation, personnel can be selected and added to the collaborative space; and through rights management, the collaboration and interaction rights of each person in the collaborative space can be set; before entering the virtual battlefield environment, a commander must log in and be verified by fingerprint and password, and only verified users can enter the virtual battlefield space and use the related functions;

the battlefield information source access module provides, in an active or passive mode, access support for various information sources in fixed and tactical environments and on land, at sea, in the air and in space; after processing and analysis, it converts the information sources acquired in real time into battlefield situation information that supports cooperative command, and either binds that information to a target or displays it centrally in an information source window;

the command and control system integration framework module provides integration for command and control systems, supporting the integration of Web command and control applications as well as RPC, gRPC, WebService and RESTful command and control function services; combined with the functions of the human-target interaction module, it enables a commander to operate the command and control system from within the virtual battlefield space and refresh the system feedback results in real time;

the external content loading module opens a content window at a designated position in the virtual battlefield space, periodically polls the files in a local storage space, and loads and refreshes those files in the content window of the virtual battlefield environment; combined with the capabilities of the human-target interaction module, it supports page turning and click operations on the content, provides loading and display of content in multiple formats, and supports loading and display of video streams;

the virtual battlefield space modeling module builds models of various targets, terrain, vegetation and trees through a modeling process combining a geographic information system with modeling technology, and generates a global virtual battlefield environment which, after being loaded and run on augmented reality devices, realistically restores the battlefield environment and allows the battlefield environment and the motion states of enemy targets to be viewed from different angles;

the virtual battlefield space display module provides multiple browsing modes, and the feedback of every browsing mode can be presented to the commander through the augmented reality device, so that the commander can conveniently view the battlefield space from different angles;

the collaborative interaction shared support module provides clock synchronization, coordinate calculation, instruction collaboration and state collaboration functions: clock synchronization provides the common time base for collaboration among users and between users and content; coordinate calculation provides the basic algorithmic support for positioning the holographic three-dimensional portraits in the virtual space; instruction collaboration supports distributing the operation instructions converted from user gestures to every glasses device; and state collaboration supports distributing the target state changes in the virtual scene to every glasses device;

the human-human image collaboration module provides character feature extraction, multi-channel portrait fusion, voice noise reduction and audio-video synchronization functions: character feature extraction collects portraits from multiple angles through a group of depth cameras; multi-channel portrait fusion extracts the image parameters of the portraits collected from different angles and splices and combines them into complete holographic three-dimensional portrait parameters; voice noise reduction denoises the voice; and audio-video synchronization synchronizes the portraits and voice collected at the same moment and distributes them to all glasses devices under a unified time reference, so that every participant can both see the others and hear their voices;

the human-target collaboration module provides target state feedback, target state acquisition, target state distribution and target state synchronization functions: in target state feedback, the target model reacts differently according to the interaction instruction, which drives the target's position, shape, color, state and content to change with the instruction; target state acquisition captures the changes in the target state and extracts the parameter values of each specific change; target state distribution distributes the target state changes, with the initiator as reference, to the augmented reality devices of all participants; and target state synchronization compares and verifies the target state distributed to a commander's augmented reality device against the local target state change, and if they are inconsistent, the collaborative client deployed in the augmented reality device processes them to ensure that the target state displayed in all augmented reality devices remains consistent;

the human-target interaction module provides gesture feature extraction, gesture motion analysis, interaction instruction conversion, pointing target positioning, gesture instruction acquisition, voice instruction acquisition, synchronous instruction distribution and instruction action callback functions: gesture feature extraction senses the commander's current gesture direction and relative position data from the auxiliary device; gesture motion analysis extracts a coherent gesture motion over a period of time from the auxiliary device; interaction instruction conversion analyzes the coherent gesture motion, compares it with predefined gesture motions and converts it into an operation instruction; pointing target positioning determines the coordinates and position of the pointed target from the gesture direction and relative position data combined with the coordinate system of the augmented reality scene; gesture instruction acquisition and voice instruction acquisition mean that when a user issues an instruction by gesture or voice, an instruction interface call is triggered and the instruction currently converted from gesture or voice recognition is acquired; synchronous instruction distribution synchronously distributes the interaction instruction to the other augmented reality devices; and instruction action callback establishes an interaction channel with the operated target after the instruction is received and triggers the target's callback method, so that the target executes its feedback action.
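The target state synchronization step of the human-target collaboration module lends itself to a short illustration. The following is a minimal sketch, assuming a flat state record and a last-writer-wins rule keyed on the synchronized clock; all names (TargetState, reconcile) are illustrative and not taken from the claim:

    from dataclasses import dataclass

    @dataclass
    class TargetState:
        target_id: str
        position: tuple      # (x, y, z) in scene coordinates
        color: str
        timestamp: float     # change time against the synchronized clock

    def reconcile(local: TargetState, received: TargetState) -> TargetState:
        # Compare the state distributed from the initiator's device with the
        # locally recorded state; if they differ, keep the newer record so
        # every augmented reality device displays the same target state.
        if (local.position, local.color) == (received.position, received.color):
            return local
        return received if received.timestamp >= local.timestamp else local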
2. The multi-terminal fusion cooperative command system based on augmented reality in the military field of claim 1, wherein the collaborative rights include a definition of the user's access to the virtual battlefield, a definition of whether the user can see and collaborate with the images of other users, and a definition of whether the user can see and interact with a specified target.
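As a minimal sketch of the three rights definitions in claim 2 (the field names are assumptions, not the patent's):

    from dataclasses import dataclass, field

    @dataclass
    class CollaborativeRights:
        can_enter_battlefield: bool    # access to the virtual battlefield
        can_see_other_portraits: bool  # see and collaborate with others' images
        interactable_targets: set = field(default_factory=set)  # targets the user may see and interact with

    def may_interact(rights: CollaborativeRights, target_id: str) -> bool:
        return rights.can_enter_battlefield and target_id in rights.interactable_targets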
3. The multi-terminal fusion cooperative command system based on augmented reality in the military field of claim 1, wherein the battlefield information source access module supports access to the various battlefield information sources in active service and displays the information in multiple ways.
4. The multi-terminal fusion cooperative command system based on augmented reality in the military field of claim 1, wherein the external content loading module loads and displays content in multiple formats, including external video, pictures, text, PPT, Word and Excel files, and supports page turning and click operations on the displayed content.
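The periodic polling behavior in claim 4 could be realized along these lines; the directory layout and the refresh callback are assumptions for illustration:

    import os
    import time

    def poll_content(directory: str, refresh, interval: float = 2.0):
        # Scan the local storage space at a fixed interval and (re)load any
        # file whose modification time has changed since the last pass.
        seen = {}
        while True:
            for name in os.listdir(directory):
                path = os.path.join(directory, name)
                if not os.path.isfile(path):
                    continue
                mtime = os.path.getmtime(path)
                if seen.get(path) != mtime:
                    seen[path] = mtime
                    refresh(path)    # e.g. redraw the content window with this file
            time.sleep(interval)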
5. The multi-terminal fusion cooperative command system based on augmented reality in the military field of claim 1, wherein the virtual battlefield space display module provides multiple browsing modes, including eagle-eye view, roaming, cutting into and out of mission-specific regions, and magnification and reduction of targets.
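The browsing modes of claim 5 map naturally onto a mode switch; the camera interface below is a hypothetical stand-in, since the patent does not specify one:

    from enum import Enum, auto

    class BrowseMode(Enum):
        EAGLE_EYE = auto()      # top-down overview of the whole battlefield
        ROAM = auto()           # free movement through the scene
        REGION_CUT = auto()     # cut into or out of a mission-specific region
        TARGET_SCALE = auto()   # magnify or reduce a selected target

    def apply_mode(camera, mode: BrowseMode) -> None:
        # 'camera' is a hypothetical scene camera object.
        if mode is BrowseMode.EAGLE_EYE:
            camera.move_to(height=500.0, pitch=-90.0)
        elif mode is BrowseMode.TARGET_SCALE:
            camera.scale_target(factor=2.0)
        # ROAM and REGION_CUT would adjust the camera analogously.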
6. The multi-terminal fusion cooperative command system based on augmented reality in the military field of claim 1, wherein the human-target interaction module senses and analyzes the gesture with which a user points at a target, thereby accurately positioning that target in the virtual battlefield environment.
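One common way to realize the pointing-based positioning in claim 6 is to cast a ray from the hand along the sensed gesture direction and pick the nearest target the ray hits; this is a sketch of that approach, not a method prescribed by the patent:

    import numpy as np

    def locate_pointed_target(hand_pos, gesture_dir, targets):
        # targets: iterable of (target_id, center, radius) in scene coordinates.
        o = np.asarray(hand_pos, dtype=float)
        d = np.asarray(gesture_dir, dtype=float)
        d /= np.linalg.norm(d)
        best = None
        for target_id, center, radius in targets:
            oc = np.asarray(center, dtype=float) - o
            t = float(np.dot(oc, d))            # distance along the ray to the closest point
            if t < 0:
                continue                        # target lies behind the hand
            miss = np.linalg.norm(oc - t * d)   # perpendicular distance from ray to center
            if miss <= radius and (best is None or t < best[1]):
                best = (target_id, t)
        return best[0] if best else None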
7. The multi-terminal fusion cooperative command system based on augmented reality in the military field of claim 1, wherein, on the basis of the accurate positioning of the target, the human-target interaction module converts a user gesture into a manipulation instruction, sends the instruction to the target selected by the user, and triggers the target's feedback action.
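A minimal sketch of the gesture-to-instruction-to-feedback chain in claim 7, assuming a predefined gesture table and per-target callback methods (all names illustrative):

    GESTURE_TO_INSTRUCTION = {
        "pinch": "select",
        "swipe_left": "rotate_left",
        "open_palm": "show_details",
    }

    def dispatch(gesture_name: str, target) -> None:
        # Convert the recognized gesture into a manipulation instruction and
        # trigger the matching callback on the pointed target, if it exists.
        instruction = GESTURE_TO_INSTRUCTION.get(gesture_name)
        if instruction is None:
            return
        callback = getattr(target, "on_" + instruction, None)
        if callable(callback):
            callback()           # the target executes its feedback action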
8. The multi-terminal fusion cooperative command system based on augmented reality in the military field of claim 1, wherein the human-target interaction module ensures that the target feedback actions triggered by manipulation instructions are perceived synchronously by all commanders in the virtual battlefield environment.
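The synchronous perception required by claim 8 amounts to broadcasting each feedback event to every registered device; below is a toy broadcaster, with a plain callback list standing in for the real network transport:

    class FeedbackBroadcaster:
        def __init__(self):
            self.devices = []    # one handler per augmented reality device

        def register(self, device_handler):
            self.devices.append(device_handler)

        def broadcast(self, target_id: str, action: str, timestamp: float):
            # Distribute the same event, stamped with the shared clock, to
            # all devices so every commander perceives the feedback in step.
            event = {"target": target_id, "action": action, "t": timestamp}
            for handler in self.devices:
                handler(event)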
CN201910586319.XA 2019-07-01 2019-07-01 Multi-terminal fusion cooperative command system based on augmented reality in military field Active CN110365666B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910586319.XA CN110365666B (en) 2019-07-01 2019-07-01 Multi-terminal fusion cooperative command system based on augmented reality in military field

Publications (2)

Publication Number Publication Date
CN110365666A CN110365666A (en) 2019-10-22
CN110365666B true CN110365666B (en) 2021-09-14

Family

ID=68217624

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910586319.XA Active CN110365666B (en) 2019-07-01 2019-07-01 Multi-terminal fusion cooperative command system based on augmented reality in military field

Country Status (1)

Country Link
CN (1) CN110365666B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110989842A (en) * 2019-12-06 2020-04-10 国网浙江省电力有限公司培训中心 Training method and system based on virtual reality and electronic equipment
CN111274910B (en) * 2020-01-16 2024-01-30 腾讯科技(深圳)有限公司 Scene interaction method and device and electronic equipment
CN111467789A (en) * 2020-04-30 2020-07-31 厦门潭宏信息科技有限公司 Mixed reality interaction system based on HoloLens
CN111651057A (en) * 2020-06-11 2020-09-11 浙江商汤科技开发有限公司 Data display method and device, electronic equipment and storage medium
CN112232172B (en) * 2020-10-12 2021-12-21 上海大学 Multi-person cooperation simulation system for electronic warfare equipment
CN113037616B (en) * 2021-03-31 2022-11-04 中国工商银行股份有限公司 Interactive method and device for cooperatively controlling multiple robots
CN113254641B (en) * 2021-05-27 2021-11-16 中国电子科技集团公司第十五研究所 Information data fusion method and device
CN114579023B (en) * 2021-12-13 2023-04-18 北京市建筑设计研究院有限公司 Modeling method and device and electronic equipment
CN115439635B (en) * 2022-06-30 2024-04-26 亮风台(上海)信息科技有限公司 Method and equipment for presenting marking information of target object
CN115808974B (en) * 2022-07-29 2023-08-29 深圳职业技术学院 Immersive command center construction method, immersive command center construction system and storage medium
CN115826763B (en) * 2023-01-09 2023-05-02 南京宇天智云仿真技术有限公司 Special combat simulation system and method based on virtual reality

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6215498B1 (en) * 1998-09-10 2001-04-10 Lionhearth Technologies, Inc. Virtual command post
CN101964019A (en) * 2010-09-10 2011-02-02 北京航空航天大学 Against behavior modeling simulation platform and method based on Agent technology
WO2013111146A3 (en) * 2011-12-14 2013-09-19 Virtual Logic Systems Private Ltd System and method of providing virtual human on human combat training operations
CN107545788A (en) * 2017-10-17 2018-01-05 北京华如科技股份有限公司 Goods electronic sand map system is deduced based on the operation that augmented reality is shown
CN108664121A (en) * 2018-03-31 2018-10-16 中国人民解放军海军航空大学 A kind of emulation combat system-of-systems drilling system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Technical issues of Virtual Sand Table system; Petr Frantis; 2017 International Conference on Military Technologies (ICMT); 2017-07-24; pp. 410-413 *
Research on the application technology of combat resource virtualization in information systems; Du Siliang; Journal of Command and Control; 2019-06-15; Vol. 5, No. 2; pp. 141-146 *
Research on an artificial-intelligence-based battlefield environment target analysis system; Meng Dedi; Proceedings of the 6th China Command and Control Conference (Volume I); 2018-07-02; pp. 543-547 *

Also Published As

Publication number Publication date
CN110365666A (en) 2019-10-22

Similar Documents

Publication Publication Date Title
CN110365666B (en) Multi-terminal fusion cooperative command system based on augmented reality in military field
US11915670B2 (en) Systems, methods, and media for displaying interactive augmented reality presentations
US11403595B2 (en) Devices and methods for creating a collaborative virtual session
CN107479705B (en) Command institute collaborative operation electronic sand table system based on HoloLens
CN107463262A (en) A kind of multi-person synergy exchange method based on HoloLens
WO2018098720A1 (en) Virtual reality-based data processing method and system
CN102625129A (en) Method for realizing remote reality three-dimensional virtual imitated scene interaction
CN107463248A (en) A kind of remote interaction method caught based on dynamic with line holographic projections
US20200202634A1 (en) Intelligent management of content related to objects displayed within communication sessions
CN104469256A (en) Immersive and interactive video conference room environment
CN103095828A (en) Web three dimensional (3D) synchronous conference system based on rendering cloud and method of achieving synchronization
KR20200097637A (en) Simulation sandbox system
WO2018232346A1 (en) Intelligent fusion middleware for spatially-aware or spatially-dependent hardware devices and systems
CN204721476U (en) Immersion and interactively video conference room environment
CN202551219U (en) Long-distance three-dimensional virtual simulation synthetic system
Lang The impact of video systems on architecture
Sun et al. Video Conference System in Mixed Reality Using a Hololens
Leung et al. Creating a multiuser 3-D virtual environment
Wilmsherst et al. Utilizing virtual reality and three-dimensional space, visual space design for digital media art
Nijholt et al. The distributed virtual meeting room exercise
CN110442963A (en) A kind of Interior Decoration Design System based on AR interaction technique
Karreman How does motion capture mediate dance?
Xiaocheng Application research of virtual 3D animation technology in the design of human computer interface
US12020667B2 (en) Systems, methods, and media for displaying interactive augmented reality presentations
Zhao et al. Application of computer virtual simulation technology in tourism industry

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant