KR20170044318A - Method for collaboration using head mounted display - Google Patents
- Publication number
- KR20170044318A (Application KR1020150143855A)
- Authority
- KR
- South Korea
- Prior art keywords
- image
- user
- present
- space
- hmd
- Prior art date
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Optics & Photonics (AREA)
- Processing Or Creating Images (AREA)
Description
The present invention relates to a collaboration system using a head mounted display. More particularly, the present invention relates to a collaboration system capable of providing participants with a collaborative environment close to the real world, using only a minimum number of devices in a limited environment.
Mixed Reality (MR) is a technique for combining real and virtual images. The main issue of mixed reality is to blur the line between the virtual and the real, presenting the user with an image in which no boundary between real and virtual content is visible. In this regard, the Head Mounted Display (HMD) is a device that enables the mixed reality experience, but so far it has mostly been used in highly controlled environments such as laboratories.
In recent years, consumer-level HMDs have become common, and some devices are offered to users at affordable prices. Although such consumer-level HMDs are still heavy and cumbersome, they have given general users an opportunity to experience mixed reality, just as portable devices popularized augmented reality in the past.
For years, teleconference systems have been limited to voice and video communication channels, using a camera that captures users in front of a screen. The disadvantage of this method is that users cannot cross into each other's space. Such systems therefore emphasize verbal communication and eye contact rather than supporting actual cooperative work among users.
This limited-area problem can be mitigated with immersive display technology, for example a large two-dimensional display that can provide depth cues about the appearance of a remote user. One disclosed method uses a wall-sized screen that presents the remote space as if it were a connected room. However, a single display has a limited view angle, so the user always has to face the screen even if the head position is tracked; this is called the 2.5D problem. That is, because recent remote collaboration environments merely place a single display in front of the user, the user's viewing position and viewing direction are restricted.
The present invention provides a method for enabling remote collaboration.
The present invention employs an HMD as the main display in order to overcome the above-mentioned 2.5D problem. With this choice, the present invention aims to summon the remote user as an avatar into the local space where the local user is present.
In addition, since the HMD has a screen immediately before the user's eyes, the user's head direction is not limited to the front side of the screen but is free, and the user's view can be expanded to the entire local space.
In addition, although existing technologies allow collaboration only within a screen, a further purpose of the present invention is to enable real collaboration between local users and remote users in a common space.
In order to accomplish the above object, a representative structure of the present invention is as follows.
The present invention relates to a method of collaborating using a head mounted display (HMD) device, in which a user in a local area and a user in a remote area collaborate using a common object, the method comprising: creating a common space in a virtual world in which the users can collaborate; obtaining an image of the real world including the common space from a stereo camera; determining the position of the user's hand in the real-world image from hand tracking information obtained from a depth sensor; generating, using the hand tracking information, a mask mesh that exists at a position corresponding to the position of the user's hand and is displayed on the image of the virtual world; generating an avatar to be displayed on the image of the virtual world, using the HMD tracking information and hand tracking information of the remote space user; combining the real-world image and the virtual-world image to generate an output image in which the common space, the mask mesh, and the avatar are displayed on the real-world image; and displaying the output image on the HMD.
In the present invention, the step of generating the avatar generates the body motion of the avatar using body tracking information acquired from the external camera.
In the present invention, the step of creating the common space uses a global tracker and a local tracker, wherein the local tracker is used only in an initialization step of the common space.
According to the present invention, a remote collaboration system providing HMD-based mixed reality is presented, so that a remote user and a local user can easily collaborate. More specifically, each user can remain in their own local space and use the HMD to view virtual objects and the virtual space within the common space, as well as other users summoned as avatars, so that collaboration with shared objects can be performed effectively.
Additionally, since the present invention uses vision-based hand tracking, it allows direct bare-hand interaction with shared objects without additional devices or controllers.
FIG. 1 is a block diagram illustrating a collaboration system according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating an internal configuration of a control computer according to an embodiment of the present invention.
FIG. 3 is a view for explaining generation of an output image of an HMD according to an embodiment of the present invention.
FIG. 4 is a flowchart illustrating an operation according to an exemplary embodiment of the present invention.
The following detailed description of the invention refers to the accompanying drawings, which illustrate, by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. It should be understood that the various embodiments of the present invention are different, but need not be mutually exclusive. For example, the specific shapes, structures, and characteristics described herein may be implemented by changing from one embodiment to another without departing from the spirit and scope of the invention. It should also be understood that the location or arrangement of individual components within each embodiment may be varied without departing from the spirit and scope of the present invention. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of the present invention should be construed as encompassing the scope of the appended claims and all equivalents thereof. In the drawings, like reference numbers designate the same or similar components throughout the several views.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings in order to facilitate a person skilled in the art to which the present invention pertains.
FIG. 1 is a block diagram for explaining a collaboration system according to an embodiment of the present invention.
According to an embodiment of the present invention, the collaboration system of the present invention includes a control computer 100 and the devices controlled by it, as shown in FIG. 1.
The collaboration system of the present invention is very useful in that it can share the workspace and the users' motions with remotely located users. For example, the collaboration system of the present invention may be applied to a remote operation in which surgeons in the local and remote spaces coordinate surgery on the same patient. The primary surgeon and the patient are physically located in the local space, the peer surgeon is located in the remote space, and the operation can be performed using the system of the present invention. At this time, the patient becomes a shared object, the actions performed by the surgeon in the remote space are tracked in real time and replicated in the local space of the primary surgeon through a virtual character such as an avatar, and the primary surgeon can observe the behavior of the remote surgeon mirrored in the local space without limitation.
To implement such a useful collaborative system, the present invention provides a collaborative system based on HMD 200. Hereinafter, the present invention will be described based on the role of each device.
Referring to FIG. 1, the collaboration system includes an HMD 200, a control computer 100, a stereo camera 300, a depth sensor 400, an external camera 500, and a network 600.
Hereinafter, the present collaboration system will be described through the operation of the devices controlled by the control computer 100.
The HMD 200 is a device that can be mounted on the head of a user in the local space or the remote space, and can provide the user with a view of the surroundings in which virtual objects and avatars are displayed. More specifically, the HMD 200 displays the remote user's avatar in the common space and provides a see-through view of the hand when a hand is located between the user's eyes and a virtual object, so that it can display images in which the users collaborate.
The HMD 200 basically mounts on the head of the user and can present the image directly in front of the user's eyes. The HMD 200 used in the present invention may include a left screen and a right screen. Since the left screen and the right screen are respectively displayed in the left and right eyes of the user, it is possible to naturally provide a stereoscopic image to the user. In other words, it is possible to give the image a sense of depth by changing the image shown on the left eye and the image on the right eye, just as the human eye sees it.
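The left/right screen arrangement described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation; the interpupillary distance value and all names are assumptions.

```python
# Sketch: per-eye camera positions for stereoscopic rendering. The IPD
# value and all names here are illustrative assumptions, not from the
# patent.

IPD = 0.064  # assumed average interpupillary distance in metres

def eye_positions(head_pos, right_vec, ipd=IPD):
    """Offset the head position by half the IPD along the head's right
    vector to obtain left- and right-eye camera positions."""
    half = ipd / 2.0
    left = [h - half * r for h, r in zip(head_pos, right_vec)]
    right = [h + half * r for h, r in zip(head_pos, right_vec)]
    return left, right

left_eye, right_eye = eye_positions([0.0, 1.6, 0.0], [1.0, 0.0, 0.0])
```

Rendering the virtual scene from these two slightly offset viewpoints is what gives the displayed image its sense of depth.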
The present invention can overcome the 2.5D problem by using the HMD 200: since the screen moves with the user's head, the viewing direction is not fixed to a single display.
In existing 3D systems, if there is a hand between a user's eye and a virtual object displayed on the screen, the virtual object cannot be shown in front of the hand because the user's hand physically blocks the screen. This is referred to as the occlusion handling problem.
In order to solve this problem, the present invention uses a see-through HMD 200. Accordingly, the present invention employs a depth mask generator together with the video see-through HMD 200.
Next, the stereo camera 300 (stereoscopic camera) is a camera capable of generating a stereoscopic image.
Next, the depth sensor 400 is a device that enables hand tracking, interaction with virtual objects, and mask mesh generation.
The most common and ideal method for 3D user interaction is direct interaction with bare hands and fingers. Humans are accustomed to using their hands through everyday work, and human fingers have a very high degree of freedom. However, providing hand interaction in synthetic reality presents difficulties in tracking hands and fingers in real time. Conventional hand tracking devices include data globes using infrared markers. These devices were very expensive and hindered the naturalness of the user experience. Accordingly, the present invention can track movement of a hand using a vision-based depth sensor.
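As a minimal sketch of how a vision-based depth sensor could segment a bare hand, one can threshold the depth image to a band where hands are expected. The thresholds and the tiny "depth image" below are illustrative assumptions, not the patent's algorithm.

```python
# Minimal sketch of depth-based bare-hand segmentation, assuming hands
# lie within a fixed depth band in front of the sensor. Thresholds and
# the tiny "depth image" are illustrative assumptions.

NEAR, FAR = 0.2, 0.6  # assumed hand depth band in metres

def hand_mask(depth_image, near=NEAR, far=FAR):
    """Return a binary mask: 1 where a pixel's depth falls inside the
    band where a bare hand is expected, 0 elsewhere."""
    return [[1 if near <= d <= far else 0 for d in row]
            for row in depth_image]

depth = [[0.3, 0.9],
         [1.2, 0.5]]
mask = hand_mask(depth)  # -> [[1, 0], [0, 1]]
```

A real tracker would of course go further (connected components, finger model fitting), but the depth band is the usual first cut.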
Next, the external camera 500 is a device that acquires the user's body tracking information, which is used to generate the body motion of the avatar.
Meanwhile, the network 600 connects the local space and the remote space, and the tracking information is transmitted over it.
FIG. 2 is a diagram illustrating an internal configuration of a control computer 100 according to an embodiment of the present invention.
Referring to FIG. 2, the control computer 100 includes a common space setting unit 110 and a mask mesh generation unit 120.
The common space setting unit 110 sets up a virtual common space in which the user of the local space and the remote space can collaborate, in which the shared object can be located. The main object of the present invention is immersive and intuitive remote collaboration using hand-based interaction. The user's motion can be mirrored on the common space while using the summoned avatar as the display of the remote user. At this time, a part of the user's local space becomes a common space, and the common space is a space allowing the local user and the remote user to share the virtual object and to control the virtual objects together.
To register and track the coordinate system for a lightweight system without resorting to environment-bound sensors and displays, the present invention provides a hybrid method for locating the user.
In the hybrid method of the present invention, two types of trackers are used: an outside-in global tracker and an inside-out local tracker. The global tracker can track the user anywhere within the defined space and offers more flexibility. The local tracker, however, must always keep its marker in view, which limits the camera view direction. The global tracker eliminates the user's viewpoint limitation but cannot register a common space within the user's virtual world coordinates. Thus, the present invention uses the local tracker to register local markers as the basis of the common space.
For example, the user needs to look at the local objects only during the initialization stage of the remote collaboration system. The present invention can use the local tracker only once, in this initial setup stage, and provide unrestricted views using the global tracker during the remaining stages of the remote collaboration session.
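The hybrid policy above, where the local tracker is consulted once at initialization and the global tracker thereafter, could be sketched as follows. The class and the tracker callables are illustrative placeholders, not the disclosed implementation.

```python
# Sketch of the hybrid tracking policy: the local (marker-based)
# tracker is consulted once, at initialization, to fix the common-space
# origin; afterwards only the global tracker is used, so the view
# direction is unrestricted. Tracker callables are placeholders.

class HybridTracker:
    def __init__(self, local_tracker, global_tracker):
        self.local = local_tracker     # needs the marker in view
        self.glob = global_tracker     # unrestricted view direction
        self.space_origin = None

    def update(self):
        if self.space_origin is None:  # initialization stage only
            self.space_origin = self.local()
        return self.space_origin, self.glob()

tracker = HybridTracker(lambda: "marker-pose", lambda: "head-pose")
origin, head = tracker.update()
```

After the first call, `update()` never touches the local tracker again, which is exactly the one-time initialization described above.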
When the common space setting unit 110 of the present invention obtains the pose of a local object registered by the local tracker, the common space coordinate information is calculated. The poses of the registered local objects serve as the basis for the virtual objects shared in the user's space. The user's hand or body data is transformed into local coordinates based on this base object pose and transmitted to the remote user's space.
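The transformation into local coordinates based on the base object pose can be sketched with a homogeneous transform; the example pose below is an illustrative assumption.

```python
import numpy as np

# Sketch: converting a world-space hand position into the shared
# object's local coordinates via the inverse of the object's pose.
# The example pose is an illustrative assumption.

def to_local(point_world, object_pose):
    """object_pose is a 4x4 world-from-object transform; its inverse
    maps a world-space point into object coordinates."""
    p = np.append(point_world, 1.0)    # homogeneous coordinates
    return (np.linalg.inv(object_pose) @ p)[:3]

pose = np.eye(4)
pose[0, 3] = 1.0                       # object 1 m along world x
local = to_local(np.array([1.5, 0.0, 0.0]), pose)  # -> [0.5, 0, 0]
```

Sending object-relative coordinates is what lets the remote side reproduce the motion in its own copy of the common space, regardless of where that space sits in each room.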
The mask mesh generation unit 120 generates a mask mesh having the same shape as the user's hand. The generated mask mesh is set to be transparent or opaque, thereby providing a solution to the occlusion handling.
Occlusion handling is an important issue for the see-through HMD 200.
Accurate hand tracking is required in order to apply the mask mesh generated by the mask mesh generation unit 120 to the hand shown in the image output from the stereo camera 300.
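The occlusion rule, showing the real camera pixel wherever the tracked hand is closer to the eye than the virtual object, can be sketched per pixel as follows. Depth values and pixel labels are illustrative assumptions.

```python
# Sketch of the occlusion rule: wherever the tracked hand is closer to
# the eye than the virtual object, the real camera pixel is shown so
# the hand appears in front. Values are illustrative.

def composite_pixel(real_px, virtual_px, hand_depth, virtual_depth):
    """Return the real pixel when the hand occludes the virtual object,
    otherwise the virtual pixel (hand_depth is None off the hand)."""
    if hand_depth is not None and hand_depth < virtual_depth:
        return real_px
    return virtual_px
```

Applied over the whole frame, this per-pixel decision is what makes the bare hand appear in front of a virtual object instead of being painted over by it.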
The present invention uses the local user's hand tracking results to interact with and manipulate virtual objects. The hand tracking information and head pose are transmitted to the remote space over the network 600.
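A minimal sketch of such a tracking message is shown below. Only tracking data is carried, as described above; the field names are assumptions, not the patent's actual wire format.

```python
import json

# Sketch of a tracking message sent to the remote space. Only tracking
# data (head pose, fingertip positions) is carried; the field names
# are illustrative assumptions.

def make_tracking_message(user_id, head_pose, fingertips):
    return json.dumps({
        "user": user_id,
        "head_pose": head_pose,    # e.g. position plus orientation
        "fingertips": fingertips,  # one [x, y, z] per tracked fingertip
    })

msg = make_tracking_message(
    "remote-1",
    {"pos": [0, 1.6, 0], "quat": [0, 0, 0, 1]},
    [[0.1, 1.2, 0.3]],
)
payload = json.loads(msg)
```

Transmitting a few poses per frame instead of video keeps the bandwidth requirement low, which is one advantage of avatar-based telepresence.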
As local users and remote users share the common space, it becomes straightforward to summon the remote user into the local space as a virtual avatar. Initialization of the avatar within the real world is completed by placing a chessboard marker on the floor. The chessboard marker operates as a virtual anchor for the summoned remote space and can be physically relocated by the local user as required. In addition, the chessboard marker defines a virtual floor plane that aligns with the real-world floor plane, so the summoned avatar can be appropriately positioned on this plane.
In networking, the present invention transmits only the tracking information, such as hand tracking data and head poses.
To this end, the present invention can combine the hand information obtained from the depth sensor 400 with the tracking information of the HMD 200.
In the initial implementation of the present invention, the left camera image of the stereo camera 300 was used.
The virtual stereoscopic images are rendered by the control computer 100.
During preparation for use of the present collaboration system, a two-step calibration process is required to obtain the internal and external parameters of the cameras: 1) calibration within the same module, and 2) calibration between different modules.
In the first step, the present invention can calibrate the cameras within the same device module.
In the second step, the present invention calibrates devices across different device modules, principally using the images of the cameras. After the first step, it is assumed that two or more cameras in the same module have been properly calibrated.
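One common way to relate cameras in different modules is to have both observe a shared calibration target and chain the resulting extrinsics; this is a hedged sketch of that idea, not the patent's disclosed procedure, and the matrices are illustrative.

```python
import numpy as np

# Sketch of cross-module calibration: two cameras that both observe the
# same calibration target can be related by chaining their target
# poses. The matrices below are illustrative assumptions.

def camB_from_camA(T_a, T_b):
    """T_a, T_b: 4x4 poses of the same target in camera A's and camera
    B's frames. Chaining gives the transform from A's frame to B's:
    T_b_from_a = T_b @ inv(T_a)."""
    return T_b @ np.linalg.inv(T_a)

T_a = np.eye(4); T_a[0, 3] = 0.2    # target 0.2 m along x in camera A
T_b = np.eye(4); T_b[0, 3] = -0.1   # target -0.1 m along x in camera B
T_ba = camB_from_camA(T_a, T_b)     # camera A origin at -0.3 m in B
```

Once every device pair is related this way, all sensor data can be expressed in a single coordinate system, which is what the second calibration step needs to achieve.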
The output image generation unit combines the image of the real world and the image of the virtual world to generate the output image to be displayed on the HMD 200.
FIG. 3 is a diagram for explaining how the output image is generated according to an embodiment of the present invention.
Referring to FIG. 3(a), a real-world left image and a real-world right image are generated by the stereo camera 300.
FIG. 3(b) is a photograph showing a simulation result according to an embodiment of the present invention. Referring to FIG. 3(b), it can be seen that the real-world left image L and the real-world right image R are acquired from the stereo camera 300.
Referring to (OUTPUT) in FIG. 3 (b), it can be seen that the avatar, the chess board, and the transparent common space of sky blue are displayed on the existing real-world left image L and the real world right image R. As described above, a virtual object such as an avatar, a chess board, and a common space is a result of combining an image of a virtual world into a real-world image. In this embodiment, the shader is set to the transparent state and the user's hand is displayed as a completely opaque state. However, in another embodiment of the present invention, the mask mesh may be displayed and the user's hand may be seen in the see-through state.
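The compositing shown in FIG. 3, with a transparent or opaque mask, can be sketched as a per-pixel alpha blend; all pixel values here are illustrative assumptions.

```python
# Sketch of the final compositing: the virtual layer is blended over
# the real camera image, with alpha = 1 where the mask is fully opaque.
# All pixel values are illustrative.

def blend(real, virtual, alpha):
    """Per-pixel blend: alpha = 1 shows only the virtual layer,
    alpha = 0 shows only the real camera image."""
    return [r * (1 - a) + v * a for r, v, a in zip(real, virtual, alpha)]

out = blend([100, 100, 100], [0, 200, 50], [0.0, 1.0, 0.5])
# -> [100.0, 200.0, 75.0]
```

Setting alpha between 0 and 1 over the mask mesh is what produces the see-through hand, while alpha = 0 there leaves the hand fully opaque, matching the two display modes described above.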
FIG. 4 is a flowchart illustrating an operation according to an exemplary embodiment of the present invention.
First, a common space in which a user in a local area and a user in a remote area can collaborate using a common object is created in an image of a virtual world. (S1)
Next, the image of the real world including the common space is acquired from the stereo camera. (S2)
Next, the position of the user's hand in the image of the real world is determined from the hand tracking information obtained from the depth sensor. (S3)
Next, using the hand tracking information, a mask mesh is generated at a position corresponding to the position of the user's hand and displayed on the image of the virtual world. (S4)
Next, using the HMD tracking information and the hand tracking information of the remote space user, an avatar to be displayed on the image of the virtual world is generated. (S5)
Next, the image of the real world and the image of the virtual world are combined with each other to generate an output image in which a common space, a mask mesh, and an avatar are displayed on the image of the real world. (S6)
Finally, the output image is displayed on the HMD. (S7)
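The steps S1 to S7 above can be sketched as one frame of a render loop. Every function here is a trivial placeholder; a real system would substitute its camera, depth sensor, network feed, and HMD drivers.

```python
# Steps S1-S7 as a single frame of a render loop. All functions are
# trivial placeholders standing in for the system's real components.

def create_common_space():      return {"objects": ["chessboard"]}          # S1
def capture_stereo():           return {"L": "left-img", "R": "right-img"}  # S2
def track_hand():               return [0.1, 1.2, 0.3]                      # S3
def build_mask_mesh(hand_pos):  return {"at": hand_pos}                     # S4
def build_avatar(head, hand):   return {"head": head, "hand": hand}         # S5

def composite(real, space, mask, avatar):                                   # S6
    return {"real": real, "layers": [space, mask, avatar]}

def frame(remote_head, remote_hand):
    space = create_common_space()
    real = capture_stereo()
    mask = build_mask_mesh(track_hand())
    avatar = build_avatar(remote_head, remote_hand)
    return composite(real, space, mask, avatar)  # S7: display on the HMD

output = frame([0, 1.6, 0], [0.2, 1.1, 0.4])
```

In practice S1 runs once (with the local tracker) while S2 through S7 repeat every frame, as the initialization discussion above implies.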
The specific operations described in the present invention are examples and are not intended to limit the scope of the invention in any way. For brevity of description, descriptions of conventional electronic configurations, control systems, software, and other functional aspects of such systems may be omitted. Also, the connections or connecting members between the components shown in the figures illustrate functional connections and/or physical or circuit connections, which may be replaced or supplemented by a variety of functional, physical, or circuit connections in an actual device. Also, unless a component is specifically described with terms such as "essential" or "important", it may not be a necessary element for the application of the present invention.
The use of the term "the" and similar referring words in the specification of the present invention (particularly in the claims) may cover both the singular and the plural. In addition, when a range is described in the present invention, the invention encompasses the application of the individual values belonging to that range (unless stated otherwise), as if each individual value constituting the range were described in the detailed description. Finally, the steps constituting the method according to the invention may be performed in any suitable order unless an order is explicitly stated or the contrary is evident from context; the present invention is not necessarily limited to the order in which the steps are described. The use of all examples or exemplary language (e.g., "etc.") is merely for describing the present invention in detail, and the scope of the present invention is not limited by these examples or exemplary language unless limited by the claims. It will also be appreciated by those skilled in the art that various modifications, combinations, and alterations may be made according to design conditions and factors within the scope of the appended claims or their equivalents.
The embodiments of the present invention described above can be implemented in the form of program instructions that can be executed through various computer components and recorded on a computer-readable recording medium. The computer-readable recording medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the computer-readable recording medium may be those specifically designed and configured for the present invention, or those known to and usable by those skilled in the computer software art. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine language code, such as that generated by a compiler, as well as high-level language code that can be executed by a computer using an interpreter or the like. The hardware device may be converted into one or more software modules for performing the processing according to the present invention, and vice versa.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed embodiments. Those skilled in the art will appreciate that various modifications and changes may be made thereto without departing from the scope of the present invention.
Accordingly, the spirit of the present invention should not be construed as being limited to the above-described embodiments, and the appended claims as well as all ranges equivalent to the claims fall within the scope of the present invention.
100: control computer 200: HMD
300: Stereo camera 400: Depth sensor
500: External camera 600: Network
Claims (3)
Creating a common space in a virtual world in which a user in a local area and a user in a remote area can collaborate using a common object;
Obtaining an image of a real world including the common space from a stereo camera;
Determining a position of the user's hand in the image of the real world from the hand tracking information obtained from the depth sensor;
Using the hand tracking information, generating a mask mesh that exists at a position corresponding to a position of the user's hand and is displayed on an image of the virtual world;
Using HMD tracking information and hand tracking information of a remote space user to generate an avatar to be displayed on an image of the virtual world;
Combining the image of the real world and the image of the virtual world to generate an output image in which the common space, the mask mesh, and the avatar are displayed on a real-world image;
Displaying the output image on the HMD;
Wherein the collaborative method uses an HMD.
Wherein the generating of the avatar comprises generating body motion of the avatar using the body tracking information acquired from the external camera.
The creating of the common space may include creating a common space using a global tracker and a local tracker,
Wherein the local tracker is used only in an initialization step of the common space.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150143855A KR101763636B1 (en) | 2015-10-15 | 2015-10-15 | Method for collaboration using head mounted display |
PCT/KR2015/013636 WO2017065348A1 (en) | 2015-10-15 | 2015-12-14 | Collaboration method using head mounted display |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150143855A KR101763636B1 (en) | 2015-10-15 | 2015-10-15 | Method for collaboration using head mounted display |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20170044318A true KR20170044318A (en) | 2017-04-25 |
KR101763636B1 KR101763636B1 (en) | 2017-08-02 |
Family
ID=58517342
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150143855A KR101763636B1 (en) | 2015-10-15 | 2015-10-15 | Method for collaboration using head mounted display |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR101763636B1 (en) |
WO (1) | WO2017065348A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10250845B1 (en) | 2017-10-19 | 2019-04-02 | Korea Institute Of Science And Technology | Remote collaboration system with projector-camera based robot device and head mounted display, and remote interaction method using the same |
WO2019124726A1 (en) * | 2017-12-19 | 2019-06-27 | (주) 알큐브 | Method and system for providing mixed reality service |
KR20210059079A (en) * | 2019-11-13 | 2021-05-25 | 경일대학교산학협력단 | Hand tracking system using epth camera and electromyogram sensors |
KR102377988B1 (en) * | 2021-09-30 | 2022-03-24 | 주식회사 아진엑스텍 | Method and device for assisting collaboration with robot |
KR102458491B1 (en) * | 2022-03-17 | 2022-10-26 | 주식회사 메디씽큐 | System for providing remote collaborative treatment for tagging realtime surgical video |
WO2023163376A1 (en) * | 2022-02-25 | 2023-08-31 | 계명대학교 산학협력단 | Virtual collaboration non-contact real-time remote experimental system |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11475652B2 (en) | 2020-06-30 | 2022-10-18 | Samsung Electronics Co., Ltd. | Automatic representation toggling based on depth camera field of view |
GB202020196D0 (en) * | 2020-12-18 | 2021-02-03 | Univ Dublin Technological | Virtual reality environment |
US20220350299A1 (en) * | 2021-05-03 | 2022-11-03 | Digital Multiverse, Inc. | Real-Virtual Hybrid Venue |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120218395A1 (en) * | 2011-02-25 | 2012-08-30 | Microsoft Corporation | User interface presentation and interactions |
US9122321B2 (en) * | 2012-05-04 | 2015-09-01 | Microsoft Technology Licensing, Llc | Collaboration environment using see through displays |
KR20140108428A (en) * | 2013-02-27 | 2014-09-11 | 한국전자통신연구원 | Apparatus and method for remote collaboration based on wearable display |
KR102387314B1 (en) * | 2013-03-11 | 2022-04-14 | 매직 립, 인코포레이티드 | System and method for augmented and virtual reality |
US9524588B2 (en) * | 2014-01-24 | 2016-12-20 | Avaya Inc. | Enhanced communication between remote participants using augmented and virtual reality |
-
2015
- 2015-10-15 KR KR1020150143855A patent/KR101763636B1/en active IP Right Grant
- 2015-12-14 WO PCT/KR2015/013636 patent/WO2017065348A1/en active Application Filing
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10250845B1 (en) | 2017-10-19 | 2019-04-02 | Korea Institute Of Science And Technology | Remote collaboration system with projector-camera based robot device and head mounted display, and remote interaction method using the same |
WO2019124726A1 (en) * | 2017-12-19 | 2019-06-27 | (주) 알큐브 | Method and system for providing mixed reality service |
US11206373B2 (en) | 2017-12-19 | 2021-12-21 | R Cube Co., Ltd. | Method and system for providing mixed reality service |
KR20210059079A (en) * | 2019-11-13 | 2021-05-25 | 경일대학교산학협력단 | Hand tracking system using epth camera and electromyogram sensors |
KR102377988B1 (en) * | 2021-09-30 | 2022-03-24 | 주식회사 아진엑스텍 | Method and device for assisting collaboration with robot |
WO2023163376A1 (en) * | 2022-02-25 | 2023-08-31 | 계명대학교 산학협력단 | Virtual collaboration non-contact real-time remote experimental system |
KR102458491B1 (en) * | 2022-03-17 | 2022-10-26 | 주식회사 메디씽큐 | System for providing remote collaborative treatment for tagging realtime surgical video |
WO2023177002A1 (en) * | 2022-03-17 | 2023-09-21 | 주식회사 메디씽큐 | Remote collaboration support system in which real-time surgical video can be tagged |
Also Published As
Publication number | Publication date |
---|---|
KR101763636B1 (en) | 2017-08-02 |
WO2017065348A1 (en) | 2017-04-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101763636B1 (en) | Method for collaboration using head mounted display | |
US11928838B2 (en) | Calibration system and method to align a 3D virtual scene and a 3D real world for a stereoscopic head-mounted display | |
US10622111B2 (en) | System and method for image registration of multiple video streams | |
Azuma | Augmented reality: Approaches and technical challenges | |
JP4804256B2 (en) | Information processing method | |
US10942024B2 (en) | Information processing apparatus, information processing method, and recording medium | |
CN105354820B (en) | Adjust the method and device of virtual reality image | |
US9538167B2 (en) | Methods, systems, and computer readable media for shader-lamps based physical avatars of real and virtual people | |
JP6364022B2 (en) | System and method for role switching in a multiple reality environment | |
KR101295471B1 (en) | A system and method for 3D space-dimension based image processing | |
JP2022502800A (en) | Systems and methods for augmented reality | |
JP2013061937A (en) | Combined stereo camera and stereo display interaction | |
US20050264559A1 (en) | Multi-plane horizontal perspective hands-on simulator | |
CN109598796A (en) | Real scene is subjected to the method and apparatus that 3D merges display with dummy object | |
US20210304509A1 (en) | Systems and methods for virtual and augmented reality | |
JP2023100820A (en) | Photo-real character configuration for spatial computing | |
CN108830944B (en) | Optical perspective three-dimensional near-to-eye display system and display method | |
Noh et al. | An HMD-based Mixed Reality System for Avatar-Mediated Remote Collaboration with Bare-hand Interaction. | |
US11210843B1 (en) | Virtual-world simulator | |
Schönauer et al. | Wide area motion tracking using consumer hardware | |
CN110060349B (en) | Method for expanding field angle of augmented reality head-mounted display equipment | |
Saraiji et al. | Real-time egocentric superimposition of operator's own body on telexistence avatar in virtual environment | |
WO2021166751A1 (en) | Information-processing device, information-processing method, and computer program | |
JP7488210B2 (en) | Terminal and program | |
WO2022249592A1 (en) | Information processing device, information processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant |