CN117453037A - Interactive method, head display device, electronic device and readable storage medium - Google Patents

Interactive method, head display device, electronic device and readable storage medium

Info

Publication number
CN117453037A
CN117453037A (application CN202311361553.5A)
Authority
CN
China
Prior art keywords
virtual
target
space
virtual reality
pose information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311361553.5A
Other languages
Chinese (zh)
Inventor
杨天翼
尹子硕
陈昊芝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Positive Negative Infinite Technology Co ltd
Original Assignee
Beijing Positive Negative Infinite Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Positive Negative Infinite Technology Co ltd filed Critical Beijing Positive Negative Infinite Technology Co ltd
Priority to CN202311361553.5A priority Critical patent/CN117453037A/en
Publication of CN117453037A publication Critical patent/CN117453037A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application provides an interaction method, a head display device, an electronic device and a readable storage medium, and relates to the technical field of virtual reality. The method comprises the following steps: collecting real-time pose information of a target part of a target object in a physical space; mapping the pose information to a virtual part in a preset virtual space, wherein the virtual space comprises a target virtual object; and displaying a virtual reality picture in real time within the line of sight of the target object, wherein the virtual reality picture shows the result of interacting with the target virtual object based on the real-time pose information of the virtual part in the virtual space. By letting the target object watch the interaction result in this picture, the method relieves the fatigue caused by the target object lowering the head or raising the hand to watch the target part interact with the target virtual object, and at the same time improves the accuracy of the interaction.

Description

Interactive method, head display device, electronic device and readable storage medium
Technical Field
The application relates to the technical field of virtual reality, and in particular to an interaction method, a head display device, an electronic device and a readable storage medium.
Background
After wearing an augmented reality (Augmented Reality, AR) or mixed reality (Mixed Reality, MR) head display device, a target object can simultaneously view the real physical space and the virtual space displayed by the head display device, and can use a target part (e.g., a hand) to interact with a virtual object (e.g., a virtual keyboard) in the virtual space.
When the target object interacts with the virtual object directly by hand, the lack of tactile feedback makes it difficult to operate the virtual object blindly by touch alone (also called eyes-free interaction). The target object therefore needs to watch, through the head display device, the interaction between the hand in the physical space and the virtual object in the virtual space.
However, because the field of view of the target object wearing the head display device is limited, it is difficult to see the interaction between the hand and the virtual object. To see it, the target object either has to lower the head, but for physiological reasons obvious fatigue and discomfort appear once the head is rotated downward by more than 30 degrees, or has to raise the hand, which quickly tires the upper arm in use cases where the hand is not supported (e.g., while standing).
Disclosure of Invention
The embodiments of the present application provide an interaction method, a head display device, an electronic device, a computer-readable storage medium and a computer program product, which are used to solve the technical problems described in the background.
According to a first aspect of an embodiment of the present application, there is provided an interaction method, applied to a head display device, the method including:
collecting real-time pose information of a target part of a target object in a physical space;
mapping the pose information to a virtual part in a preset virtual space, wherein the virtual space comprises a target virtual object; the virtual space is the virtual space displayed by the head display device;
displaying a virtual reality picture in real time within the line of sight of the target object; the virtual reality picture shows the result of interacting with the target virtual object based on the real-time pose information of the virtual part in the virtual space.
According to a second aspect of embodiments of the present application, there is provided a head-display apparatus including:
the acquisition module is used for acquiring real-time pose information of a target part of a target object in a physical space;
the mapping module is used for mapping the pose information to a virtual part in a preset virtual space, wherein the virtual space comprises a target virtual object; the virtual space is the virtual space displayed by the head display device;
the display module is used for displaying the virtual reality picture in real time within the line of sight of the target object; the virtual reality picture shows the result of interacting with the target virtual object based on the real-time pose information of the virtual part in the virtual space.
In one possible implementation, the head display device further includes:
the target area determining module is used for determining a target area in the virtual space, wherein the target area comprises a virtual part and a target virtual object;
and the virtual reality picture acquisition module is used for acquiring pictures of the target area in real time through a preset virtual camera to obtain virtual reality pictures.
In one possible implementation, the pose information includes pose information and a first target coordinate in a first coordinate system; the first coordinate system is a coordinate system established in a physical space;
the mapping module is specifically used for acquiring a pre-established conversion rule between the first coordinate system and a second coordinate system, the second coordinate system being a coordinate system established in the virtual space; converting the first target coordinate in the first coordinate system into a second target coordinate in the second coordinate system according to the conversion rule; and displaying the virtual part at the second target coordinate in the virtual space and mapping the posture information to the virtual part.
In one possible implementation, the head display device further includes:
the target virtual object determining module is used for determining the target virtual object in any one of the following modes:
responding to the triggering operation of the target object on the target control, determining the gaze point of the target object, and taking the virtual object where the gaze point is located as a target virtual object; the target control is a physical control of the head display device or a physical control or a virtual control of an external device of the head display device;
setting a preset interaction area in the virtual space, and taking a virtual object in the preset interaction area as a target virtual object;
in response to the virtual part triggering a virtual object into an activated state, taking the virtual object in the activated state as the target virtual object;
and determining the virtual object contacted by the virtual part, and taking the virtual object contacted by the virtual part as a target virtual object.
In one possible implementation, the presentation module includes:
the display area setting sub-module is used for setting a display area in the virtual space, wherein the display area is positioned in a sight line range;
and the display sub-module is used for displaying the virtual reality picture in the display area.
The embodiment of the application provides a possible implementation manner, and the virtual reality picture acquisition module is further used for setting a plurality of virtual cameras in the virtual space, and simultaneously acquiring a plurality of virtual reality pictures with different visual angles through the plurality of virtual cameras;
The display submodule is specifically used for correspondingly displaying a virtual reality picture in each display area if the sight line range comprises a plurality of display areas; if the sight range comprises a display area, splicing the plurality of virtual reality pictures to obtain a spliced virtual reality picture, and displaying the spliced virtual reality picture in the display area.
In one possible implementation manner, the display submodule is specifically further configured to, if the virtual reality image is a two-dimensional image, perform at least one of clipping, changing a projection angle, and changing transparency on the virtual reality image, and display the processed virtual reality image in the display area.
In a possible implementation manner, the display submodule is specifically further used for identifying the virtual reality picture if the virtual reality picture is a three-dimensional image, and obtaining three-dimensional model data of the virtual part and the target virtual object and relative pose information between the virtual part and the target virtual object; respectively generating a three-dimensional model of the virtual part and a three-dimensional model of the target virtual object according to the three-dimensional model data of the virtual part and the target virtual object; in the display area, a three-dimensional model of the virtual part and a three-dimensional model of the target virtual object are displayed according to the relative pose information.
According to a third aspect of embodiments of the present application, there is provided an electronic device comprising a memory, a processor and a computer program stored on the memory, the processor implementing the steps of the method as provided in the first aspect when the program is executed.
According to a fourth aspect of embodiments of the present application, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method as provided by the first aspect.
According to a fifth aspect of embodiments of the present application, there is provided a computer program product comprising computer instructions stored in a computer-readable storage medium; when a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, the computer device performs the steps of the method as provided by the first aspect.
The technical solutions provided by the embodiments of the present application bring the following beneficial effects:
The embodiment of the application collects real-time pose information of a target part of a target object in a physical space; maps the pose information to a virtual part in a preset virtual space, the virtual space comprising a target virtual object; and displays a virtual reality picture in real time within the line of sight of the target object, the picture showing the result of interacting with the target virtual object based on the real-time pose information of the virtual part in the virtual space. By watching this virtual reality picture, the target object can carry out the interaction between its target part and the target virtual object, which relieves the fatigue caused by lowering the head or raising the hand to watch the target part interact with the target virtual object; and because the interaction is watched, the accuracy of the interaction is also improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings that are required to be used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic flow chart of an interaction method provided in an embodiment of the present application;
fig. 2 is a schematic diagram showing a virtual reality screen in a specific scene according to an embodiment of the present application;
fig. 3a is a schematic diagram of an application scenario for determining a target virtual object according to an embodiment of the present application;
fig. 3b is a schematic view of an application scenario for displaying a virtual reality screen in real time within a line of sight range of a target object according to an embodiment of the present application;
fig. 3c is a schematic diagram of an application scenario in which the shape and pose information of the three-dimensional model of the virtual hand and the three-dimensional model of the virtual teapot provided in the embodiment of the present application are changed along with the change of the shape and pose information of the virtual hand and the virtual teapot;
fig. 4 is a schematic structural diagram of a head display device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the drawings in the present application. It should be understood that the embodiments described below with reference to the drawings are exemplary descriptions for explaining the technical solutions of the embodiments of the present application, and the technical solutions of the embodiments of the present application are not limited.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and "comprising," when used in this application, specify the presence of stated features, information, data, steps, operations, elements and/or components, but do not preclude the presence or addition of other features, information, data, steps, operations, elements, components and/or groups thereof, all of which may be included in the present application. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein indicates at least one of the items defined by the term; for example, "A and/or B" may be implemented as "A", as "B", or as "A and B".
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Several terms which are referred to in this application are first introduced and explained:
Virtual Reality (VR) refers to using a VR device to simulate and generate a completely virtual space; after wearing the head display device, the target object enters the virtual space and obtains an immersive experience.
Augmented reality (Augmented Reality, AR) refers to superimposing virtual information into the real world, combining a real scene with a virtual scene.
Mixed Reality (MR) refers to mixing the real physical space and a virtual space to generate a new visual space that contains both physical entities and virtual objects. Unlike an AR device, whose displayed virtual objects move along with the movement of the device, the virtual objects displayed by an MR device do not move along with the movement of the device.
If the target object wears a VR device, the target object sees only the picture in the virtual space and cannot see the picture in the physical space; if the target object wears an AR or MR device, the target object can simultaneously view the physical space and the virtual space, and can use a target part (e.g., a hand) to interact with a virtual object (e.g., a virtual keyboard or a virtual box) in the virtual space.
However, when the target object interacts with a virtual object directly through the target part, the lack of haptic feedback makes it difficult to operate the virtual object blindly by touch alone (also called eyes-free interaction). The target object therefore needs to watch, through the head display device, the interaction between the target part in the physical space and the virtual object in the virtual space.
However, because the field of view of the target object wearing the head display device is limited, it is difficult to see the interaction between the hand and the virtual object. To see it, the target object either has to lower the head, but for physiological reasons obvious fatigue and discomfort appear once the head is rotated downward by more than 30 degrees, or has to raise the hand, which quickly tires the upper arm in use cases where the hand is not supported (e.g., while standing).
The interactive method, the head display device, the electronic device, the computer readable storage medium and the computer program product provided by the application aim to solve the technical problems in the prior art.
The technical solutions of the embodiments of the present application and technical effects produced by the technical solutions of the present application are described below by describing several exemplary embodiments. It should be noted that the following embodiments may be referred to, or combined with each other, and the description will not be repeated for the same terms, similar features, similar implementation steps, and the like in different embodiments.
The embodiment of the application provides an interaction method applied to a head display device, as shown in fig. 1, the method includes:
step S101, acquiring real-time pose information of a target part of a target object in a physical space.
The interaction method of the embodiment of the application is applied to a head display device, and the head display device can be any one of an AR device, a VR device and an MR device.
In the embodiment of the application, the head display device is worn on the head of a target object. The target object may be a person or an animal, and may also be a robot capable of moving. The target object is located in the real physical space (also called the physical world); after wearing the head display device, the target object can view the virtual space (also called the virtual world) displayed by the head display device. The virtual space comprises at least one interactive virtual object, which may be a preset virtual object or a virtual model generated based on an object in the physical space.
The target part of the target object can be a hand or a finger, and after the head display device is worn by the target object, the target object can select a target virtual object from at least one virtual object and interact with the target virtual object.
The embodiment of the application provides a possible implementation manner, and the target virtual object can be determined by any one of the following ways:
mode one: responding to the triggering operation of the target object on the target control, determining the gaze point (also called a gaze falling point or a gaze point) of the target object, and taking the virtual object where the gaze point is located as a target virtual object; the target control is a physical control of the head display device or a physical control or a virtual control of an external device of the head display device.
In an optional embodiment, when the target object has an interaction intention (meaning that the intention interacts with the virtual object), the triggering operation is performed on the target control, where the target control may be a physical control on the head display device, for example, the triggering operation is performed by pressing the physical control for a long time, the target control may also be a physical control or a virtual control of an external device of the head display device, where the external device may be any terminal that may be connected to the head display device, for example, a smart watch, a mobile phone, etc., and the virtual control of the external device may be, for example, a control displayed in a screen of the external device, and the triggering operation may be performed by double clicking the virtual control.
After the head display device detects the triggering operation of the target object on the target control, the target object is determined to have an interactive intention, the eye tracking is carried out on the sight line of the target object, the head display device further comprises an eye tracking unit, the gaze point of the user can be obtained in real time through the eye tracking unit, the virtual object where the gaze point is located is determined, and the virtual object where the gaze point is located is taken as the target virtual object.
Mode two: and setting a preset interaction area in the virtual space, and taking the virtual object in the preset interaction area as a target virtual object.
In an optional implementation manner, a preset interaction area may also be set in the virtual space. When the target object intends to interact with a virtual object, the virtual object can be moved into the preset interaction area (either by moving the head display device or by moving the virtual object with the virtual hand), and the virtual object located in the preset interaction area is taken as the target virtual object.
Mode three: in response to the virtual part triggering a virtual object into an activated state, taking the virtual object in the activated state as the target virtual object.
In an alternative embodiment, when the target object has an interaction intention, the virtual object may be triggered into an activated state through the virtual part (for example, when the target part double-clicks the virtual object in the physical space, the virtual part correspondingly double-clicks and thereby activates the virtual object), and the virtual object in the activated state is determined to be the target virtual object with which the target object intends to interact.
Mode four: and determining the virtual object contacted by the virtual part, and taking the virtual object contacted by the virtual part as a target virtual object.
In an alternative embodiment, the virtual object contacted by the virtual part may be determined, and the virtual object contacted by the virtual part is determined to be the target virtual object with which the target object intends to interact.
Of course, in addition to the four embodiments described above, the target virtual object may be determined by other methods, which are not limited in this application.
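By way of illustration of mode one above, the following is a minimal sketch of selecting the target virtual object from a gaze ray. The VirtualObject class, the bounding-sphere hit test and the example coordinates are illustrative assumptions introduced for this sketch and do not limit the implementation; in practice the eye-tracking unit supplies the gaze origin and direction, and the rendering engine's own ray cast would replace the sphere test.

```python
# Minimal sketch: pick the virtual object hit by the gaze ray as the target virtual object.
from dataclasses import dataclass
import numpy as np

@dataclass
class VirtualObject:
    name: str
    center: np.ndarray   # position in the virtual space
    radius: float        # bounding-sphere radius used for the hit test (assumed shape)

def pick_gazed_object(gaze_origin, gaze_dir, objects):
    """Return the nearest virtual object intersected by the gaze ray, or None."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    best, best_t = None, float("inf")
    for obj in objects:
        oc = obj.center - gaze_origin
        t = float(np.dot(oc, gaze_dir))            # distance along the ray to the closest point
        if t < 0:
            continue                               # object is behind the viewer
        closest = gaze_origin + t * gaze_dir
        if np.linalg.norm(obj.center - closest) <= obj.radius and t < best_t:
            best, best_t = obj, t
    return best

# Example: the gaze point falls on the virtual keyboard, so it becomes the target virtual object.
objects = [VirtualObject("keyboard", np.array([0.0, -0.2, 0.6]), 0.25),
           VirtualObject("teapot",   np.array([0.5,  0.0, 1.0]), 0.15)]
target = pick_gazed_object(np.zeros(3), np.array([0.0, -0.3, 1.0]), objects)
print(target.name if target else "no target")
```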
In order to facilitate the target object watching the interaction between its target part and the target virtual object in the virtual space, real-time pose information of the target part of the target object in the physical space can be collected, wherein the pose information comprises the position and the posture of the target part.
In practical application, the real-time pose information of the target part can be collected through a physical camera carried by the head display device. The physical camera may be a depth camera, an infrared camera or the like, and may be placed at a position or angle convenient for observing the target part.
Step S102, mapping pose information to a virtual part in a preset virtual space, wherein the virtual space comprises a target virtual object; the virtual space is a virtual space displayed by the head display device.
The embodiment of the application aims to obtain and display a picture of the interaction between the target part of the target object and the target virtual object. However, because the target part is in the physical space and the target virtual object is in the virtual space, the two are in different spaces, and a picture of the interaction between them cannot be captured directly; for this reason, a virtual part corresponding to the target part is generated in the virtual space through a preset three-dimensional model generation algorithm.
For example, if the target part is a hand, the preset three-dimensional model generation algorithm is a hand skeleton tracking algorithm: a hand image of the target object containing the hand is collected by the physical camera, the three-dimensional coordinates of each skeleton point of the hand are identified by the hand skeleton tracking algorithm, and a virtual hand is constructed based on the three-dimensional coordinates of the skeleton points.
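As a rough illustration of the hand skeleton tracking step, the following minimal sketch turns per-joint three-dimensional coordinates into a simple virtual hand. The 21-keypoint layout and the bone list are common conventions assumed here for illustration; the patent does not fix a particular keypoint format.

```python
# Minimal sketch: build a virtual hand (a list of bones) from tracked 3D skeleton points.
import numpy as np

# Assumed 21-keypoint layout: wrist (0) plus four joints per finger.
HAND_BONES = [(0, 1), (1, 2), (2, 3), (3, 4),        # thumb
              (0, 5), (5, 6), (6, 7), (7, 8),        # index
              (0, 9), (9, 10), (10, 11), (11, 12),   # middle
              (0, 13), (13, 14), (14, 15), (15, 16), # ring
              (0, 17), (17, 18), (18, 19), (19, 20)] # little

def build_virtual_hand(keypoints_3d):
    """Turn per-joint 3D coordinates into a bone list that a renderer can draw."""
    pts = np.asarray(keypoints_3d, dtype=float).reshape(21, 3)
    return [(pts[a], pts[b]) for a, b in HAND_BONES]

# Each frame, the tracked coordinates drive the virtual hand so it follows the real hand.
frame_keypoints = np.random.rand(21, 3)      # stand-in for hand-tracker output
virtual_hand = build_virtual_hand(frame_keypoints)
print(len(virtual_hand), "bones")
```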
After the real-time pose information of the target part in the physical space is obtained, the pose information is mapped to the virtual part in the virtual space. Specifically, the pose information of the target part comprises the posture information of the target part and a first target coordinate in a first coordinate system, where the first coordinate system is a coordinate system established in the physical space.
In addition, a second coordinate system is established in the virtual space, that is, the second coordinate system is the coordinate system established in the virtual space, and a conversion rule between the first coordinate system and the second coordinate system is established. After the posture information of the target part and the first target coordinate in the first coordinate system are obtained, the first target coordinate in the first coordinate system can be converted into a second target coordinate in the second coordinate system according to the conversion rule; the first target coordinate and the second target coordinate may both be three-dimensional space coordinates.
For the posture information, the posture information of the target part in the physical space can be mapped directly to the virtual part, so that the target part in the physical space and the virtual part in the virtual space keep the same posture information.
If the first target coordinate and the second target coordinate coincide and the target part and the virtual part have the same posture information, the target part and the virtual part appear "superimposed" on each other.
Step S103, displaying a virtual reality picture in real time within the line of sight of the target object; the virtual reality picture shows the result of interacting with the target virtual object based on the real-time pose information of the virtual part in the virtual space.
In order to facilitate the target object watching the interaction between the target part and the target virtual object, the embodiment of the application displays a virtual reality picture in real time within the line of sight of the target object. The virtual reality picture shows the result of interacting with the target virtual object based on the real-time pose information of the virtual part in the virtual space, and the interaction result comprises the virtual part and the target virtual object that changes along with the interaction (for example, changes in shape or color).
Displaying the virtual reality picture amounts to projecting the picture of the interaction between the virtual part and the target virtual object to another place in the virtual space, a place located within the line of sight of the target object. This allows the head of the target object to turn in other directions while the target object interacts with the target virtual object, and the target virtual object being interacted with can still be observed.
In an alternative embodiment, two modes may be set for the head display device, namely a normal mode and a projection mode. In the normal mode, the head display device does not display the virtual reality picture; in the projection mode, it does. The head display device can be triggered to enter the projection mode by triggering a preset virtual button in the virtual space through the virtual part, or by the target object triggering a preset physical button on the head display device. In the projection mode, the real-time pose information of the target part in the physical space is mapped to the virtual part in the preset virtual space, and the virtual reality picture is displayed in real time within the line of sight of the target object, showing the result of interacting with the target virtual object based on the real-time pose information of the virtual part in the virtual space.
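The normal/projection mode switch described above can be summarized by the following minimal sketch. The class and method names (map_pose_to_virtual_part, show_projection) are placeholders for the steps of the method, not an actual head display SDK, and the per-frame structure is an illustrative assumption.

```python
# Minimal sketch: toggling between normal mode and projection mode.
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()
    PROJECTION = auto()

class HeadDisplay:
    def __init__(self):
        self.mode = Mode.NORMAL

    def on_button(self):
        # A preset virtual button or physical button toggles projection mode.
        self.mode = Mode.PROJECTION if self.mode is Mode.NORMAL else Mode.NORMAL

    def per_frame(self, pose):
        if self.mode is Mode.PROJECTION:
            virtual_part = self.map_pose_to_virtual_part(pose)
            self.show_projection(virtual_part)   # virtual reality picture within the line of sight
        # In normal mode nothing extra is displayed.

    def map_pose_to_virtual_part(self, pose):
        return pose                              # placeholder for the physical-to-virtual mapping

    def show_projection(self, virtual_part):
        print("projecting interaction view:", virtual_part)

hd = HeadDisplay()
hd.on_button()                                   # enter projection mode
hd.per_frame({"position": (0, 0, 0), "orientation": (0, 0, 0, 1)})
```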
The embodiment of the application collects real-time pose information of a target part of a target object in a physical space; maps the pose information to a virtual part in a preset virtual space, the virtual space comprising a target virtual object; and displays a virtual reality picture in real time within the line of sight of the target object, the picture showing the result of interacting with the target virtual object based on the real-time pose information of the virtual part in the virtual space. By watching this virtual reality picture, the target object can carry out the interaction between its target part and the target virtual object, which relieves the fatigue caused by lowering the head or raising the hand to watch the target part interact with the target virtual object; and because the interaction is watched, the accuracy of the interaction is also improved.
The embodiment of the application provides a possible implementation manner, in a sight range of a target object, a virtual reality picture is displayed in real time, and the method further comprises the following steps:
determining a target area in the virtual space, wherein the target area comprises a virtual part and a target virtual object;
and acquiring the picture of the target area in real time through a preset virtual camera to obtain a virtual reality picture.
In the embodiment of the application, a target area can be set in the virtual space. The target area is a three-dimensional space area whose size and shape are not limited, as long as it comprises the virtual part and the target virtual object.
In the embodiment of the application, a virtual camera is set in the virtual space. The virtual camera simulates the function of a physical camera, and the picture of the target area can be captured through the virtual camera, so as to obtain the virtual reality picture to be displayed.
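As a rough illustration of the virtual camera capturing the target area, the following sketch projects points of the target area onto an image plane with a simple pinhole model. The focal length, image size and camera position are illustrative assumptions; a real implementation would use the engine's render-to-texture path rather than projecting points by hand.

```python
# Minimal sketch: a "virtual camera" projecting target-area points onto its image plane.
import numpy as np

def project_points(points_world, cam_pos, focal=500.0, size=(640, 480)):
    """Project 3D points in the target area onto the virtual camera's image plane."""
    pts = np.asarray(points_world, dtype=float) - np.asarray(cam_pos, dtype=float)
    z = np.clip(pts[:, 2], 1e-6, None)            # avoid division by zero for points at the camera
    u = focal * pts[:, 0] / z + size[0] / 2
    v = focal * pts[:, 1] / z + size[1] / 2
    return np.stack([u, v], axis=1)

# The "virtual reality picture" is whatever the virtual camera sees of the
# virtual part and the target virtual object inside the target area.
target_area_points = np.array([[0.0, -0.1, 0.5], [0.1, -0.1, 0.5], [0.0, 0.0, 0.6]])
pixels = project_points(target_area_points, cam_pos=[0.0, 0.0, 0.0])
print(pixels.round(1))
```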
The embodiment of the application provides a possible implementation manner, wherein the pose information comprises pose information and a first target coordinate under a first coordinate system; the first coordinate system is a coordinate system established in a physical space;
mapping the pose information to a virtual part in a preset virtual space, including:
acquiring a conversion rule between a first coordinate system and a second coordinate system which are established in advance; the second coordinate system is a coordinate system established in the virtual space;
converting the first target coordinate under the first coordinate system into the second target coordinate under the second coordinate system according to the conversion rule;
and displaying the virtual part at the second target coordinate of the virtual space, and mapping the gesture information to the virtual part.
As described in the foregoing embodiment, the physical space and the virtual space are two different spaces: a first coordinate system is established in the physical space and a second coordinate system is established in the virtual space. The pose information of the target part of the target object in the physical space comprises the posture information of the target part and the first target coordinate in the first coordinate system; mapping the pose information to the virtual part in the preset virtual space therefore requires mapping both the first target coordinate and the posture information.
The pre-established conversion rule between the first coordinate system and the second coordinate system needs to be acquired; the conversion rule may be an existing coordinate-system conversion algorithm, which is not limited in the embodiment of the application. After the conversion rule is obtained, the first target coordinate in the first coordinate system is converted into the second target coordinate in the second coordinate system according to the conversion rule, the virtual part is displayed at the second target coordinate, and at the same time the posture information is mapped to the virtual part in real time, which is not repeated here.
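The conversion between the first and the second coordinate system can be illustrated with the following minimal sketch, in which a 4x4 rigid transform stands in for the pre-established conversion rule. The rotation and translation values are illustrative assumptions and do not limit the form of the conversion rule.

```python
# Minimal sketch: converting a first-coordinate-system point into the second coordinate system.
import numpy as np

def make_conversion_rule(rotation, translation):
    """Build a homogeneous transform from the first (physical) frame to the second (virtual) frame."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def map_to_virtual(first_target_coord, conversion_rule):
    """Convert a point expressed in the first coordinate system into the second coordinate system."""
    p = np.append(np.asarray(first_target_coord, dtype=float), 1.0)
    return (conversion_rule @ p)[:3]

# Example: the virtual origin is assumed to sit 1.2 m in front of the physical origin.
rule = make_conversion_rule(np.eye(3), translation=[0.0, 0.0, 1.2])
second_target_coord = map_to_virtual([0.1, -0.3, 0.4], rule)
print(second_target_coord)   # the virtual part is displayed at this second target coordinate
```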
The embodiment of the application provides a possible implementation manner, in which a virtual reality picture is displayed in real time in a sight line range of a target object, including:
Setting a display area in the virtual space, wherein the display area is positioned in a sight range;
and displaying the virtual reality picture in a display area.
In the embodiment of the application, a display area is further set in the virtual space. The display area is located within the line of sight of the target object and may be a two-dimensional area or a three-dimensional area.
After the virtual reality picture is obtained, it is displayed in the display area in real time, so that the interaction between the virtual part and the target virtual object is shown in real time. When the virtual reality picture is a two-dimensional picture, the display area can be a two-dimensional area; when the virtual reality picture is a three-dimensional picture, the display area can be a three-dimensional area. Of course, in some schemes, a two-dimensional picture can be rendered into a three-dimensional picture and displayed in a three-dimensional area, and a three-dimensional picture can be rendered into a two-dimensional picture and displayed in a two-dimensional area.
Fig. 2 is an exemplary illustration of displaying the virtual reality picture in a specific scene provided by the embodiment of the application. In this scene, the head display device is a pair of VR glasses, the target object is a person, and the target part is a hand. The virtual space shown by the VR glasses includes a virtual hand (coincident with the hand) and a virtual keyboard, and may of course also include other virtual objects (not shown in the drawing). When the target object intends to input text through the virtual keyboard, the virtual keyboard is triggered into an activated state by the virtual hand and is determined to be the target virtual object. When the hand of the target object interacts with the virtual keyboard (a key of the virtual keyboard is pressed and the pressed key is highlighted), the VR glasses collect real-time pose information of the hand and map the pose information of the hand to the virtual hand, so that the hand and the virtual hand keep the same pose information.
When the hand of the target object interacts with the virtual keyboard, the virtual hand also interacts with the virtual keyboard. The virtual space displays the text being input through the virtual keyboard and the text already input; the text already input is displayed in a text box, which records the text input through the virtual keyboard.
In addition, the VR glasses capture, through the virtual camera, the virtual reality picture corresponding to the interaction between the virtual hand and the virtual keyboard, and project this virtual reality picture within the line of sight of the target object, so that the target object can watch it, that is, the gaze point of the target object falls on this virtual reality picture. This relieves the fatigue caused by the target object lowering the head or raising the hand to watch the interaction between the hand and the target virtual object; on the other hand, by watching the interaction, the accuracy of the interaction can be improved.
The embodiment of the application provides a possible implementation manner, which is to collect a picture of a target area in real time through a preset virtual camera, and includes:
Setting a plurality of virtual cameras in a virtual space, and simultaneously acquiring a plurality of virtual reality pictures with different visual angles through the plurality of virtual cameras;
displaying the virtual reality picture in a display area, including:
if the sight line range comprises a plurality of display areas, displaying a virtual reality picture correspondingly in each display area;
if the sight range comprises a display area, splicing the plurality of virtual reality pictures to obtain a spliced virtual reality picture, and displaying the spliced virtual reality picture in the display area.
The embodiment of the application can achieve multi-window display, where each window corresponds to one display area. Specifically, a plurality of virtual cameras can be set in the virtual space, and a plurality of virtual reality pictures with different viewing angles are collected through the plurality of virtual cameras at the same time; recognition and segmentation of the plurality of virtual reality pictures can be carried out in preset software.
If the line of sight of the target object includes a plurality of display areas, one virtual reality picture can be correspondingly displayed in each display area, so that a plurality of virtual reality pictures are displayed. The position and size of the window corresponding to each display area can be adjusted, and the windows corresponding to the display areas can be displayed tiled or stacked.
If the line of sight includes one display area, the plurality of virtual reality pictures can be spliced; the pictures can be spliced according to contents with different viewing angles or attributes, so as to form more complex scenes and effects.
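As a rough illustration of splicing several virtual-camera pictures into one picture for a single display area, the following sketch simply concatenates same-height views side by side. The image sizes are assumed, and real splicing according to viewing angles or attributes would be more involved.

```python
# Minimal sketch: splice several virtual-camera views into one stitched picture.
import numpy as np

def stitch_views(views):
    """Concatenate same-height views side by side into a single stitched picture."""
    views = [np.asarray(v) for v in views]
    height = min(v.shape[0] for v in views)
    return np.concatenate([v[:height] for v in views], axis=1)

# Three virtual cameras capture the target area from different viewing angles.
views = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(3)]
stitched = stitch_views(views)
print(stitched.shape)   # (480, 1920, 3) -- one picture shown in the single display area
```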
In this embodiment of the present application, a possible implementation manner is provided, where displaying a virtual reality image in a display area includes:
if the virtual reality picture is a two-dimensional image, at least one of cutting, changing a projection angle and changing transparency is performed on the virtual reality picture, and the processed virtual reality picture is displayed in a display area.
The embodiment of the application can display the virtual reality picture in two dimensions. If the virtual reality picture is a two-dimensional image, at least one of cropping, changing the projection angle and changing the transparency is performed on the virtual reality picture; after this processing, the target object can conveniently distinguish the original virtual part (or target virtual object) from the virtual part (or target virtual object) shown in the display area, and the processed virtual reality picture is displayed in the display area.
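The two-dimensional processing step (cropping and changing the transparency) can be illustrated with the following minimal sketch; the RGBA layout and the alpha value are illustrative assumptions.

```python
# Minimal sketch: crop the virtual reality picture and change its transparency before display.
import numpy as np

def crop(image, top, left, height, width):
    """Keep only the region of interest of the picture."""
    return image[top:top + height, left:left + width]

def set_transparency(image_rgba, alpha):
    """Scale the alpha channel so the projected picture does not fully occlude the scene."""
    out = image_rgba.copy()
    out[..., 3] = (out[..., 3].astype(float) * alpha).astype(np.uint8)
    return out

frame = np.full((480, 640, 4), 255, dtype=np.uint8)     # stand-in virtual reality picture
processed = set_transparency(crop(frame, 40, 80, 360, 480), alpha=0.6)
print(processed.shape, processed[0, 0, 3])              # (360, 480, 4) 153
```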
In this embodiment of the present application, a possible implementation manner is provided, where a virtual reality picture is displayed in a display area, and the method further includes:
If the virtual reality picture is a three-dimensional image, identifying the virtual reality picture, and obtaining three-dimensional model data of the virtual part and the target virtual object and relative pose information between the virtual part and the target virtual object;
respectively generating a three-dimensional model of the virtual part and a three-dimensional model of the target virtual object according to the three-dimensional model data of the virtual part and the target virtual object;
in the display area, a three-dimensional model of the virtual part and a three-dimensional model of the target virtual object are displayed according to the relative pose information.
The embodiment of the application can also display the virtual reality picture in three dimensions. If the virtual reality picture is a three-dimensional image, the virtual reality picture can be identified, specifically through a simultaneous localization and mapping (Simultaneous Localization and Mapping, SLAM) algorithm, to obtain the three-dimensional model data of the virtual part and the target virtual object and the relative pose information between the virtual part and the target virtual object. The three-dimensional model data includes a mesh, a texture (or an animation), a material and the like. A three-dimensional model of the virtual part can be generated according to the three-dimensional model data of the virtual part, and a three-dimensional model of the target virtual object can be generated according to the three-dimensional model data of the target virtual object (a three-dimensional model is a polygonal representation of an object, which can be a real object or a virtual object). Specifically, the three-dimensional model data of the virtual part and of the target virtual object can be rendered separately through a preset three-dimensional renderer to obtain the corresponding three-dimensional models, and effects such as shadow and occlusion can be realized in the rendering process.
After the three-dimensional model and the relative pose information corresponding to the virtual part and the target virtual object are obtained, the relative pose information is registered to the three-dimensional model of the virtual part, so that the three-dimensional model of the virtual part and the three-dimensional model of the target virtual object are displayed according to the relative pose information in the display area.
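Displaying the two three-dimensional models according to the relative pose information can be illustrated with the following minimal sketch, in which poses are 4x4 homogeneous transforms; the relative translation used in the example is an illustrative assumption.

```python
# Minimal sketch: place the target object's model relative to the virtual part's model.
import numpy as np

def compose(pose_a, relative_pose):
    """Place a second model relative to the first: world_b = world_a @ relative."""
    return pose_a @ relative_pose

hand_model_pose = np.eye(4)                      # three-dimensional model of the virtual part
relative = np.eye(4)
relative[:3, 3] = [0.15, -0.05, 0.0]             # assumed: teapot model sits 15 cm to the right of the hand
teapot_model_pose = compose(hand_model_pose, relative)
print(teapot_model_pose[:3, 3])                  # where the target virtual object's model is drawn
```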
Fig. 3a-3c are schematic diagrams illustrating an application scenario of an interaction method according to an embodiment of the present application.
Fig. 3a is a schematic diagram of an application scenario for determining the target virtual object. Fig. 3a shows the hand (target part) of the target object touching a virtual teapot, so that the virtual hand (coincident with the hand) touches the virtual teapot, and the virtual teapot is the target virtual object.
Fig. 3b is a schematic view of an application scenario in which the virtual reality picture is displayed in real time within the line of sight of the target object. In this scene the virtual reality picture is a three-dimensional image; combining the head pose, the system displays the virtual reality picture at a position 1 m in front of the target object and displays it in three dimensions, that is, a three-dimensional model of the virtual hand and a three-dimensional model of the virtual teapot in the virtual reality picture are generated. The relative pose information of the three-dimensional model of the virtual hand and the three-dimensional model of the virtual teapot is the same as the relative pose information of the virtual hand and the virtual teapot, and the target object can watch the interaction between the two three-dimensional models, that is, the gaze point of the target object falls on the displayed three-dimensional model of the virtual hand and three-dimensional model of the virtual teapot.
Fig. 3c is a schematic diagram of an application scenario in which the shape and pose information of the three-dimensional model of the virtual hand and of the three-dimensional model of the virtual teapot change along with changes in the shape and pose information of the virtual hand and the virtual teapot. When the virtual hand stretches, rotates, scales or otherwise manipulates the virtual teapot, the shape and pose of the virtual teapot change along with these gestures; the three-dimensional model of the virtual hand changes synchronously with the virtual hand, and the three-dimensional model of the virtual teapot changes synchronously with the shape and pose of the virtual teapot, so that the same effect is presented.
The embodiment of the application provides a head display device 40, as shown in fig. 4, the head display device 40 may include:
the acquisition module 410 is configured to acquire real-time pose information of a target part of a target object in a physical space;
the mapping module 420 is configured to map the pose information to a virtual part in a preset virtual space, where the virtual space includes a target virtual object; the virtual space is the virtual space displayed by the head display device;
the display module 430 is configured to display the virtual reality picture in real time within the line of sight of the target object; the virtual reality picture shows the result of interacting with the target virtual object based on the real-time pose information of the virtual part in the virtual space.
In an embodiment of the present application, a possible implementation manner is provided, where the head display device further includes:
the target area determining module is used for determining a target area in the virtual space, wherein the target area comprises a virtual part and a target virtual object;
and the virtual reality picture acquisition module is used for acquiring pictures of the target area in real time through a preset virtual camera to obtain virtual reality pictures.
The embodiment of the application provides a possible implementation manner, wherein the pose information comprises pose information and a first target coordinate under a first coordinate system; the first coordinate system is a coordinate system established in a physical space;
the mapping module is specifically used for acquiring a pre-established conversion rule between the first coordinate system and a second coordinate system, the second coordinate system being a coordinate system established in the virtual space; converting the first target coordinate in the first coordinate system into a second target coordinate in the second coordinate system according to the conversion rule; and displaying the virtual part at the second target coordinate in the virtual space and mapping the posture information to the virtual part.
In an embodiment of the present application, a possible implementation manner is provided, where the head display device further includes:
the target virtual object determining module is used for determining the target virtual object in any one of the following modes:
Responding to the triggering operation of the target object on the target control, determining the gaze point of the target object, and taking the virtual object where the gaze point is located as a target virtual object; the target control is a physical control of the head display device or a physical control or a virtual control of an external device of the head display device;
setting a preset interaction area in the virtual space, and taking a virtual object in the preset interaction area as a target virtual object;
in response to the virtual part triggering a virtual object into an activated state, taking the virtual object in the activated state as the target virtual object;
and determining the virtual object contacted by the virtual part, and taking the virtual object contacted by the virtual part as a target virtual object.
In an embodiment of the present application, a possible implementation manner is provided, where a display module includes:
the display area setting sub-module is used for setting a display area in the virtual space, wherein the display area is positioned in a sight line range;
and the display sub-module is used for displaying the virtual reality picture in the display area.
The embodiment of the application provides a possible implementation manner, and the virtual reality picture acquisition module is further used for setting a plurality of virtual cameras in the virtual space, and simultaneously acquiring a plurality of virtual reality pictures with different visual angles through the plurality of virtual cameras;
The display submodule is specifically used for correspondingly displaying a virtual reality picture in each display area if the sight line range comprises a plurality of display areas; if the sight range comprises a display area, splicing the plurality of virtual reality pictures to obtain a spliced virtual reality picture, and displaying the spliced virtual reality picture in the display area.
The embodiment of the application provides a possible implementation manner, and the display sub-module is specifically further configured to, if the virtual reality picture is a two-dimensional image, perform at least one of cutting, changing a projection angle, and changing transparency on the virtual reality picture, and display the processed virtual reality picture in a display area.
The embodiment of the application provides a possible implementation manner, and the display sub-module is specifically further configured to identify the virtual reality picture if the virtual reality picture is a three-dimensional stereoscopic image, and obtain three-dimensional model data of the virtual part and the target virtual object and relative pose information between the virtual part and the target virtual object; respectively generating a three-dimensional model of the virtual part and a three-dimensional model of the target virtual object according to the three-dimensional model data of the virtual part and the target virtual object; in the display area, a three-dimensional model of the virtual part and a three-dimensional model of the target virtual object are displayed according to the relative pose information.
The apparatus of the embodiments of the present application may perform the method provided by the embodiments of the present application, and implementation principles of the method are similar, and actions performed by each module in the apparatus of each embodiment of the present application correspond to steps in the method of each embodiment of the present application, and detailed functional descriptions of each module of the apparatus may be referred to in the corresponding method shown in the foregoing, which is not repeated herein.
The embodiment of the application provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory; when the processor executes the computer program, the steps of the interaction method are implemented. Compared with the related art, the following can be achieved: the embodiment of the application collects real-time pose information of a target part of a target object in a physical space; maps the pose information to a virtual part in a preset virtual space, the virtual space comprising a target virtual object; and displays a virtual reality picture in real time within the line of sight of the target object, the picture showing the result of interacting with the target virtual object based on the real-time pose information of the virtual part in the virtual space. By watching this virtual reality picture, the target object can carry out the interaction between its target part and the target virtual object, which relieves the fatigue caused by lowering the head or raising the hand to watch the target part interact with the target virtual object; and because the interaction is watched, the accuracy of the interaction is also improved.
In an alternative embodiment, an electronic device is provided, as shown in fig. 5, the electronic device 5000 shown in fig. 5 includes: a processor 5001 and a memory 5003. The processor 5001 is coupled to the memory 5003, e.g., via bus 5002. Optionally, the electronic device 5000 may further include a transceiver 5004, the transceiver 5004 may be used for data interaction between the electronic device and other electronic devices, such as transmission of data and/or reception of data, etc. Note that, in practical applications, the transceiver 5004 is not limited to one, and the structure of the electronic device 5000 is not limited to the embodiment of the present application.
The processor 5001 may be a CPU (Central Processing Unit ), general purpose processor, DSP (Digital Signal Processor, data signal processor), ASIC (Application Specific Integrated Circuit ), FPGA (Field Programmable Gate Array, field programmable gate array) or other programmable logic device, transistor logic device, hardware components, or any combination thereof. Which may implement or perform the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor 5001 may also be a combination of computing functions, e.g., including one or more microprocessor combinations, a combination of a DSP and a microprocessor, etc.
Bus 5002 may include a path for transferring information between the aforementioned components. Bus 5002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 5002 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean that there is only one bus or one type of bus.
The memory 5003 may be, but is not limited to, a ROM (Read Only Memory) or other type of static storage device capable of storing static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device capable of storing information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store a computer program and that can be read by a computer.
The memory 5003 is configured to store a computer program for executing the embodiments of the present application, and its execution is controlled by the processor 5001. The processor 5001 is configured to execute the computer program stored in the memory 5003 to implement the steps shown in the foregoing method embodiments.
Among them, the electronic device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as stationary terminals such as digital TVs and desktop computers. The electronic device shown in fig. 5 is merely an example and should not impose any limitation on the functionality and scope of use of the embodiments of the present disclosure.
Embodiments of the present application provide a computer readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, may implement the steps and corresponding content of the foregoing method embodiments. Compared with the prior art, the following can be realized: real-time pose information of a target part of a target object in a physical space is collected; the pose information is mapped to a virtual part in a preset virtual space, wherein the virtual space comprises a target virtual object; and a virtual reality picture is displayed in real time within the line of sight of the target object, the virtual reality picture displaying the interaction result of the target virtual object based on the real-time pose information of the virtual part in the virtual space. The target object can thus carry out the interactive operation between its target part and the target virtual object by watching the virtual reality picture, which relieves the fatigue caused by lowering the head or raising the hand to watch the target part during the interaction with the target virtual object, and the accuracy of the interactive operation can also be improved by observing the interaction from another viewing angle.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, a computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical fiber cables, RF (radio frequency), or any suitable combination of the foregoing.
The embodiments of the present application also provide a computer program product, which includes a computer program, where the computer program, when executed by a processor, may implement the steps and corresponding content of the foregoing method embodiments. Compared with the prior art, the following can be realized: real-time pose information of a target part of a target object in a physical space is collected; the pose information is mapped to a virtual part in a preset virtual space, wherein the virtual space comprises a target virtual object; and a virtual reality picture is displayed in real time within the line of sight of the target object, the virtual reality picture displaying the interaction result of the target virtual object based on the real-time pose information of the virtual part in the virtual space. The target object can thus carry out the interactive operation between its target part and the target virtual object by watching the virtual reality picture, which relieves the fatigue caused by lowering the head or raising the hand to watch the target part during the interaction with the target virtual object, and the accuracy of the interactive operation can also be improved by observing the interaction from another viewing angle.
The terms "first," "second," "third," "fourth," "1," "2," and the like in the description and in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the present application described herein may be implemented in other sequences than those illustrated or otherwise described.
It should be understood that, although the flowcharts of the embodiments of the present application indicate the respective operation steps by arrows, the order in which these steps are implemented is not limited to the order indicated by the arrows. Unless explicitly stated herein, in some implementation scenarios of the embodiments of the present application, the steps in the flowcharts may be performed in other orders as required. Furthermore, some or all of the steps in the flowcharts may include multiple sub-steps or multiple stages depending on the actual implementation scenario. Some or all of these sub-steps or stages may be performed at the same time, or each may be performed at a different time. When they are performed at different times, the execution order of these sub-steps or stages may be flexibly configured according to requirements, which is not limited in the embodiments of the present application.
The foregoing describes only optional implementations of some implementation scenarios of the present application. It should be noted that other similar implementations adopted by those skilled in the art on the basis of the technical ideas of the present application, without departing from the technical ideas of the solution of the present application, also fall within the protection scope of the embodiments of the present application.

Claims (11)

1. An interaction method, characterized by being applied to a head display device, the method comprising:
collecting real-time pose information of a target part of a target object in a physical space;
mapping the pose information to a virtual part in a preset virtual space, wherein the virtual space comprises a target virtual object; the virtual space is a virtual space displayed by the head display device;
displaying a virtual reality picture in real time in the sight range of the target object; and the virtual reality picture displays the interaction result of the target virtual object based on the real-time pose information of the virtual part in the virtual space.
2. The method according to claim 1, wherein the displaying a virtual reality picture in real time within the line of sight of the target object further comprises:
determining a target area in the virtual space, wherein the target area comprises the virtual part and the target virtual object;
and acquiring the picture of the target area in real time through a preset virtual camera to obtain the virtual reality picture.
3. The method of claim 1, wherein the pose information comprises gesture information and a first target coordinate in a first coordinate system; the first coordinate system is a coordinate system established in the physical space;
the mapping the pose information to a virtual part in a preset virtual space comprises:
acquiring a conversion rule between a first coordinate system and a second coordinate system which are established in advance; the second coordinate system is a coordinate system established in the virtual space;
converting the first target coordinate under the first coordinate system into a second target coordinate under a second coordinate system according to the conversion rule;
and displaying the virtual part at a second target coordinate of the virtual space, and mapping the gesture information to the virtual part.
4. The method according to claim 2, wherein the acquiring the picture of the target area in real time through a preset virtual camera further comprises:
determining the target virtual object in any one of the following manners:
responding to the triggering operation of the target object on a target control, determining the gaze point of the target object, and taking a virtual object where the gaze point is located as a target virtual object; the target control is a physical control of the head display device or a physical control or a virtual control of an external device of the head display device;
setting a preset interaction area in the virtual space, and taking a virtual object in the preset interaction area as a target virtual object;
responding to the virtual part triggering a virtual object into an activated state, and taking the virtual object in the activated state as the target virtual object;
and determining the virtual object contacted by the virtual part, and taking the virtual object contacted by the virtual part as a target virtual object.
5. The method according to claim 2, wherein the displaying a virtual reality picture in real time within the line of sight of the target object comprises:
setting a display area in the virtual space, wherein the display area is positioned in the sight line range;
and displaying the virtual reality picture in the display area.
6. The method according to claim 5, wherein the acquiring the picture of the target area in real time through a preset virtual camera comprises:
setting a plurality of virtual cameras in the virtual space, and simultaneously acquiring a plurality of virtual reality pictures with different visual angles through the plurality of virtual cameras;
the displaying the virtual reality picture in the display area includes:
if the sight line range comprises a plurality of display areas, displaying a virtual reality picture correspondingly in each display area;
and if the sight line range comprises a display area, splicing the plurality of virtual reality pictures to obtain a spliced virtual reality picture, and displaying the spliced virtual reality picture in the display area.
7. The method of claim 5, wherein the displaying the virtual reality picture in the display area comprises:
and if the virtual reality picture is a two-dimensional image, performing at least one of cropping, changing a projection angle and changing transparency on the virtual reality picture, and displaying the processed virtual reality picture in the display area.
8. The method of claim 5, wherein the displaying the virtual reality picture in the display area further comprises:
if the virtual reality picture is a three-dimensional image, identifying the virtual reality picture, and obtaining three-dimensional model data of the virtual part and the target virtual object and relative pose information between the virtual part and the target virtual object;
generating a three-dimensional model of the virtual part and a three-dimensional model of the target virtual object according to the three-dimensional model data of the virtual part and the target virtual object respectively;
and displaying the three-dimensional model of the virtual part and the three-dimensional model of the target virtual object according to the relative pose information in the display area.
9. A head display device, characterized by comprising:
the acquisition module is used for acquiring real-time pose information of a target part of a target object in a physical space;
the mapping module is used for mapping the pose information to a virtual part in a preset virtual space, and the virtual space comprises a target virtual object; the virtual space is a virtual space displayed by the head display device;
the display module is used for displaying the virtual reality picture in real time in the sight range of the target object; and the virtual reality picture displays the interaction result of the target virtual object based on the real-time pose information of the virtual part in the virtual space.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory, characterized in that the processor executes the computer program to carry out the steps of the method according to any one of claims 1-8.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any of claims 1-8.
CN202311361553.5A 2023-10-19 2023-10-19 Interactive method, head display device, electronic device and readable storage medium Pending CN117453037A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311361553.5A CN117453037A (en) 2023-10-19 2023-10-19 Interactive method, head display device, electronic device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311361553.5A CN117453037A (en) 2023-10-19 2023-10-19 Interactive method, head display device, electronic device and readable storage medium

Publications (1)

Publication Number Publication Date
CN117453037A true CN117453037A (en) 2024-01-26

Family

ID=89592179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311361553.5A Pending CN117453037A (en) 2023-10-19 2023-10-19 Interactive method, head display device, electronic device and readable storage medium

Country Status (1)

Country Link
CN (1) CN117453037A (en)

Similar Documents

Publication Publication Date Title
KR101453815B1 (en) Device and method for providing user interface which recognizes a user's motion considering the user's viewpoint
US9639988B2 (en) Information processing apparatus and computer program product for processing a virtual object
US8751969B2 (en) Information processor, processing method and program for displaying a virtual image
US9268410B2 (en) Image processing device, image processing method, and program
US20220148279A1 (en) Virtual object processing method and apparatus, and storage medium and electronic device
KR101890459B1 (en) Method and system for responding to user's selection gesture of object displayed in three dimensions
US20140075370A1 (en) Dockable Tool Framework for Interaction with Large Scale Wall Displays
CN110968187B (en) Remote touch detection enabled by a peripheral device
EP2558924B1 (en) Apparatus, method and computer program for user input using a camera
Budhiraja et al. Using a HHD with a HMD for mobile AR interaction
Fiorentino et al. Design review of CAD assemblies using bimanual natural interface
CN112068698A (en) Interaction method and device, electronic equipment and computer storage medium
WO2014194148A2 (en) Systems and methods involving gesture based user interaction, user interface and/or other features
CN111860252A (en) Image processing method, apparatus and storage medium
CN113961107B (en) Screen-oriented augmented reality interaction method, device and storage medium
CN115191006B (en) 3D model for displayed 2D elements
CN111913674A (en) Virtual content display method, device, system, terminal equipment and storage medium
CN117130518A (en) Control display method, head display device, electronic device and readable storage medium
Schöning et al. Bimanual interaction with interscopic multi-touch surfaces
EP3088991B1 (en) Wearable device and method for enabling user interaction
CN115480639A (en) Human-computer interaction system, human-computer interaction method, wearable device and head display device
CN117453037A (en) Interactive method, head display device, electronic device and readable storage medium
CN113421343A (en) Method for observing internal structure of equipment based on augmented reality
CN110941389A (en) Method and device for triggering AR information points by focus
Wieland Designing and Evaluating Interactions for Handheld AR

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination