CN118057266A - Method and device for controlling user position in virtual scene - Google Patents

Method and device for controlling user position in virtual scene

Info

Publication number
CN118057266A
Authority
CN
China
Prior art keywords
ray
user
movement
target gesture
virtual scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211462815.2A
Other languages
Chinese (zh)
Inventor
饶小林
方迟
刘硕
刘静薇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202211462815.2A priority Critical patent/CN118057266A/en
Publication of CN118057266A publication Critical patent/CN118057266A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application provides a method and a device for controlling a user's position in a virtual scene. When a target gesture is detected, a ray is displayed in the virtual scene; the start of the ray represents the user's current position and the end of the ray represents the target position after the user moves. When the movement speed of the target gesture is smaller than a preset speed, the end position of the ray is moved according to the movement direction of the target gesture; when the movement speed of the target gesture is greater than or equal to the preset speed, the user in the virtual scene is moved to the end position of the ray. In this way, rapid movement of the user's position in the virtual scene can be controlled by the movement direction and movement speed of the user's gesture in the real scene, no additional controller is needed, operation is convenient, and user experience is improved.

Description

Method and device for controlling user position in virtual scene
Technical Field
The embodiment of the application relates to the technical field of electronic equipment, in particular to a method and a device for controlling a user position in a virtual scene.
Background
Extended Reality (XR) is an umbrella term for technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), in which the real and the virtual are combined by a computer. By integrating the visual interaction technologies of the three, XR gives the experiencer a sense of 'immersion' in which the virtual world and the real world blend with seamless transitions.
In an XR scenario, a user may interact with the XR device through one or more of gaze control, hand-held hardware device (e.g., controller) control, gesture control, wearable device (e.g., wristband) control, voice control, etc., to effect control of virtual objects in a virtual environment corresponding to the XR device, e.g., selecting objects, moving, rotating, resizing, launching controls, changing colors or skin, defining interactions between virtual objects, setting virtual forces to act on virtual objects, etc.
In the prior art, movement of the virtual object corresponding to the user is generally achieved by moving a handheld controller held by the user, which is an inconvenient mode of operation.
Disclosure of Invention
The embodiment of the application provides a method and a device for controlling the position of a user in a virtual scene, which control rapid movement of the user's position in the virtual scene according to the movement direction and movement speed of the user's gesture in a real scene; position movement is achieved without an additional controller, operation is convenient for the user, and user experience is improved.
In a first aspect, an embodiment of the present application provides a method for controlling a user position in a virtual scene, where the method includes:
When a target gesture is detected, displaying rays in a virtual scene, wherein the starting points of the rays represent the current position of a user, and the tail ends of the rays represent the target position of the user after moving;
detecting the movement direction and movement speed of the target gesture;
When the movement speed of the target gesture is smaller than a preset speed, controlling the position movement of the ray according to the movement direction of the target gesture;
and when the movement speed of the target gesture is greater than or equal to the preset speed, controlling the user in the virtual scene to move to the tail end position of the ray.
In some embodiments, the controlling the positional movement of the ray according to the direction of motion of the target gesture includes:
When the movement direction of the target gesture is up and down movement and/or inward and outward rotation, controlling the position of the ray to move far and near; and/or
And when the movement direction of the target gesture is left and right movement and/or left and right rotation, controlling the position of the ray to move left and right.
In some embodiments, the position of the ray is controlled to move far, near, left and right, the position of the start of the ray is unchanged, and the position of the end of the ray is moved.
In some embodiments, the ray is controlled to move far, near, left, and right, the ray having a start position and an end position, the start position of the ray moving a distance less than the end position.
In some embodiments, when a target gesture is detected, a ray is displayed in the virtual scene, including:
and when the target gesture is detected, generating and displaying the ray by taking a preset position in the virtual scene as the starting point of the ray.
In some embodiments, when a target gesture is detected, a ray is displayed in the virtual scene, including:
And when the target gesture is detected, displaying a virtual object in the virtual scene, and generating and displaying the ray by taking a point on the virtual object as a starting point of the ray.
In some embodiments, the virtual object is a virtual gesture corresponding to the target gesture, and the starting point of the ray is a palm position or a fingertip position of the virtual gesture.
In some embodiments, the initial length of the ray is a preset length.
In some embodiments, the direction of extension of the ray is the direction in which the user faces.
In some embodiments, in controlling the position movement of the ray according to the movement direction of the target gesture, further comprising:
the initial position and the moved position of the ray are displayed in a distinguishing manner.
In some embodiments, the target gesture is a palm up and finger pinch gesture, the speed of movement of the target gesture being the speed at which the pinch fingers pop open.
In some embodiments, when the motion speed of the target gesture is greater than or equal to the preset speed, controlling the user to move to the end position of the ray in the virtual scene includes:
When the movement speed is greater than or equal to the preset speed, controlling the user to move to the tail end position of the ray instantaneously;
Or when the movement speed is greater than or equal to the preset speed, controlling the user to move to the end position of the ray according to the preset target speed.
In some embodiments, further comprising: after the user moves to the end position of the ray, the ray is hidden in the virtual scene.
In another aspect, an embodiment of the present application provides a control device for a user location in a virtual scene, where the device includes:
The display module is used for displaying rays in the virtual scene when the target gesture is detected, the starting points of the rays represent the current position of a user, and the tail ends of the rays represent the target position of the user after the user moves;
the detection module is used for detecting the movement direction and the movement speed of the target gesture;
the control module is used for controlling the position movement of the ray according to the movement direction of the target gesture when the movement speed of the target gesture is smaller than a preset speed;
And the control module is further used for controlling the user to move to the tail end position of the ray in the virtual scene when the movement speed of the target gesture is greater than or equal to the preset speed.
On the other hand, an embodiment of the present application provides a control device for a user position in a virtual scene, where the control device for a user position in a virtual scene includes: a processor and a memory for storing a computer program, the processor being for invoking and running the computer program stored in the memory to perform the method as described in any of the above.
In another aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program, the computer program causing a computer to perform the method as set forth in any one of the preceding claims.
In another aspect, embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements a method as claimed in any one of the preceding claims.
According to the method and the device for controlling the position of the user in the virtual scene, when the target gesture is detected, rays are displayed in the virtual scene, the starting points of the rays represent the current position of the user, the tail ends of the rays represent the moved target position of the user, when the moving speed of the target gesture is smaller than the preset speed, the position movement of the rays is controlled according to the moving direction of the target gesture, and when the moving speed of the target gesture is greater than or equal to the preset speed, the user is controlled to move to the tail end position of the rays in the virtual scene. According to the method, the quick movement of the user position in the virtual scene can be controlled according to the movement direction and the movement speed of the user gesture in the real scene, the movement of the user position can be realized without an additional controller, the user operation is convenient, and the user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for controlling a user position in a virtual scene according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an interface of a virtual scene when a user's position movement is triggered;
FIG. 3 is a diagram of interface transformation of a virtual scene when a user's position movement triggers to end;
Fig. 4 is a schematic structural diagram of a control device for a user position in a virtual scene according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a control device for a user position in a virtual scene according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiment of the application provides a control method for a user position in a virtual scene, which can be applied to an XR device, wherein the XR device comprises a VR device, an AR device and an MR device.
VR: a technology for creating and experiencing a virtual world. A virtual environment is computed and generated from multi-source information (the perception referred to here includes at least visual perception, and may also include auditory, tactile, motion, and even gustatory and olfactory perception), providing a fused, interactive three-dimensional dynamic view together with simulation of entity behaviour, so that the user is immersed in the simulated virtual environment. It is applied in virtual environments such as maps, games, video, education, medical treatment, simulation, collaborative training, sales, assistance in manufacturing, and maintenance and repair.
VR devices are terminals for realizing virtual reality effects and are generally provided in the form of glasses, head-mounted displays (Head Mounted Display, HMD), or contact lenses for realizing visual perception and other forms of perception; the form of the virtual reality device is not limited thereto and it can be further miniaturized or enlarged as needed.
AR: an AR setting refers to a simulated setting in which at least one virtual object is superimposed over a physical setting or a representation thereof. For example, an electronic system may have an opaque display and at least one imaging sensor for capturing images or video of the physical setting, which serve as representations of the physical setting. The system combines the images or video with virtual objects and displays the combination on the opaque display. An individual uses the system to view the physical setting indirectly via the images or video and observes the virtual objects superimposed over the physical setting. When the system captures images of the physical setting with one or more image sensors and uses those images to present the AR setting on an opaque display, the displayed images are called video pass-through. Alternatively, an electronic system for displaying an AR setting may have a transparent or translucent display through which the individual views the physical setting directly; such a system displays virtual objects on the transparent or translucent display so that the individual sees them superimposed over the physical setting. As another example, the system may include a projection system that projects virtual objects into the physical setting, for example onto a physical surface or as a hologram, so that the individual sees them superimposed over the physical scene. More specifically, AR is a technology that, while a camera captures images, computes the camera's pose parameters in the real world (or three-dimensional world) in real time and adds virtual elements to the captured images according to those pose parameters. Virtual elements include, but are not limited to, images, videos, and three-dimensional models. The goal of AR technology is to overlay the virtual world on the real world on the screen for interaction.
MR: by presenting virtual scene information in a real scene, an interactive feedback loop is established among the real world, the virtual world and the user, enhancing the realism of the user experience. For example, computer-created sensory input (e.g., virtual objects) is integrated with sensory input from a physical setting or a representation thereof in a simulated setting; in some MR settings the computer-created sensory input may adapt to changes in the sensory input from the physical setting. In addition, some electronic systems for rendering MR settings may monitor orientation and/or position relative to the physical setting so that virtual objects can interact with real objects (i.e., physical elements from the physical setting or representations thereof). For example, the system may monitor movement so that a virtual plant appears stationary relative to a physical building.
A virtual scene is a virtual scene that an application program displays (or provides) when running on an electronic device. The virtual scene may be a simulation environment for the real world, a semi-simulation and semi-fictional virtual scene, or a pure fictional virtual scene. The virtual scene may be any one of a two-dimensional (2D) virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional (3D) virtual scene, and the dimensions of the virtual scene are not limited in the embodiment of the present application. For example, a virtual scene may include sky, land, sea, etc., the land may include environmental elements of a desert, city, etc., and a user may control a virtual object to move in the virtual scene.
Virtual field of view: the region of the virtual environment that the user can perceive through the lenses of the virtual reality device, expressed by its field of view (FOV); the point from which it is perceived may be referred to as the virtual viewpoint.
Virtual object: an object that can be interacted with in the virtual scene. Under the control of a user or of a robot program (e.g., an artificial-intelligence-based robot program), it can remain stationary, move, and perform various actions in the virtual scene, such as the various characters in a game.
The virtual field of view changes constantly, and in the prior art the change of the user's virtual field of view is typically controlled by the controller of the XR device. Taking an HMD as an example, a sensor (such as a nine-axis sensor) is provided in the HMD to detect its posture change in real time. When the user wears the HMD and the posture of the user's head changes, the real-time head posture is transmitted to the processor, which calculates the gaze point of the user's line of sight in the virtual environment, computes from the gaze point the image within the user's gaze range (i.e., the virtual field of view) in the three-dimensional model of the virtual environment, and displays that image on the display screen, so that the user feels as if viewing the real environment.
The change of the virtual field of view can be understood as the change of the position of the user, and the change of the virtual field of view can lead to the change of the virtual scene seen by the user, so that the movement of the position of the user can be understood as the movement of the virtual field of view in the embodiment of the application. In some scenarios, a virtual object corresponding to a user is displayed in the virtual scene, and accordingly, movement of the user position may be represented as movement of the virtual object in the virtual scene.
In the embodiment of the application, the virtual scene can be called as a virtual environment, and the real scene can be a real environment or an artificial reality environment.
An embodiment of the present application provides a method for controlling a user position in a virtual scene, and fig. 1 is a flowchart of the method for controlling a user position in a virtual scene provided by the embodiment of the present application, where an execution body is an XR device, and as shown in fig. 1, the method for controlling a user position in a virtual scene includes the following steps:
S101, when a target gesture is detected, displaying rays in the virtual scene, wherein the starting points of the rays represent the current position of a user, and the tail ends of the rays represent the target position after the user moves.
The XR device may monitor the user's hand pose in the real scene, detecting it continuously or periodically. A gesture may be understood as the shape formed by the user's hand in the real scene. In one implementation, the user's gesture may be recognized from images that capture the user's hand, acquired by the XR device's own camera or by an external camera. In another implementation, the gesture may be recognized from the input of a wearable device, such as a glove or wristband that tracks the movement of the user's hand. Optionally, whether the gesture is identified from camera images or from motion data collected by the wearable device, the collected data can be fed into a machine learning model trained in advance for gesture recognition, which improves the accuracy of recognition.
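As a non-limiting illustration of the recognition step, the sketch below (in Python) checks for a palm-up, finger-pinch target gesture directly from tracked hand landmarks; the joint names, the coordinate convention and the 2 cm pinch threshold are assumptions made for this example only, and a trained machine learning model could replace this heuristic as described above.

```python
import numpy as np

PINCH_DISTANCE_M = 0.02  # assumed threshold: thumb and index tips within 2 cm

def is_pinch_palm_up(landmarks: dict) -> bool:
    """Heuristic check for a 'palm up, fingers pinched' target gesture.

    `landmarks` maps joint names to 3D positions in metres, e.g. the output
    of a hand-tracking module; the joint names used here are assumptions.
    """
    thumb_tip = np.asarray(landmarks["thumb_tip"], dtype=float)
    index_tip = np.asarray(landmarks["index_tip"], dtype=float)
    palm_normal = np.asarray(landmarks["palm_normal"], dtype=float)  # unit vector out of the palm

    pinched = np.linalg.norm(thumb_tip - index_tip) < PINCH_DISTANCE_M
    palm_up = palm_normal[1] > 0.5  # y-axis assumed to point up in world space
    return pinched and palm_up
```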
The target gesture is used to trigger a position movement of the user, also referred to as position transfer, i.e. when the target gesture is detected to occur in a real scene, it indicates that the user is about to start a position movement.
The target gesture may be a gesture made with one hand, such as a clenched fist, a finger pinch, an "OK" gesture, a "thumbs-up" gesture, etc., or a gesture made with both hands together, such as two fists brought together, palms pressed together, both hands raised, both hands lowered, etc. These are merely examples; the embodiment of the present application does not limit the specific form of the target gesture.
After the target gesture is detected, a ray is displayed in the virtual scene. The ray can be used to notify the user that position movement has been triggered; the user may, however, also be prompted in other ways, for example by text displayed in a blank or fixed position of the virtual scene, or by a voice announcement that position movement has been triggered.
The ray also represents the positions before and after the user moves: the starting point of the ray represents the user's current position, and the end of the ray represents the target position after the move. It should be understood that the starting point of the ray is not the user's actual current location; the ray is only a schematic indication of the change of the user's position. A change of the ray's end position represents a change of the target position, and the user can keep adjusting the target position as needed.
The color and pattern of the ray are not limited in this embodiment, and the ray pattern includes line thickness, solid line, broken line, and the like.
Alternatively, an arrow may be added at the end of the ray, by which the start and end of the ray are distinguished, or other patterns may be added at the end of the ray, such as a circular cursor, solid dots, to distinguish the start and end of the ray.
In one implementation, when a target gesture is detected, a ray is generated and displayed with a preset position in the virtual scene as a starting point of the ray.
The preset position may be any position in the virtual scene, for example, the preset position is a position near the middle of the bottom of the virtual scene, or a blank position in the virtual scene, and may also be a position at the lower left corner or the lower right corner of the virtual scene.
In another implementation, when a target gesture is detected, a virtual object is displayed in the virtual scene, and a ray is generated and displayed with a point on the virtual object as a starting point of the ray.
The virtual object may be a virtual gesture corresponding to a target gesture, an arbitrary gesture, a cartoon image, an eye, a five-pointed star, a circle, or other objects, which is not limited in this embodiment.
The display position of the virtual object may be any position in the virtual scene, for example, the display position of the virtual object is a position near the middle of the bottom of the virtual scene, or a blank position in the virtual scene, and may also be a position at the lower left corner or the lower right corner of the virtual scene.
When the virtual object is a virtual gesture corresponding to the target gesture, the starting point of the ray may be any point on the virtual gesture, for example, a palm center, a fingertip position, and the fingertip position may be a fingertip position of any one finger, for example, a middle finger fingertip position or a thumb fingertip position.
In addition to determining the origin of the ray, the XR device also needs to determine the initial length and direction of extension of the ray. Alternatively, the initial length of the ray may be a preset length, which may be configured by the system or may be set by the user.
The extending direction of the ray may be a preset direction, which is configured by the system, or may be set by the user. The extending direction of the ray may also be related to the current direction of the user, for example, the extending direction of the ray is set to the direction in which the user faces, or to the front left or front right of the user, which is not limited by the present embodiment.
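As a rough sketch of how the initially displayed ray could be represented, assuming the user's facing direction is available as a vector and the preset length is configured by the system (the value 3.0 below is an arbitrary assumption):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Ray:
    start: np.ndarray  # start point: schematically represents the user's current position
    end: np.ndarray    # end point: represents the target position after the move

def make_initial_ray(start_point, facing_dir, preset_length=3.0) -> Ray:
    """Build the ray shown when the target gesture is first detected.

    `preset_length` (in scene units) and the choice of the user's facing
    direction as the extension direction are assumptions for illustration.
    """
    direction = np.asarray(facing_dir, dtype=float)
    direction /= np.linalg.norm(direction)
    start = np.asarray(start_point, dtype=float)
    return Ray(start=start, end=start + preset_length * direction)
```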
Fig. 2 is a schematic interface diagram of the virtual scene when the user's position movement is triggered. As shown in fig. 2, when the user's gesture is detected to be a palm-up and finger-pinch gesture, the gesture is displayed in the virtual scene and a ray is emitted with the finger-pinch position as its starting point; the end of the ray is the end bearing a circular cursor, and the position of the circular cursor is the target position after the user moves.
S102, detecting the movement direction and movement speed of the target gesture.
After the position movement is triggered, the user may hold the target gesture and move and/or rotate the position of the hand, adjusting the position of the ray by the change in direction of the hand and the speed of movement.
The movement direction and movement speed of the target gesture may be the movement direction and movement speed of the hand making the gesture. They may be detected from images acquired by the XR device's own camera or by an external camera, or from the input of a wearable device, for example a glove or wristband that tracks the movement of the user's hand, in which case the direction and speed are derived from the data collected by the wearable device. The movement direction and movement speed may also be detected by combining the image data acquired by the camera with the sensor data collected by a wearable device worn on the hand. Optionally, the movement speed of the target gesture may also be the speed at which a finger of the target gesture flicks open.
The movement of the target gesture comprises movement and/or rotation, and correspondingly, the movement direction of the target gesture comprises movement direction and/or rotation direction, and the movement speed of the target gesture comprises movement speed and/or rotation speed. Illustratively, the direction of movement is, for example, upward, downward, leftward, rightward, etc., and the direction of rotation is, for example, inward rotation, outward rotation, leftward rotation, rightward rotation, etc.
When the movement direction of the target gesture is a movement direction, the movement direction of the target gesture may be a single direction, for example, the movement direction of the target gesture is an upward movement. The movement direction of the target gesture may also be a plurality of directions, for example, the movement direction of the target gesture is an upward right movement, and in this case, the movement direction may be understood as two directions: move up and right.
When the movement direction of the target gesture is a rotation direction, the rotation direction is relative to a rotation point, which may be the wrist, at which time the user keeps the wrist stationary, the hand rotates inward, outward, left or right, and so on. The rotation point may also be the elbow, where the user holds the elbow stationary, the hand rotates inward, outward, left or right, etc.
Optionally, the movement direction of the target gesture may also be a direction towards the user and a direction away from the user, where the direction towards the user refers to a direction towards the face of the user, and the direction away from the user refers to a direction away from the face of the user, and the directions towards the user and away from the user are opposite.
It will be appreciated that, in actual operation, it is difficult for the user to move or rotate the hand in only one direction while making the target gesture; in that case a main direction may be selected as the movement direction used to control the movement of the ray's position, or a combination of several directions may be used to control it.
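One possible way to derive a speed and a single main direction from consecutive hand positions is sketched below; choosing the dominant axis as the main direction is only one of the options mentioned above, and the coordinate convention (x to the user's right, y up, z forward) is an assumption.

```python
import numpy as np

def estimate_motion(prev_pos, curr_pos, dt):
    """Estimate the hand's movement speed and a single main direction.

    `prev_pos` and `curr_pos` are 3D hand positions (e.g. the palm centre)
    from consecutive tracking frames; `dt` is the time between them in seconds.
    """
    delta = np.asarray(curr_pos, dtype=float) - np.asarray(prev_pos, dtype=float)
    speed = float(np.linalg.norm(delta)) / dt
    axis = int(np.argmax(np.abs(delta)))  # dominant axis chosen as the main direction
    names = {0: ("right", "left"), 1: ("up", "down"), 2: ("away", "toward")}
    main_direction = names[axis][0 if delta[axis] >= 0 else 1]
    return speed, main_direction
```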
And S103, when the movement speed of the target gesture is smaller than a preset speed, controlling the position movement of the ray according to the movement direction of the target gesture.
Optionally, it is judged whether the movement speed of the target gesture is smaller than the preset speed; if so, the position movement of the ray is controlled according to the movement direction of the target gesture. If the movement speed of the user's gesture is greater than or equal to the preset speed, the user in the virtual scene is controlled to move to the end position of the ray; in that case the ray's position has not been adjusted, so the position the user moves to is the initial end position of the ray, i.e. the user moves to a default position.
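Putting the two branches together, a per-update sketch of the decision in steps S103 and S104 might look as follows; `adjust_ray_end` and `commit_move` are hypothetical helper names standing in for the behaviours detailed below, `ray` is any object with an `end` attribute (such as the Ray sketch above), and the 1.5 m/s threshold is an assumed value.

```python
PRESET_SPEED = 1.5  # assumed threshold in metres per second

def on_gesture_motion(speed, main_direction, ray, user_state, adjust_ray_end, commit_move):
    """Per-update decision corresponding to steps S103 and S104."""
    if speed < PRESET_SPEED:
        # S103: slow motion only repositions the ray, i.e. the candidate target position.
        adjust_ray_end(ray, main_direction)
    else:
        # S104: fast motion confirms the move; the user is sent to the ray's end,
        # which is still the default/initial end if the ray was never adjusted.
        commit_move(user_state, ray.end)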
In this embodiment, the user may keep the target posture unchanged, then slowly lift his/her hands up and down or swing his/her hands left and right, and change the position of the ray, thereby changing the target position of the user.
For example, when the movement direction of the target gesture is up-and-down movement and/or inward-and-outward rotation, the position of the ray is controlled to move farther or nearer; and/or when the movement direction of the target gesture is left-and-right movement and/or left-and-right rotation, the position of the ray is controlled to move left or right.
For example, when the user keeps the target posture to raise his/her hand upward, or when the user keeps the target posture to rotate outward, the position of the ray moves far; when the user keeps the target posture and lifts hands downwards, or when the user keeps the target posture and rotates inwards, the position of the ray moves proximally; when the user keeps the target gesture to swing to the right, or when the user keeps the target gesture to rotate to the right, the position of the ray moves to the right; when the user keeps the target posture to swing the hand to the left, or when the user keeps the target posture to rotate to the left, the position of the ray moves to the left.
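A sketch of one possible mapping from the gesture's main direction to the movement of the ray's end, following the examples above; the step size and the correspondence between hand direction and scene axes are assumptions.

```python
import numpy as np

STEP = 0.1  # assumed distance (in scene units) the ray end moves per update

def moved_ray_end(end, main_direction):
    """Return the new end position; the start of the ray is left unchanged.

    Axis convention (x to the user's right, z straight ahead) and the
    direction-to-offset mapping below are assumptions for illustration.
    """
    offsets = {
        "up":    np.array([0.0, 0.0,  STEP]),   # raise hand / rotate outward -> end moves farther
        "down":  np.array([0.0, 0.0, -STEP]),   # lower hand / rotate inward  -> end moves nearer
        "right": np.array([ STEP, 0.0, 0.0]),   # swing or rotate right       -> end moves right
        "left":  np.array([-STEP, 0.0, 0.0]),   # swing or rotate left        -> end moves left
    }
    return np.asarray(end, dtype=float) + offsets.get(main_direction, np.zeros(3))
```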
For another example, when the direction of movement of the target gesture is toward the user, the position of the ray moves nearer, and when it is away from the user, the position of the ray moves farther. In this scenario, when determining the movement direction of the user's target gesture, it is not necessary to distinguish whether the hand's motion is a translation, a rotation, or a combination of the two: whether the user translates the target gesture, rotates it, or does both at once, the movement direction can be classified simply as toward the user or away from the user.
The correspondence between the motion direction of the target gesture and the motion direction of the ray may be set according to the requirement of the user, and is not limited to the above-mentioned correspondence.
In one exemplary manner, as the position of the control ray is moved far, near, left, and right, the starting position of the ray is unchanged and the end position of the ray is moved. I.e. the current position of the user is kept unchanged, and only the target position of the user is changed.
In another exemplary manner, when the position of the control ray is moved far, near, left, and right, both the start position and the end position of the ray are moved, but the movement distance of the start position of the ray is smaller than the movement distance of the end position. I.e. the distance of movement of the starting position of the ray has a proportional relation to the distance of movement of the end position of the ray, which is for example 1:3, i.e. the end position of the ray is moved 3 cm and the starting position is moved 1 cm. It will be appreciated that in this manner, although the origin position of the ray is moved, the actual position of the user is not moved.
In one implementation, the distance that the end of the ray moves is related to the distance that the target gesture moves, for example, the greater the distance that the target gesture moves, the greater the distance that the end of the ray moves, the smaller the distance that the target gesture moves, and the smaller the distance that the end of the ray moves, which may be in a linear relationship or a nonlinear relationship, which is not limited by the present embodiment.
In another implementation, the distance that the end of the ray moves is independent of the distance that the target gesture moves, and when the target gesture movement is detected each time, the end of the ray is controlled to move a preset distance, no matter how far the target gesture moves, the end of the ray moves by a fixed distance, and the user can move the end of the ray to the target position required by the user through moving the hand for multiple times.
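The two relationships between hand displacement and end displacement described above could be expressed, for instance, as follows; the gain and fixed step below are assumed values, and a nonlinear relation would work equally well.

```python
def end_move_distance(hand_distance: float, proportional: bool = True,
                      gain: float = 3.0, fixed_step: float = 0.1) -> float:
    """How far the ray end moves for a given hand movement distance.

    In the proportional variant the end moves `gain` times the hand distance
    (a linear relation). In the fixed-step variant every detected movement
    shifts the end by a constant amount, regardless of how far the hand moved.
    """
    return gain * hand_distance if proportional else fixed_step
```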
Optionally, in the process of controlling the position movement of the ray, the initial position of the ray and the moved position can be distinguished and displayed, where the initial position of the ray refers to the position of the ray when the ray is triggered to be displayed for the first time by the target gesture.
For example, the initial position of the ray and the moved position are distinguished by rays of different colors, the initial position of the ray may be represented by rays of black, and the moved position of the ray may be represented by rays of red. The initial position of the ray and the moved position can also be distinguished by different line lines, for example, the initial position of the ray is shown by a solid line, and the moved position of the ray is shown by a broken line.
In an actual scene, the user may adjust the position of the ray by moving the hand multiple times, and it can be understood that the initial position of the ray remains unchanged after each adjustment, and only the adjusted position of the ray is moved.
And S104, when the movement speed of the target gesture is greater than or equal to a preset speed, controlling a user in the virtual scene to move to the tail end position of the ray.
After the end of the ray has been adjusted to its final position, the user can keep the target gesture unchanged and quickly move the hand in any direction; the detected movement speed of the target gesture is then greater than or equal to the preset speed, so that the user's position movement (the teleportation) takes effect.
Alternatively, when the target gesture is a palm-up and finger-pinch gesture, the pinched fingers may be flicked open rapidly to achieve the rapid hand movement; the detected movement speed of the target gesture is then the speed at which the pinched fingers flick open.
For example, when the movement speed is greater than or equal to the preset speed, the user may be controlled to move instantaneously to the end position of the ray. In this mode the user generally does not perceive the movement process; that is, from the user's perspective, only the viewing angle before the move and the viewing angle after the move are perceived.
Optionally, when the movement speed is greater than or equal to the preset speed, the user may instead be controlled to move to the end position of the ray at a preset target speed. Movement at the target speed is usually slow enough that the user perceives the movement process, and the user's viewing angle changes continuously while the position moves.
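The two ways of committing the move could be sketched as follows; the per-frame update implied by `smooth_step` and the use of a simple constant-speed interpolation are assumptions for illustration.

```python
import numpy as np

def teleport(target_pos):
    """Instantaneous move: only the views before and after the move are perceived."""
    return np.asarray(target_pos, dtype=float)

def smooth_step(user_pos, target_pos, target_speed, dt):
    """One frame of movement toward the ray's end at a preset target speed,
    so the viewing angle changes continuously during the move."""
    user_pos = np.asarray(user_pos, dtype=float)
    target_pos = np.asarray(target_pos, dtype=float)
    offset = target_pos - user_pos
    dist = float(np.linalg.norm(offset))
    step = target_speed * dt
    return target_pos.copy() if dist <= step else user_pos + offset * (step / dist)
```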
Optionally, after the user moves to the end position of the ray, the ray is hidden in the virtual scene; hiding the ray indicates that the user's position movement has ended. Optionally, the user may also be prompted by text and/or voice in the virtual scene that the position movement has ended, which is not limited by the embodiment of the present application. If, besides the ray, other related virtual objects are displayed in the virtual scene, such as the virtual gesture corresponding to the target gesture, those virtual objects are also hidden after the user moves to the end position of the ray; that is, everything introduced on the interface by the triggering of the position movement is hidden.
After the position movement ends, the XR device can continue to detect the user's gestures. If a gesture is detected that is not the target gesture, the device simply returns to detecting gestures; when the target gesture is detected again, the position-movement flow is triggered anew.
Fig. 3 is a schematic diagram of the interface changes of the virtual scene from the moment the user's position movement is triggered until it ends; as shown in fig. 3, the whole process is divided into four stages. Stage a: when the user's gesture is detected to be a palm-up and finger-pinch gesture, the gesture is displayed in the virtual scene and a ray is emitted with the finger-pinch position as its starting point. Stage b: when the user moves the hand toward the upper right while holding the gesture, ray 1 in fig. 3 is the ray's initial position and ray 2 is the position of ray 1 after the move. Stage c: after moving the end of the ray to the target position, the user keeps the gesture unchanged and quickly moves the hand in any direction (in fig. 3, quickly to the right), and the position movement takes effect. Stage d: after the user moves from the current position to the target position corresponding to the end of the ray, the ray disappears and a character image is displayed at the end of the ray, i.e. at the position of the circular cursor, to indicate that the user has moved there (not shown in fig. 3). Optionally, after the ray is hidden, the user's gesture and the circular cursor are also hidden in turn; that is, everything introduced on the interface by the triggering of the position movement is hidden.
In this embodiment, when a target gesture is detected, a ray is displayed in a virtual scene, a starting point of the ray represents a current position of a user, an end of the ray represents a target position after the user moves, when the movement speed of the target gesture is less than a preset speed, the position movement of the ray is controlled according to the movement direction of the target gesture, and when the movement speed of the target gesture is greater than or equal to the preset speed, the user is controlled to move to the end position of the ray in the virtual scene. According to the method, the quick movement of the user position in the virtual scene can be controlled according to the movement direction and the movement speed of the user gesture in the real scene, the movement of the user position can be realized without an additional controller, the user operation is convenient, and the user experience is improved.
In order to facilitate better implementation of the method for controlling the user position in the virtual scene in the embodiment of the application, the embodiment of the application also provides a device for controlling the user position in the virtual scene. Fig. 4 is a schematic structural diagram of a device for controlling a user position in a virtual scene according to an embodiment of the present application, as shown in fig. 4, the device 100 for controlling a user position in a virtual scene may include:
A display module 11, configured to display a ray in a virtual scene when a target gesture is detected, where a start point of the ray represents a current position of a user, and an end point of the ray represents a target position after the user moves;
A detection module 12, configured to detect a movement direction and a movement speed of the target gesture;
The control module 13 is used for controlling the position movement of the ray according to the movement direction of the target gesture when the movement speed of the target gesture is smaller than a preset speed;
the control module 13 is further configured to control the user to move to the end position of the ray in the virtual scene when the movement speed of the target gesture is greater than or equal to the preset speed.
In some embodiments, the control module 13 is specifically configured to:
When the movement direction of the target gesture is up and down movement and/or inward and outward rotation, controlling the tail end position of the ray to move far and near; and/or
And when the movement direction of the target gesture is left and right movement and/or left and right rotation, controlling the tail end position of the ray to move left and right.
In some embodiments, the position of the ray is controlled to move far, near, left and right, the position of the start of the ray is unchanged, and the position of the end of the ray is moved.
In some embodiments, the ray is controlled to move far, near, left, and right, the ray having a start position and an end position, the start position of the ray moving a distance less than the end position.
In some embodiments, the display module 11 is specifically configured to:
and when the target gesture is detected, generating and displaying the ray by taking a preset position in the virtual scene as the starting point of the ray.
In other embodiments, the display module 11 is specifically configured to:
And when the target gesture is detected, displaying a virtual object in the virtual scene, and generating and displaying the ray by taking a point on the virtual object as a starting point of the ray.
In some embodiments, the virtual object is a virtual gesture corresponding to the target gesture, and the starting point of the ray is a palm position or a fingertip position of the virtual gesture.
In some embodiments, the initial length of the ray is a preset length.
In some embodiments, the direction of extension of the ray is the direction in which the user faces.
In some embodiments, the display module 11 is further configured to:
the initial position and the moved position of the ray are displayed in a distinguishing manner.
In some embodiments, the target gesture is a palm up and finger pinch gesture, the speed of movement of the target gesture being the speed at which the pinch fingers pop open.
In some embodiments, the control module 13 is specifically configured to:
When the movement speed is greater than or equal to the preset speed, controlling the user to move to the tail end position of the ray instantaneously;
Or when the movement speed is greater than or equal to the preset speed, controlling the user to move to the end position of the ray according to the preset target speed.
In some embodiments, the apparatus 100 further comprises:
And the hiding module is used for hiding the ray in the virtual scene after the user moves to the tail end position of the ray.
It should be understood that apparatus embodiments and method embodiments may correspond with each other and that similar descriptions may refer to the method embodiments. To avoid repetition, no further description is provided here.
The apparatus 100 of the embodiment of the present application is described above from the perspective of the functional module in conjunction with the accompanying drawings. It should be understood that the functional module may be implemented in hardware, or may be implemented by instructions in software, or may be implemented by a combination of hardware and software modules. Specifically, each step of the method embodiment in the embodiment of the present application may be implemented by an integrated logic circuit of hardware in a processor and/or an instruction in a software form, and the steps of the method disclosed in connection with the embodiment of the present application may be directly implemented as a hardware decoding processor or implemented by a combination of hardware and software modules in the decoding processor. Alternatively, the software modules may be located in a well-established storage medium in the art such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, and the like. The storage medium is located in a memory, and the processor reads information in the memory, and in combination with hardware, performs the steps in the above method embodiments.
The embodiment of the application also provides a control device for the user position in the virtual scene. Fig. 5 is a schematic structural diagram of a control device for a user position in a virtual scene according to an embodiment of the present application, and as shown in fig. 5, a control device 200 for a user position in a virtual scene may include:
A memory 21 and a processor 22, the memory 21 being adapted to store a computer program and to transfer the program code to the processor 22. In other words, the processor 22 may call and run a computer program from the memory 21 to implement the method in an embodiment of the present application.
For example, the processor 22 may be configured to perform the above-described method embodiments according to instructions in the computer program.
In some embodiments of the present application, the processor 22 may include, but is not limited to:
A general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like.
In some embodiments of the present application, the memory 21 includes, but is not limited to:
Volatile memory and/or nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct memory bus RAM (DR RAM).
In some embodiments of the application, the computer program may be split into one or more modules that are stored in the memory 21 and executed by the processor 22 to perform the methods provided by the application. The one or more modules may be a series of computer program instruction segments capable of performing particular functions for describing the execution of the computer program in a control device for the location of a user in the virtual scene.
As shown in fig. 5, the control device for a user location in the virtual scene may further include: a transceiver 23, the transceiver 23 being connectable to the processor 22 or the memory 21.
The processor 22 may control the transceiver 23 to communicate with other devices, and in particular, may send information or data to other devices or receive information or data sent by other devices. The transceiver 23 may include a transmitter and a receiver. The transceiver 23 may further include antennas, the number of which may be one or more.
It can be appreciated that, although not shown in fig. 5, the control device 200 for a user position in a virtual scene may further include a camera module, a WIFI module, a positioning module, a Bluetooth module, a display, a controller, and the like, which are not described herein.
It will be appreciated that the various components in the control device for the user's location in the virtual scene are connected by a bus system comprising, in addition to a data bus, a power bus, a control bus and a status signal bus.
The present application also provides a computer storage medium having stored thereon a computer program which, when executed by a computer, enables the computer to perform the method of the above-described method embodiments. Alternatively, embodiments of the present application also provide a computer program product comprising instructions which, when executed by a computer, cause the computer to perform the method of the method embodiments described above.
The present application also provides a computer program product comprising a computer program stored in a computer readable storage medium. The processor of the control device for the user position in the virtual scene reads the computer program from the computer readable storage medium, and the processor executes the computer program, so that the control device for the user position in the virtual scene executes a corresponding flow in the control method for the user position in the virtual scene in the embodiment of the present application, which is not described herein for brevity.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. For example, functional modules in various embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
The above is only a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (17)

1. The method for controlling the user position in the virtual scene is characterized by comprising the following steps:
When a target gesture is detected, displaying rays in a virtual scene, wherein the starting points of the rays represent the current position of a user, and the tail ends of the rays represent the target position of the user after moving;
detecting the movement direction and movement speed of the target gesture;
When the movement speed of the target gesture is smaller than a preset speed, controlling the position movement of the ray according to the movement direction of the target gesture;
and when the movement speed of the target gesture is greater than or equal to the preset speed, controlling the user in the virtual scene to move to the tail end position of the ray.
2. The method of claim 1, wherein controlling the positional movement of the ray according to the direction of motion of the target gesture comprises:
When the movement direction of the target gesture is up and down movement and/or inward and outward rotation, controlling the position of the ray to move far and near; and/or
And when the movement direction of the target gesture is left and right movement and/or left and right rotation, controlling the position of the ray to move left and right.
3. The method of claim 2, wherein the position of the ray is controlled to move far, near, left, and right while the position of the start of the ray is unchanged and the position of the end of the ray is moved.
4. The method according to claim 2, wherein the start position and the end position of the ray are moved when the position of the ray is controlled to move far, near, left, and right, and the movement distance of the start position of the ray is smaller than the movement distance of the end position.
5. The method of claim 1, wherein displaying rays in the virtual scene when the target gesture is detected comprises:
and when the target gesture is detected, generating and displaying the ray by taking a preset position in the virtual scene as the starting point of the ray.
6. The method of claim 1, wherein displaying rays in the virtual scene when the target gesture is detected comprises:
And when the target gesture is detected, displaying a virtual object in the virtual scene, and generating and displaying the ray by taking a point on the virtual object as a starting point of the ray.
7. The method of claim 6, wherein the virtual object is a virtual gesture corresponding to the target gesture, and the origin of the ray is a palm position or a fingertip position of the virtual gesture.
8. The method of claim 5 or 6, wherein the initial length of the ray is a preset length.
9. The method of claim 7, wherein the direction of extension of the ray is the direction the user is facing.
10. The method of any of claims 1-7, further comprising, during controlling the positional movement of the ray in accordance with the direction of motion of the target gesture:
the initial position and the moved position of the ray are displayed in a distinguishing manner.
11. The method of any of claims 1-7, wherein the target gesture is a palm up and finger pinch gesture, and the speed of movement of the target gesture is the speed at which the pinch finger pops open.
12. The method of any of claims 1-7, wherein controlling the user in the virtual scene to move to the end position of the ray when the speed of movement of the target gesture is greater than or equal to the preset speed comprises:
When the movement speed is greater than or equal to the preset speed, controlling the user to move to the tail end position of the ray instantaneously;
Or when the movement speed is greater than or equal to the preset speed, controlling the user to move to the end position of the ray according to the preset target speed.
13. The method of any one of claims 1-7, further comprising:
After the user moves to the end position of the ray, the ray is hidden in the virtual scene.
14. A control device for a user position in a virtual scene, comprising:
The display module is used for displaying rays in the virtual scene when the target gesture is detected, the starting points of the rays represent the current position of a user, and the tail ends of the rays represent the target position of the user after the user moves;
the detection module is used for detecting the movement direction and the movement speed of the target gesture;
the control module is used for controlling the position movement of the ray according to the movement direction of the target gesture when the movement speed of the target gesture is smaller than a preset speed;
And the control module is further used for controlling the user to move to the tail end position of the ray in the virtual scene when the movement speed of the target gesture is greater than or equal to the preset speed.
15. A control apparatus for a user position in a virtual scene, comprising:
A processor and a memory for storing a computer program, the processor being for invoking and running the computer program stored in the memory to perform the method of any of claims 1 to 13.
16. A computer readable storage medium storing a computer program for causing a computer to perform the method of any one of claims 1 to 13.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1 to 13.
CN202211462815.2A 2022-11-21 2022-11-21 Method and device for controlling user position in virtual scene Pending CN118057266A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211462815.2A CN118057266A (en) 2022-11-21 2022-11-21 Method and device for controlling user position in virtual scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211462815.2A CN118057266A (en) 2022-11-21 2022-11-21 Method and device for controlling user position in virtual scene

Publications (1)

Publication Number Publication Date
CN118057266A true CN118057266A (en) 2024-05-21

Family

ID=91069200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211462815.2A Pending CN118057266A (en) 2022-11-21 2022-11-21 Method and device for controlling user position in virtual scene

Country Status (1)

Country Link
CN (1) CN118057266A (en)

Similar Documents

Publication Publication Date Title
US11003307B1 (en) Artificial reality systems with drawer simulation gesture for gating user interface elements
CN113853575A (en) Artificial reality system with sliding menu
KR20220012990A (en) Gating Arm Gaze-Driven User Interface Elements for Artificial Reality Systems
CN114766038A (en) Individual views in a shared space
US11086475B1 (en) Artificial reality systems with hand gesture-contained content window
CN113826058A (en) Artificial reality system with self-tactile virtual keyboard
EP3106963B1 (en) Mediated reality
US10921879B2 (en) Artificial reality systems with personal assistant element for gating user interface elements
KR20230117639A (en) Methods for adjusting and/or controlling immersion associated with user interfaces
US11043192B2 (en) Corner-identifiying gesture-driven user interface element gating for artificial reality systems
WO2017021587A1 (en) Sharing mediated reality content
CN113785262A (en) Artificial reality system with finger mapping self-touch input method
AU2024200190A1 (en) Presenting avatars in three-dimensional environments
US10852839B1 (en) Artificial reality systems with detachable personal assistant for gating user interface elements
US20180059812A1 (en) Method for providing virtual space, method for providing virtual experience, program and recording medium therefor
US11209903B2 (en) Rendering of mediated reality content
KR20230037054A (en) Systems, methods, and graphical user interfaces for updating a display of a device relative to a user's body
GB2590718A (en) Mediated reality
CN118057266A (en) Method and device for controlling user position in virtual scene
KR20150044243A (en) Electronic learning apparatus and method for controlling contents by hand avatar
EP3534241A1 (en) Method, apparatus, systems, computer programs for enabling mediated reality
CN117994839A (en) Gesture recognition method, gesture recognition device, gesture recognition apparatus, gesture recognition medium, and gesture recognition program
CN118051118A (en) Gesture recognition method, gesture recognition device, gesture recognition apparatus, gesture recognition medium, and gesture recognition program
CN117930983A (en) Display control method, device, equipment and medium
CN117687499A (en) Virtual object interaction processing method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination