KR20150059099A - Method for predicting human motion in virtual environment and apparatus thereof - Google Patents
Method for predicting human motion in virtual environment and apparatus thereof
- Publication number
- KR20150059099A (application number KR1020140152182A)
- Authority
- KR
- South Korea
- Prior art keywords
- user
- virtual environment
- motion
- information
- posture
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/251—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Embodiments of the present invention relate to a method and apparatus for predicting a user's motion in a virtual environment. An apparatus for predicting a user's motion in a virtual environment according to an embodiment of the present invention includes a motion tracking module for estimating a posture of the user at the current time point based on at least one sensor data and a pre-learned motion model; and a motion model module for predicting a set of possible postures of the user at the next time point based on the motion model, the estimated posture of the user at the current time point, and the virtual environment context information at the next time point. According to the embodiments of the present invention, the sense of immersion in the virtual environment can be maximized.
Description
Embodiments of the present invention relate to a method and apparatus for predicting a user's motion in a virtual environment.
A motion tracking device is used as a tool for detecting a user's motion in a virtual environment and for interacting with a user and a virtual environment.
FIG. 1 is an exemplary diagram illustrating a virtual environment system for interaction between a user and a virtual environment.
In the system of FIG. 1, a motion tracking device detects the user's motion and mediates the interaction between the user and the virtual environment.
The motion tracking device tracks the user's motion based on information received from multiple sensors. It uses a motion model to improve the accuracy of motion tracking and to provide the information required by the virtual environment.
In most cases, the user's recent motion by itself does not provide sufficient information to a motion model that predicts the transition from one motion to the next. For example, it is difficult to predict from previous motions alone that a walking user will suddenly stop or suddenly change the direction of travel.
Embodiments of the present invention provide a method for predicting a possible posture of a user at a next time point in consideration of situation information of a virtual reality.
An apparatus for predicting a user's motion in a virtual environment according to an embodiment of the present invention includes a motion tracking module for estimating a posture of a user at a current point based on at least one sensor data and a pre-learned motion model; And a motion model module for predicting a set of possible postures of the user at the next time point based on the motion model, the estimated posture of the user at the current time point, and the virtual environment context information at the next time point.
In one embodiment, the motion model may be constructed based on the virtual environment context information at the current time point, the posture of the user at the previous time point, and the posture of the user at the current time point. Here, the virtual environment context information at the current time point may include at least one of information about an object existing in the virtual environment at the current time point and information about an event occurring in the virtual environment at the current time point.
In one embodiment, the virtual environment context information at the next time point may include at least one of an object existing in the virtual environment at the next time point and information about an event occurring in the virtual environment at the next time point.
In one embodiment, the information about the object may include at least one of information about the distance between the user and the corresponding object, the type of the corresponding object, and the visibility of the corresponding object based on the user.
In one embodiment, the information about the event may include at least one of a type of the event and information about a direction in which the event occurs based on the user.
In one embodiment, the user motion prediction apparatus may further include a virtual environment control module that controls the virtual environment, generates the virtual environment context information at the next time point based on the virtual environment context information at the current time point and the estimated posture of the user at the current time point, and provides the generated information to the motion model module.
In one embodiment, the user moves on a locomotion interface device, and the user motion prediction apparatus may further include a locomotion interface control module that controls the locomotion interface device based on the posture of the user at the current time point and the set of possible postures of the user at the next time point.
In one embodiment, the locomotion interface control module may control the locomotion interface device in further consideration of the speed of the user.
A method for predicting a user's motion in a virtual environment according to an embodiment of the present invention includes estimating a posture of a user at a current point based on at least one sensor data and a pre-learned motion model; And predicting a set of possible postures of the user at the next time point based on the motion model, the posture of the user at the estimated current time point, and the virtual environment situation information at the next time point.
In one embodiment, the method may further comprise constructing the motion model based on virtual environment context information at a current time point, information on a posture of a user at a previous time point, and information on a posture of a user at a current time point.
In one embodiment, the method may further include generating virtual environment situation information of the next time point based on the virtual environment situation information of the current point of time and the posture of the user of the estimated current point of time.
According to embodiments of the present invention, interaction with the locomotion interface device can be stably performed.
According to the embodiments of the present invention, the immersion feeling of the virtual environment can be maximized.
Embodiments of the present invention may be utilized as part of a system for tracking user motion in a virtual reality environment that can interact with a user for purposes such as training and entertainment.
FIG. 1 is an exemplary diagram for explaining a virtual environment system for interaction between a user and a virtual environment;
FIG. 2 is a block diagram for explaining a user motion prediction apparatus according to an embodiment of the present invention;
FIG. 3 is an exemplary diagram for explaining a motion model network according to the related art;
FIGS. 4(a) and 4(b) are diagrams for explaining a motion model network according to an embodiment of the present invention;
FIGS. 5A to 7B are diagrams for explaining a process of predicting a set of possible postures of a user at a next time point in consideration of virtual environment context information according to embodiments of the present invention; and
FIG. 8 is a flowchart illustrating a user motion prediction method according to an embodiment of the present invention.
In the following description of the embodiments of the present invention, a detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present invention rather unclear.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings.
FIG. 2 is a block diagram illustrating a user motion prediction apparatus according to an embodiment of the present invention.
The user motion prediction apparatus receives at least one sensor data obtained by observing the user, for example depth images from one or more sensors.
The motion tracking module estimates the posture of the user at the current time point based on the at least one sensor data and a pre-learned motion model.
The motion model may be a skeleton model, which has been learned in advance for the user. The estimated posture of the user at the current time point can be expressed by the joint angle of the skeleton model.
The motion model can be generated using various methods. For example, a motion model may be generated by attaching markers to the user's body and tracking the locations of the attached markers, or by using a marker-free technique based on a depth camera that requires no markers.
The posture of the user at the current time point can be estimated using various methods. For example, an initial guess of the user's posture can be made, and a 3D silhouette can be constructed from the assumed posture. The constructed 3D silhouette can then be compared with the observed value (e.g., the 3D point cloud obtained from a depth image). The error (inconsistency) is measured while varying the posture parameters (for example, joint angles), and the posture that minimizes the error can be taken as the estimate of the user's posture at the current time point.
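The estimation loop described above can be sketched as a greedy local search over joint angles. This is an illustrative sketch, not the patent's implementation: the real system compares a rendered 3D silhouette against a depth-image point cloud, whereas here a simple squared-error stand-in (`true_angles`, unknown to the solver) plays the role of the observation mismatch.

```python
import random

def estimate_posture(initial_guess, error_fn, iters=2000, step=0.05, seed=0):
    """Greedy local search over joint angles minimizing the observation error."""
    rng = random.Random(seed)
    best = list(initial_guess)
    best_err = error_fn(best)
    for _ in range(iters):
        # Perturb every joint angle slightly and keep the change
        # only if the mismatch with the observation shrinks.
        cand = [a + rng.uniform(-step, step) for a in best]
        err = error_fn(cand)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

# Stand-in error: squared distance to the (hidden) true joint angles.
true_angles = [0.3, -0.7, 1.2]
error = lambda pose: sum((a - b) ** 2 for a, b in zip(pose, true_angles))

estimated, final_err = estimate_posture([0.0, 0.0, 1.0], error)
```

A better initial guess (e.g., one supplied by the motion model, as the next paragraph notes) starts this search closer to the minimum and so converges faster.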
Here, the closer the initial guess is to the actual posture, the faster the estimation can be performed and the smaller the error of the estimated posture. This implies that the performance of the motion model is important when the motion model is used to produce the initial guess.
The motion model module predicts a set of possible postures of the user at the next time point.
The prediction may be based on at least one of the pre-learned model (i.e., the motion model), the estimated posture of the user at the current time point, other features extracted from the sensor data, and the virtual environment context information.
Other features extracted from the sensor data include, for example, linear velocities and accelerations calculated from joint positions, angular velocities and accelerations calculated from joint angles, symmetry measures calculated over multiple joints, and a volume spanned by the multiple joints.
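The velocity and acceleration features above can be obtained by finite differences over successive frames. A minimal sketch (the patent does not give formulas; the 30 Hz sampling rate is an assumption for illustration):

```python
def finite_difference(samples, dt):
    """First finite difference of a sequence sampled every dt seconds."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

# One joint's x-coordinate over four frames, assumed sampled at 30 Hz.
dt = 1.0 / 30.0
xs = [0.00, 0.01, 0.02, 0.03]      # metres
vel = finite_difference(xs, dt)    # linear velocity per frame pair
acc = finite_difference(vel, dt)   # linear acceleration per velocity pair
```

Applying `finite_difference` to joint angles instead of positions yields the angular velocities and accelerations in the same way.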
The virtual environment context information includes information about objects and events that exist in the virtual environment presented to the user. This will be described later with reference to the related drawings.
In the chapter (D. J. Fleet, "Motion Models for People Tracking," in Visual Analysis of Humans: Looking at People, T. B. Moeslund, A. Hilton, V. Kruger, and L. Sigal, Eds., Springer, 2011), human pose tracking is expressed as a Bayesian filtering problem, as shown in Equation (1).
Here, x_t is the posture at time t, z_t is the observation at time t (for example, a depth image or a point cloud), and z_{1:t-1} is the set of observations from time 1 to time t-1. The relationship between the variables is shown in FIG. 3.
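Equation (1) itself appears to have been an image in the original page. Reconstructed from the cited Fleet chapter, the standard Bayesian filtering recursion with the variables just defined is:

```latex
p(x_t \mid z_{1:t}) \;\propto\; p(z_t \mid x_t) \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid z_{1:t-1})\, \mathrm{d}x_{t-1} \tag{1}
```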
p(x_t | x_{t-1}) is the typical form of a motion model modeled as a first-order Markov process, in which the posture x_t at the current time point t depends only on the posture x_{t-1} at the previous time point t-1.
However, the posture x_{t-1} at the previous time point t-1 alone may be insufficient for estimating the posture at the current time point t.
A motion model with improved performance can be constructed when the context information of the virtual environment is additionally considered, compared with constructing the motion model using only information about the user's motion.
Therefore, in the embodiments of the present invention, a motion model with improved performance is constructed by additionally considering the context information of the virtual environment. The motion model may be constructed by the motion model module.
When the context information of the virtual environment is used, the motion model can be expressed as p(x_t | x_{t-1}, c_t), as shown in Equation (2).
Here, c_t represents the virtual environment context information at time t. Equation (2) shows an example of using the virtual environment context information c under the assumption that the variables at different time points are independent of each other. The relationship between the variables (virtual environment context information c, posture x, and observation z) at successive time points t-1, t, and t+1 is shown in FIG. 4(a).
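Substituting the context-conditioned motion model of Equation (2) into the filtering recursion would give the following form. This is a reconstruction not shown in the original page, written by analogy with Equation (1):

```latex
p(x_t \mid z_{1:t}, c_{1:t}) \;\propto\; p(z_t \mid x_t) \int p(x_t \mid x_{t-1}, c_t)\, p(x_{t-1} \mid z_{1:t-1}, c_{1:t-1})\, \mathrm{d}x_{t-1}
```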
If there is interaction between the user's motion and the virtual environment context information (for example, if the virtual environment scenario changes according to the user's motion), the virtual environment context information c_{t+1} at time t+1 depends on the posture x_t at time t, and the virtual environment context information at successive time points also exhibits dependency. For example, the virtual environment context information c_t at time t depends on the virtual environment context information c_{t-1} at the previous time t-1. The dependency between these variables is shown in FIG. 4(b).
On the other hand, the vector (c t ) representing the virtual environment situation information may include various information about objects and events existing in the virtual environment. The information may be, for example, information on the presence or absence of an object, a distance from the object, the occurrence of a specific event, the type of a specific event, and the location of a specific event.
Table 1 shows an example of data transmitted between the modules of the motion tracking apparatus according to an embodiment of the present invention.

- Posture of the user at the current time point (expressed in a skeleton representation, including joint angles and speeds)
- Virtual environment context information:
  1. Information about obstacles: (1) the distance to the obstacle in the direction in which the user moves; (3) the type of obstacle
  2. Information about the agent: (1) the distance between the user and the virtual agent; (2) the type of agent (friendly, enemy); (3) the visibility of the agent in the user's field of view
- Set of possible postures at the next time point (represented in the skeleton technique), provided to the virtual environment control module and the locomotion interface control module
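The context vector c_t with the Table 1 fields could be encoded, for illustration, as plain data classes. All names here are hypothetical, not from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class ObstacleInfo:
    distance: float   # distance to the obstacle along the user's movement direction
    kind: str         # type of obstacle

@dataclass
class AgentInfo:
    distance: float   # distance between the user and the virtual agent
    kind: str         # "friendly" or "enemy"
    visible: bool     # visibility in the user's field of view

@dataclass
class ContextVector:
    obstacles: list = field(default_factory=list)
    agents: list = field(default_factory=list)

# Example c_t: a wall 2.5 m ahead, an enemy 4 m away behind a closed door.
c_t = ContextVector(
    obstacles=[ObstacleInfo(distance=2.5, kind="wall")],
    agents=[AgentInfo(distance=4.0, kind="enemy", visible=False)],
)
```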
The motion model module predicts the set of possible postures of the user at the next time point in consideration of the virtual environment context information at the next time point.
As an example, the visibility of an object to the user may be used to predict the set of possible postures of the user at the next time point. The presence of an object that is not currently visible in the user's sight but is about to appear can increase the probability that the user will move in a particular direction. For example, as shown in FIG. 5(a), an enemy behind a closed door is not visible to the user at the current time point. As shown in FIG. 5(b), if in the virtual environment context information at the next time point the door is opened, the enemy will become visible to the user. The user is then more likely to move in the direction opposite to the enemy, or to take a motion that avoids the enemy's attack. Accordingly, the motion model module can predict a set of possible postures that reflects these increased probabilities.
As an example, the presence of an obstacle or the distance to an obstacle may be used to predict the set of possible postures of the user at the next time point. For example, as shown in FIG. 6(a), assume that an obstacle lies in the direction in which the user is moving and that the distance between the user and the obstacle is sufficiently large at the current time point. If, in the virtual environment context information at the next time point, the distance between the user and the obstacle becomes very small, as shown in FIG. 6(b), the presence of the obstacle can affect the user's direction of movement. That is, the user has a high probability of changing direction to avoid colliding with the obstacle. Accordingly, the motion model module can predict a set of possible postures that reflects this increased probability.
As an example, the occurrence of a particular event may be used to predict the set of possible postures of the user at the next time point. For example, as shown in FIG. 7(a), assume that no event occurs around the user at the current time point. If, in the virtual environment context information at the next time point, a beep sound is generated from a specific object located in front of the user, as shown in FIG. 7(b), the beep sound may affect the user's direction of movement. That is, the user has a high probability of turning toward the object from which the beep sound is generated. Accordingly, the motion model module can predict a set of possible postures that reflects this increased probability.
The virtual environment control module controls the virtual environment, generates the virtual environment context information at the next time point based on the virtual environment context information at the current time point and the estimated posture of the user at the current time point, and provides the generated information to the motion model module.
The locomotion interface control module controls the locomotion interface device on which the user moves, based on the posture of the user at the current time point and the set of possible postures of the user at the next time point, and may further consider the speed of the user.
8 is a flowchart illustrating a user motion prediction method according to an embodiment of the present invention.
In the first step, the posture of the user at the current time point is estimated based on at least one sensor data and the pre-learned motion model.
In the next step, the virtual environment context information at the next time point is generated based on the virtual environment context information at the current time point and the estimated posture of the user at the current time point.
In the next step, a set of possible postures of the user at the next time point is predicted based on the motion model, the estimated posture of the user at the current time point, and the virtual environment context information at the next time point.
In the next step, the locomotion interface device is controlled based on the posture of the user at the current time point and the set of possible postures of the user at the next time point.
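The flow of FIG. 8 can be sketched end to end as one loop iteration. Every function below is a stub with hypothetical names and trivial logic, standing in for the modules described above, purely to show how the outputs of each step feed the next:

```python
def estimate_posture(sensor_data, motion_model):
    # Stub for the motion tracking module: trust the sensor-derived pose.
    return sensor_data["pose"]

def next_context(context, posture):
    # Stub for the virtual environment control module's scenario rule:
    # an opened door makes the enemy visible at the next time point.
    return {"enemy_visible": context["door_open"]}

def predict_posture_set(motion_model, posture, context):
    # Stub for the motion model module: widen the set toward avoidance
    # motions when the next-step context shows a visible enemy.
    if context["enemy_visible"]:
        return [posture, "move_away", "dodge"]
    return [posture, "keep_walking"]

def control_locomotion(current, predicted_set):
    # Stub for the locomotion interface control module.
    return {"current": current, "candidates": predicted_set}

motion_model = {}                       # pre-learned model (stub)
context = {"door_open": True}           # current virtual environment context
sensor_data = {"pose": "standing"}      # current sensor observation

x_t = estimate_posture(sensor_data, motion_model)
c_next = next_context(context, x_t)
candidates = predict_posture_set(motion_model, x_t, c_next)
command = control_locomotion(x_t, candidates)
```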
The embodiments of the invention described above may be implemented in any of a variety of ways. For example, embodiments of the present invention may be implemented using hardware, software, or a combination thereof. When implemented in software, it may be implemented as software running on one or more processors using various operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages, and may also be compiled into machine code or intermediate code executable in a framework or virtual machine.
Also, when embodiments of the present invention are implemented on one or more processors, one or more programs for carrying out the methods of the various embodiments of the invention discussed above may be stored on a processor-readable medium (e.g., a memory, a floppy disk, a hard disk, a compact disk, an optical disk, a magnetic tape, or the like).
Claims (18)
A motion tracking module for estimating a posture of a user at a current time based on at least one sensor data and a pre-learned motion model; And
A motion model module for predicting a set of possible postures of the user at the next time point based on the motion model, the estimated posture of the user at the current time point, and the virtual environment context information at the next time point,
Wherein the user motion prediction apparatus comprises:
The virtual environment status information of the current point of time, the posture of the user at the previous point of time, and the posture of the user at the current point of time
User motion prediction device in virtual environment.
And information on an event occurring in the virtual environment of the current time and an object existing in the virtual environment of the current time
User motion prediction device in virtual environment.
And information about an event occurring in the virtual environment of the next time and the object existing in the virtual environment of the next time
User motion prediction device in virtual environment.
The information about the distance between the user and the corresponding object, the type of the corresponding object, and the visibility of the corresponding object based on the user
User motion prediction device in virtual environment.
A type of the event, and information on a direction in which the event occurs based on the user
User motion prediction device in virtual environment.
The virtual environment control module controlling the virtual environment and generating virtual environment context information of the next time point based on the virtual environment context information of the current time and the user's attitude of the estimated current time point and providing the virtual environment context information to the motion model module
And a motion estimation unit for estimating motion of the user.
The user moves on the locomotion interface device,
The user motion prediction apparatus further includes a locomotion interface control module for controlling the locomotion interface device based on the posture of the user at the current time point and the posture of the user at the next time point
User motion prediction device in virtual environment.
Wherein the locomotion interface control module controls the locomotion interface device to further control the speed of the user
User motion prediction device in virtual environment.
Estimating a posture of a user at a current time based on at least one sensor data and a pre-learned motion model; And
Predicting a set of possible postures of the user at the next time point based on the motion model, the posture of the user at the estimated current time point, and the virtual environment context information at the next time point
And estimating the user motion in the virtual environment.
Constructing the motion model based on virtual environment situation information of the current point of time, information of the posture of the user at the previous point of time, and information of the attitude of the user at the current point of time
Further comprising the steps of:
And information on an event occurring in the virtual environment of the current time and an object existing in the virtual environment of the current time
A user motion prediction method in a virtual environment.
And information about an event occurring in the virtual environment of the next time and the object existing in the virtual environment of the next time
A user motion prediction method in a virtual environment.
The information about the distance between the user and the corresponding object, the type of the corresponding object, and the visibility of the corresponding object based on the user
A user motion prediction method in a virtual environment.
A type of the event, and information on a direction in which the event occurs based on the user
A user motion prediction method in a virtual environment.
Generating virtual environment situation information at the next time point based on the virtual environment situation information of the current point of time and the attitude of the user at the estimated current point of time
Further comprising the steps of:
The user moves on the locomotion interface device,
Controlling the locomotion interface device based on the user's posture of the current time point and the set of possible postures of the user at the next time point
Further comprising the steps of:
Wherein the step of controlling the locomotion interface device includes the step of controlling the locomotion interface device in consideration of the speed of the user
A user motion prediction method in a virtual environment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/543,506 US20150139505A1 (en) | 2013-11-18 | 2014-11-17 | Method and apparatus for predicting human motion in virtual environment |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20130140201 | 2013-11-18 | ||
KR1020130140201 | 2013-11-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20150059099A true KR20150059099A (en) | 2015-05-29 |
Family
ID=53393160
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020140152182A KR20150059099A (en) | 2013-11-18 | 2014-11-04 | Method for predicting human motion in virtual environment and apparatus thereof |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20150059099A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102474117B1 (en) * | 2021-11-12 | 2022-12-05 | 재단법인대구경북과학기술원 | System for location tracking in sensor fusion-assisted virtual reality micro-manipulation environments |
- 2014-11-04: KR application KR1020140152182A filed (published as KR20150059099A); status: not active, application discontinued
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E601 | Decision to refuse application |