KR20150059099A - Method for predicting human motion in virtual environment and apparatus thereof - Google Patents

Method for predicting human motion in virtual environment and apparatus thereof

Info

Publication number
KR20150059099A
KR20150059099A
Authority
KR
South Korea
Prior art keywords
user
virtual environment
motion
information
posture
Prior art date
Application number
KR1020140152182A
Other languages
Korean (ko)
Inventor
블라고
이소연
박상준
박종현
정교일
Original Assignee
한국전자통신연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국전자통신연구원 filed Critical 한국전자통신연구원
Priority to US14/543,506 priority Critical patent/US20150139505A1/en
Publication of KR20150059099A publication Critical patent/KR20150059099A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present invention relate to a method and apparatus for predicting a user's motion in a virtual environment. An apparatus for predicting a user's motion in a virtual environment according to an embodiment of the present invention includes a motion tracking module for estimating a posture of the user at the current time point based on at least one item of sensor data and a pre-learned motion model, and a motion model module for predicting a set of possible postures of the user at the next time point based on the motion model, the estimated posture of the user at the current time point, and virtual environment context information at the next time point. According to the embodiments of the present invention, the user's sense of immersion in the virtual environment can be maximized.

Description

TECHNICAL FIELD: The present invention relates to a method and apparatus for predicting a user's motion in a virtual environment.

Embodiments of the present invention relate to a method and apparatus for predicting a user's motion in a virtual environment.

A motion tracking device is used as a tool for detecting a user's motion in a virtual environment and for enabling interaction between the user and the virtual environment.

FIG. 1 is an exemplary diagram illustrating a virtual environment system for interaction between a user and a virtual environment.

The user 100 moves on the locomotion interface device 25, moving in a specific direction or taking a specific action according to the virtual reality scene projected on the screen 24. The locomotion interface device 25 is driven so that the user stays within a limited space of the real world. For example, the locomotion interface device 25 is driven in the direction opposite to the direction in which the user moves, based on the direction information of the user received from a motion tracking device (not shown), so that the user stays in position.

The motion tracking device tracks the user's motion based on information received from multiple sensors. The motion tracking device uses a motion model to improve the accuracy of motion tracking and to provide the information required by the locomotion interface device 25.

In most cases, the user's recent motion alone does not provide sufficient information to a motion model that predicts the transition from one motion to the next. For example, it is difficult to predict from previous motions that a user who is walking will suddenly stop or suddenly change direction of travel.

Embodiments of the present invention provide a method for predicting the possible postures of a user at the next time point by taking into account context information of the virtual reality.

An apparatus for predicting a user's motion in a virtual environment according to an embodiment of the present invention includes a motion tracking module for estimating a posture of a user at a current point based on at least one sensor data and a pre-learned motion model; And a motion model module for predicting a set of possible postures of the user at the next time point based on the motion model, the estimated posture of the user at the current time point, and the virtual environment context information at the next time point.

In one embodiment, the motion model may be constructed based on the virtual environment context information at the current time point, the posture of the user at the previous time point, and the posture of the user at the current time point. Here, the virtual environment context information at the current time point may include at least one of information about an object existing in the virtual environment at the current time point and information about an event occurring in the virtual environment at the current time point.

In one embodiment, the virtual environment context information at the next time point may include at least one of an object existing in the virtual environment at the next time point and information about an event occurring in the virtual environment at the next time point.

In one embodiment, the information about the object may include at least one of the distance between the user and the object, the type of the object, and the visibility of the object with respect to the user.

In one embodiment, the information about the event may include at least one of the type of the event and the direction in which the event occurs with respect to the user.

In one embodiment, the user motion prediction apparatus may further include a virtual environment control module that controls the virtual environment, generates virtual environment context information at the next time point based on the virtual environment context information at the current time point and the estimated posture of the user at the current time point, and provides the generated information to the motion model module.

In one embodiment, the user moves on a locomotion interface device, and the user motion prediction apparatus may further include a locomotion interface control module that controls the locomotion interface device based on the posture of the user at the current time point and the set of possible postures of the user at the next time point.

In one embodiment, the locomotion interface control module may control the locomotion interface device further in consideration of the speed of the user.

A method for predicting a user's motion in a virtual environment according to an embodiment of the present invention includes estimating a posture of a user at a current point based on at least one sensor data and a pre-learned motion model; And predicting a set of possible postures of the user at the next time point based on the motion model, the posture of the user at the estimated current time point, and the virtual environment situation information at the next time point.

In one embodiment, the method may further comprise constructing the motion model based on virtual environment context information at a current time point, information on a posture of a user at a previous time point, and information on a posture of a user at a current time point.

In one embodiment, the method may further include generating virtual environment context information at the next time point based on the virtual environment context information at the current time point and the estimated posture of the user at the current time point.

According to embodiments of the present invention, interaction with the locomotion interface device can be stably performed.

According to the embodiments of the present invention, the user's sense of immersion in the virtual environment can be maximized.

Embodiments of the present invention may be utilized as part of a system for tracking user motion in a virtual reality environment that can interact with a user for purposes such as training and entertainment.

FIG. 1 is an exemplary view for explaining a virtual environment system for interaction between a user and a virtual environment;
FIG. 2 is a block diagram for explaining a user motion prediction apparatus according to an embodiment of the present invention;
FIG. 3 is an exemplary diagram for explaining a motion model network according to the related art;
FIGS. 4(a) and 4(b) are views for explaining a motion model network according to an embodiment of the present invention;
FIGS. 5 to 7 are diagrams for explaining a process of predicting a set of possible postures of a user at the next time point in consideration of virtual environment context information according to embodiments of the present invention; and
FIG. 8 is a flowchart illustrating a user motion prediction method according to an embodiment of the present invention.

In the following description of the embodiments of the present invention, a detailed description of known functions and configurations incorporated herein will be omitted when it may unnecessarily obscure the subject matter of the present invention.

Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings.

FIG. 2 is a block diagram illustrating a user motion prediction apparatus according to an embodiment of the present invention.

The sensor data collection module 210 collects sensor data necessary for motion tracking. Embodiments of the present invention can be applied to the virtual environment system shown in FIG. 1. Accordingly, the sensor data collection module 210 collects sensor data necessary for motion tracking from at least one depth camera 21b, 21c, 21d and at least one motion sensor 21a attached to the body of the user 100.

The sensor data collection module 210 may perform time synchronization and preprocessing on the collected sensor data, and may transmit the result to the motion tracking module 220.

The motion tracking module 220 estimates the posture of the user at the current time point based on the sensor data received from the sensor data collection module 210 and the set of possible postures obtained from the motion model.

The motion model may be based on a skeleton model that has been learned in advance for the user. The estimated posture of the user at the current time point can be expressed by the joint angles of the skeleton model.

The motion model can be generated using various methods. For example, a motion model may be generated by attaching markers to the user's body and tracking the locations of the attached markers, or by using a marker-free technique based on a depth camera.

The posture of the user at the current time point can be estimated using various methods. For example, an initial guess of the user's posture can be made, and a 3D silhouette can be constructed from the assumed posture. The constructed 3D silhouette can then be compared with the observed values (e.g., the 3D point cloud obtained from the depth image). The error (inconsistency) is measured while the posture parameters (for example, joint angles) are varied, and the posture that minimizes the error can be taken as the estimated posture of the user at the current time point.
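
As a rough illustration of this estimate-and-refine procedure (not part of the patent text), the following Python sketch uses a toy planar kinematic chain as a stand-in for the skeleton model and a simple joint-position error as a stand-in for the 3D-silhouette-versus-point-cloud comparison; all function names are invented for illustration. It varies the posture parameters until the error against the observation is minimized.

```python
import numpy as np
from scipy.optimize import minimize

def forward_kinematics(joint_angles, bone_lengths):
    """Toy planar kinematic chain standing in for the skeleton model.

    Returns joint positions, used here as a stand-in for the 3D silhouette."""
    positions = [np.zeros(2)]
    angle = 0.0
    for theta, length in zip(joint_angles, bone_lengths):
        angle += theta
        positions.append(positions[-1] + length * np.array([np.cos(angle), np.sin(angle)]))
    return np.array(positions)

def silhouette_error(predicted_points, observed_points):
    """Inconsistency between the constructed silhouette and the observed points."""
    return float(np.sum((predicted_points - observed_points) ** 2))

def estimate_current_posture(initial_guess, bone_lengths, observed_points):
    """Vary the posture parameters (joint angles) until the error is minimized."""
    cost = lambda q: silhouette_error(forward_kinematics(q, bone_lengths), observed_points)
    result = minimize(cost, x0=np.asarray(initial_guess), method="Powell")
    return result.x

# The initial guess would come from the motion model's predicted posture set;
# here the observation is synthesized from a known posture for demonstration.
bone_lengths = [0.4, 0.4, 0.3]
observed = forward_kinematics(np.array([0.3, -0.5, 0.2]), bone_lengths)
estimated = estimate_current_posture(np.zeros(3), bone_lengths, observed)
```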

Here, the closer the initial guess is to the actual posture, the faster the estimation can be performed and the smaller the error of the estimated posture. This implies that the performance of the motion model is important when the motion model is used for the initial guess.

The motion model module 230 stores the motion model and predicts the set of postures that the user may take next, following the posture of the user at the current time point estimated by the motion tracking module 220.

The prediction may be based on at least one of a pre-learned model (i.e., a motion model), an estimated posture of the user at the current time point, other features extracted from the sensor data, and virtual environment context information.

Other features extracted from the sensor data include, for example, linear velocities and accelerations calculated from joint positions, angular velocities and accelerations calculated from joint angles, symmetry measures calculated over multiple joints, and a volume spanned by the multiple joints.
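
For concreteness, a minimal Python sketch of computing such features from two consecutive frames of skeleton data is given below; the frame layout, the left/right joint convention, and the bounding-box approximation of the spanned volume are illustrative assumptions rather than details taken from this description.

```python
import numpy as np

def motion_features(joint_positions, joint_angles, dt):
    """Features from two consecutive frames of skeleton data.

    joint_positions: array of shape (2, n_joints, 3) -- previous and current frame
    joint_angles:    array of shape (2, n_joints)    -- previous and current frame
    (Accelerations would additionally require a third frame.)
    """
    linear_velocity = (joint_positions[1] - joint_positions[0]) / dt
    angular_velocity = (joint_angles[1] - joint_angles[0]) / dt

    # Left/right symmetry measure: mismatch between the left-side joints and the
    # mirrored right-side joints (assumes the first half of the joints is the
    # left side and the second half the right side).
    n = joint_positions.shape[1] // 2
    left = joint_positions[1, :n]
    right_mirrored = joint_positions[1, n:2 * n] * np.array([-1.0, 1.0, 1.0])
    symmetry = float(np.mean(np.linalg.norm(left - right_mirrored, axis=1)))

    # Volume spanned by the joints, approximated by their axis-aligned bounding box.
    extent = joint_positions[1].max(axis=0) - joint_positions[1].min(axis=0)
    volume = float(np.prod(extent))

    return np.concatenate([linear_velocity.ravel(), angular_velocity.ravel(),
                           [symmetry, volume]])

feats = motion_features(np.random.rand(2, 6, 3), np.random.rand(2, 6), dt=1 / 30)
```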

The virtual environment context information includes information about objects and events existing in the virtual environment presented to the user. This will be described later with reference to the related drawings.

In the paper by D. J. Fleet ("Motion Models for People Tracking," in Visual Analysis of Humans: Looking at People, T. B. Moeslund, A. Hilton, V. Krüger, and L. Sigal, Eds., Springer, 2011), human pose tracking is expressed as a Bayesian filtering problem, as shown in Equation (1).

p(x_t | z_{1:t}) ∝ p(z_t | x_t) ∫ p(x_t | x_{t-1}) p(x_{t-1} | z_{1:t-1}) dx_{t-1}        (1)

Here, x_t is the posture at time t, z_t is the observation at time t (for example, a depth image or a point cloud), and z_{1:t-1} denotes the observations from time 1 to time t-1. The relationship between these variables is shown in FIG. 3.

p(x_t | x_{t-1}) is the usual expression for a motion model modeled as a first-order Markov process: the posture x_t at the current time t depends only on the posture x_{t-1} at the previous time t-1.

However, the posture x_{t-1} at the previous time t-1 alone may be insufficient to estimate the posture at the current time t.

A motion model with improved performance can be constructed when context information of the virtual environment is additionally considered, compared with constructing the motion model using only information on the user's motion.

Therefore, in the embodiments of the present invention, a motion model with improved performance is constructed by additionally considering the context information of the virtual environment. The motion model may be constructed by the motion model module 230 through training. The motion model module 230 can construct the motion model based on the posture of the user at the previous time point, the posture of the user at the current time point, and the virtual environment context information at the current time point. For example, the motion model module 230 may generate the motion model from the virtual environment context information at the current time point (which can be expressed as a vector), the posture of the user at the previous time point, and the posture of the user at the current time point.

When the context information of the virtual environment is used, the motion model can be expressed as p(x_t | x_{t-1}, c_t), as shown in Equation (2).

p(x_t | z_{1:t}, c_{1:t}) ∝ p(z_t | x_t) ∫ p(x_t | x_{t-1}, c_t) p(x_{t-1} | z_{1:t-1}, c_{1:t-1}) dx_{t-1}        (2)

Here, c_t represents the virtual environment context information at time t. Equation (2) shows an example of using the virtual environment context information c under the assumption that the context variables at different time points are independent of one another. The relationship between the variables (virtual environment context information c, posture x, and observation z) at successive times t-1, t, and t+1 is shown in FIG. 4(a).

If there is interaction between the user's actions and the virtual environment context information (for example, if the virtual environment scenario changes according to the user's actions, the virtual environment context information c_{t+1} at time t+1 depends on the user's posture x_t at time t), the virtual environment context information at successive time points exhibits dependencies. For example, the virtual environment context information c_t at time t depends on the virtual environment context information c_{t-1} at the previous time t-1. The dependency between these variables is shown in FIG. 4(b).
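
The filtering formulations above do not prescribe a particular approximation; one common choice is a particle filter. The following sketch (an illustrative approximation, not asserted to be the implementation of this application; the random-walk motion model and the Gaussian likelihood are toy stand-ins) shows how a context-conditioned motion model p(x_t | x_{t-1}, c_t) plugs into a single filtering step.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, observation, context,
                         motion_model, likelihood):
    """One Bayesian filtering step with a context-conditioned motion model.

    particles:   (N, d) posture hypotheses x_{t-1}
    weights:     (N,) normalized weights from the previous step
    context:     context vector c_t of the virtual environment at time t
    motion_model(particles, context) -> samples of p(x_t | x_{t-1}, c_t)
    likelihood(particles, observation) -> p(z_t | x_t) for each particle
    """
    # Resample according to the previous weights, then propagate through the
    # motion model conditioned on the current virtual-environment context.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    propagated = motion_model(particles[idx], context)
    # Reweight by the observation likelihood and normalize.
    w = likelihood(propagated, observation)
    w /= np.sum(w)
    return propagated, w

# Toy stand-ins: a random walk biased by the context, and a Gaussian likelihood.
motion = lambda x, c: x + 0.1 * c + 0.05 * rng.standard_normal(x.shape)
like = lambda x, z: np.exp(-0.5 * np.sum((x - z) ** 2, axis=1))

particles = rng.standard_normal((100, 3))
weights = np.full(100, 1.0 / 100)
particles, weights = particle_filter_step(particles, weights,
                                          observation=np.zeros(3),
                                          context=np.array([0.2, 0.0, -0.1]),
                                          motion_model=motion, likelihood=like)
```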

On the other hand, the vector c_t representing the virtual environment context information may include various information about objects and events existing in the virtual environment. The information may include, for example, the presence or absence of an object, the distance from the object, the occurrence of a specific event, the type of a specific event, and the location of a specific event.
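
One possible encoding of such a context vector c_t is sketched below; the field names and the flattening scheme are chosen purely for illustration and are not prescribed by this description.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ObjectContext:
    distance_to_user: float     # distance between the user and the object
    object_type: str            # e.g. "obstacle", "friendly_agent", "enemy_agent"
    visible_to_user: bool       # visibility of the object in the user's field of view

@dataclass
class EventContext:
    event_type: str             # e.g. "door_opened", "beep"
    direction_from_user: float  # direction (radians) of the event relative to the user

def context_vector(objects: List[ObjectContext], events: List[EventContext]) -> list:
    """Flatten the context information into a numeric vector for the motion model."""
    vec = []
    for obj in objects:
        vec += [obj.distance_to_user,
                1.0 if obj.object_type == "enemy_agent" else 0.0,
                1.0 if obj.visible_to_user else 0.0]
    for ev in events:
        vec += [1.0 if ev.event_type == "beep" else 0.0, ev.direction_from_user]
    return vec

c_t = context_vector([ObjectContext(2.5, "obstacle", True)],
                     [EventContext("beep", 0.0)])
```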

Table 1 shows an example of data transmitted between each module of the motion tracking apparatus according to an embodiment of the present invention.

Table 1

Source module: Motion tracking module
Destination module: Motion model module
Data: Estimated posture of the user at the current time point (expressed as a skeleton representation, including joint angles and speeds)

Source module: Virtual environment control module
Destination module: Motion model module
Data:
1. Information about obstacles
(1) Distance to the obstacle in the direction in which the user moves
(2) …
(3) Type of the obstacle
2. Information about agents
(1) Distance between the user and the virtual agent
(2) Type of the agent (friendly, enemy)
(3) Visibility of the agent in the user's field of view

Source module: Motion model module
Destination modules: Motion tracking module, virtual environment control module, locomotion interface control module
Data: Posture of the user predicted for the next time point (expressed as a skeleton representation)

The motion model module 230 can predict a set of possible postures of the user at the next time point, considering the virtual environment situation information received from the virtual environment control module 240, as shown in Table 1. In other words, the motion model module 230 can estimate the set of possible postures of the user at the next time point by applying the virtual environment situation information of the current point of time to the parameters of the motion model.
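
A minimal sketch of this prediction step follows. The probabilistic form of the motion model (a mean posture and covariance conditioned on the current posture and the context vector) and all function names are assumptions made for illustration, not the claimed implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_posture_set(current_posture, context_vec, motion_model, n_samples=50):
    """Sample a set of possible postures of the user at the next time point.

    motion_model(posture, context) is assumed to return the mean and covariance
    of the next posture given the current posture and the context vector."""
    mean, cov = motion_model(np.asarray(current_posture), np.asarray(context_vec))
    return rng.multivariate_normal(mean, cov, size=n_samples)

# Toy stand-in for the learned motion model: when an obstacle is close (first
# context entry), bias the predicted postures away from its side (second entry).
def toy_motion_model(posture, context):
    obstacle_distance, obstacle_side = context[0], context[1]
    drift = np.zeros_like(posture)
    if obstacle_distance < 1.0:
        drift[0] = -0.2 * obstacle_side
    return posture + drift, 0.01 * np.eye(len(posture))

posture_set = predict_posture_set([0.1, 0.0, 0.3], [0.5, 1.0], toy_motion_model)
```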

As an example, the visibility of an object to the user may be used to predict the set of possible postures of the user at the next time point. The presence of an object that is not currently visible to the user but is about to become visible can increase the probability that the user will move in a particular direction. For example, as shown in FIG. 5(a), an enemy behind a closed door is not visible to the user at the current time. If, as shown in FIG. 5(b), the virtual environment context information at the next time point indicates that the door will be opened, the enemy will become visible to the user at the next time point. The user is then more likely to move in the direction opposite to the enemy or to take a motion to avoid the enemy's attack. Accordingly, the motion model module 230 can apply the virtual environment context information as a parameter of the motion model to predict the set of possible postures of the user at the next time point.

As an example, the presence of an obstacle or the distance to the obstacle may be used to predict the set of possible postures of the user at the next time point. For example, as shown in FIG. 6(a), assume that an obstacle lies in the direction in which the user is moving and that the distance between the user and the obstacle is sufficiently large at the current time. If, according to the virtual environment context information at the next time point, the distance between the user and the obstacle becomes very short, as shown in FIG. 6(b), the presence of the obstacle can affect the user's direction of movement. That is, the user has a high probability of changing direction to avoid colliding with the obstacle. Accordingly, the motion model module 230 can apply the virtual environment context information as a parameter of the motion model to predict the set of possible postures of the user at the next time point.

As an example, the occurrence of a specific event may be used to predict the set of possible postures of the user at the next time point. For example, as shown in FIG. 7(a), assume that no event occurs around the user at the current time. If, according to the virtual environment context information at the next time point, a beep sound is generated from a specific object located in front of the user, as shown in FIG. 7(b), the beep sound may affect the user's direction of movement. That is, the user has a high probability of changing the direction of movement toward the object from which the beep sound is generated. Accordingly, the motion model module 230 can apply the virtual environment context information as a parameter of the motion model to predict the set of possible postures of the user at the next time point.

The virtual environment control module 240 controls the virtual environment projected on the screen 24. For example, the virtual environment control module 240 controls events such as the appearance, disappearance, and movement of objects such as items and persons, and the states of those objects (for example, whether a door is open or closed).

The locomotion interface control module 250 controls the driving of the locomotion interface device 25. The locomotion interface control module 250 can control the locomotion interface device based on the estimated posture of the user at the current time point, the direction of movement, the speed, and the set of possible postures at the next time point. The user's moving direction and speed information can be received from a separate measuring device.
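
The sketch below shows one way such a control rule could look for a treadmill-style device: drive the platform opposite to the user's expected movement so that the user stays in place. The extraction of root positions from the predicted posture set and the proportional gain are illustrative assumptions, not details taken from this description.

```python
import numpy as np

def locomotion_command(current_root_pos, predicted_root_positions, user_speed, gain=1.0):
    """Belt/platform velocity opposite to the user's expected movement direction.

    predicted_root_positions: root (e.g. pelvis) positions taken from the set of
    possible postures predicted for the next time point."""
    expected = np.mean(np.asarray(predicted_root_positions, dtype=float), axis=0)
    movement = expected - np.asarray(current_root_pos, dtype=float)
    norm = np.linalg.norm(movement)
    if norm < 1e-6:
        return np.zeros_like(movement)        # user expected to stay in place
    direction = movement / norm
    return -gain * user_speed * direction     # drive opposite to the user's motion

# Predicted forward movement -> device driven backward at the user's speed.
command = locomotion_command([0.0, 0.0], [[0.10, 0.00], [0.12, 0.01]], user_speed=1.2)
```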

FIG. 8 is a flowchart illustrating a user motion prediction method according to an embodiment of the present invention.

In step 801, the user motion prediction apparatus acquires the sensor data. The sensor data necessary for motion tracking may be received, for example, from at least one depth camera capturing the user and at least one motion sensor attached to the user's body.

In step 803, the user's motion prediction device estimates the posture of the user at the current point of time. The posture of the user at the current time point can be estimated based on the previously learned motion model and the collected sensor data.

In step 805, the user motion prediction apparatus predicts the posture of the user at the next time point. The user's motion prediction apparatus may use at least one of a motion model, a posture of the user at the current time point, features extracted from the sensor data, and virtual environment situation information for predicting the posture of the user at the next time point.

In step 807, the user's motion prediction device controls the locomotion interface device based on the predicted set of postures at the next time. For example, if the predicted set of postures at the next time point indicates forward movement, the user motion prediction device drives the locomotion interface device backward.

The embodiments of the invention described above may be implemented in any of a variety of ways. For example, embodiments of the present invention may be implemented using hardware, software, or a combination thereof. When implemented in software, it may be implemented as software running on one or more processors using various operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages, and may also be compiled into machine code or intermediate code executable in a framework or virtual machine.

Also, when embodiments of the present invention are implemented on one or more processors, one or more programs for carrying out the methods of the various embodiments of the invention discussed above may be stored on a processor-readable medium (e.g., a memory, a floppy disk, a hard disk, a compact disc, an optical disk, a magnetic tape, or the like).

Claims (18)

1. An apparatus for predicting a user's motion in a virtual environment, the apparatus comprising:
a motion tracking module for estimating a posture of a user at a current time point based on at least one item of sensor data and a pre-learned motion model; and
a motion model module for predicting a set of possible postures of the user at a next time point based on the motion model, the estimated posture of the user at the current time point, and virtual environment context information at the next time point.
2. The apparatus of claim 1, wherein the motion model is constructed based on virtual environment context information at the current time point, a posture of the user at a previous time point, and the posture of the user at the current time point.
3. The apparatus of claim 2, wherein the virtual environment context information at the current time point includes at least one of information about an object existing in the virtual environment at the current time point and information about an event occurring in the virtual environment at the current time point.
4. The apparatus of claim 1, wherein the virtual environment context information at the next time point includes at least one of information about an object existing in the virtual environment at the next time point and information about an event occurring in the virtual environment at the next time point.
5. The apparatus of claim 4, wherein the information about the object includes at least one of a distance between the user and the object, a type of the object, and a visibility of the object with respect to the user.
6. The apparatus of claim 4, wherein the information about the event includes at least one of a type of the event and a direction in which the event occurs with respect to the user.
7. The apparatus of claim 1, further comprising a virtual environment control module for controlling the virtual environment, generating virtual environment context information at the next time point based on the virtual environment context information at the current time point and the estimated posture of the user at the current time point, and providing the generated virtual environment context information to the motion model module.
8. The apparatus of claim 1, wherein the user moves on a locomotion interface device, and the apparatus further comprises a locomotion interface control module for controlling the locomotion interface device based on the posture of the user at the current time point and the set of possible postures of the user at the next time point.
9. The apparatus of claim 8, wherein the locomotion interface control module controls the locomotion interface device further in consideration of a speed of the user.
10. A method for predicting a user's motion in a virtual environment, the method comprising:
estimating a posture of a user at a current time point based on at least one item of sensor data and a pre-learned motion model; and
predicting a set of possible postures of the user at a next time point based on the motion model, the estimated posture of the user at the current time point, and virtual environment context information at the next time point.
11. The method of claim 10, further comprising constructing the motion model based on virtual environment context information at the current time point, information on a posture of the user at a previous time point, and information on the posture of the user at the current time point.
12. The method of claim 11, wherein the virtual environment context information at the current time point includes at least one of information about an object existing in the virtual environment at the current time point and information about an event occurring in the virtual environment at the current time point.
13. The method of claim 10, wherein the virtual environment context information at the next time point includes at least one of information about an object existing in the virtual environment at the next time point and information about an event occurring in the virtual environment at the next time point.
14. The method of claim 13, wherein the information about the object includes at least one of a distance between the user and the object, a type of the object, and a visibility of the object with respect to the user.
15. The method of claim 13, wherein the information about the event includes at least one of a type of the event and a direction in which the event occurs with respect to the user.
16. The method of claim 10, further comprising generating virtual environment context information at the next time point based on the virtual environment context information at the current time point and the estimated posture of the user at the current time point.
17. The method of claim 10, wherein the user moves on a locomotion interface device, the method further comprising controlling the locomotion interface device based on the posture of the user at the current time point and the set of possible postures of the user at the next time point.
18. The method of claim 17, wherein the controlling of the locomotion interface device includes controlling the locomotion interface device in consideration of a speed of the user.
KR1020140152182A 2013-11-18 2014-11-04 Method for predicting human motion in virtual environment and apparatus thereof KR20150059099A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/543,506 US20150139505A1 (en) 2013-11-18 2014-11-17 Method and apparatus for predicting human motion in virtual environment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR20130140201 2013-11-18
KR1020130140201 2013-11-18

Publications (1)

Publication Number Publication Date
KR20150059099A true KR20150059099A (en) 2015-05-29

Family

ID=53393160

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020140152182A KR20150059099A (en) 2013-11-18 2014-11-04 Method for predicting human motion in virtual environment and apparatus thereof

Country Status (1)

Country Link
KR (1) KR20150059099A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102474117B1 (en) * 2021-11-12 2022-12-05 재단법인대구경북과학기술원 System for location tracking in sensor fusion-assisted virtual reality micro-manipulation environments


Similar Documents

Publication Publication Date Title
US20150139505A1 (en) Method and apparatus for predicting human motion in virtual environment
US20240202938A1 (en) Fault-tolerance to provide robust tracking for autonomous and non-autonomous positional awareness
US10832056B1 (en) Visual-inertial positional awareness for autonomous and non-autonomous tracking
US10354396B1 (en) Visual-inertial positional awareness for autonomous and non-autonomous device
CN103925920B (en) A kind of MAV indoor based on perspective image autonomous navigation method
EP3447448B1 (en) Fault-tolerance to provide robust tracking for autonomous and non-autonomous positional awareness
CN107655473B (en) Relative autonomous navigation system of spacecraft based on S L AM technology
Brubaker et al. Physics-based person tracking using the anthropomorphic walker
Huijun et al. Virtual-environment modeling and correction for force-reflecting teleoperation with time delay
CN111263921B (en) Collision detection system and computer-implemented method
US10402663B1 (en) Visual-inertial positional awareness for autonomous and non-autonomous mapping
Bascetta et al. Towards safe human-robot interaction in robotic cells: an approach based on visual tracking and intention estimation
Motai et al. Human tracking from a mobile agent: optical flow and Kalman filter arbitration
EP2590042A1 (en) Mobile apparatus performing position recognition using several local filters and a fusion filter
KR101347840B1 (en) Body gesture recognition method and apparatus
JP2009146406A (en) Visually tracking an object in real world using 2d appearance and multicue depth estimations
CN106105184A (en) Time delay in camera optical projection system reduces
KR101423139B1 (en) Method for localization and mapping using 3D line, and mobile body thereof
CN106708037A (en) Autonomous mobile equipment positioning method and device, and autonomous mobile equipment
JP6598191B2 (en) Image display system and image display method
US20210042526A1 (en) Information processing apparatus, information processing method, and recording medium
Rungsarityotin et al. Finding location using omnidirectional video on a wearable computing platform
CN105447886A (en) Dynamic cinema playback control method
KR20150059099A (en) Method for predicting human motion in virtual environment and apparatus thereof
Ikoma Hands and arms motion estimation of a car driver with depth image sensor by using particle filter

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application