WO2013041152A1 - Methods to command a haptic renderer from real motion data
- Publication number: WO2013041152A1
- Application number: PCT/EP2011/073704
- Authority
- WO
- WIPO (PCT)
Classifications
- G06F3/016 — Input arrangements with force or tactile feedback as computer generated output to the user (G06F—Electric digital data processing; G06F3/01—Input arrangements for interaction between user and computer)
- G11B27/19 — Indexing; addressing; timing or synchronising; measuring tape travel by using information detectable on the record carrier (G11B27/10—Indexing; addressing; timing or synchronising)
- G11B27/28 — Indexing; addressing; timing or synchronising by using information signals recorded by the same method as the main recording
- G11B27/34 — Indicating arrangements
- H04N9/8205 — Transformation of the television signal for recording, involving the multiplexing of an additional signal and the colour video signal (H04N9/79—Processing of colour television signals in connection with recording)
- H04N9/8211 — Transformation of the television signal for recording, the additional signal being a sound signal
Definitions
- Fig. 3 shows a data processing unit (1) for processing and transforming the original captured signals representing the motion and manipulating the data such that they can be rendered using a rendering device.
- the processing unit (1) has four inputs for receiving data, the first input (11) for receiving a first input signal (rot) from the gyroscope describing the rotation of the gyroscope around its three axes.
- the second input (12) is for receiving a second input signal (acc) from the accelerometer describing the three-axes acceleration of the accelerometer.
- the third input (13) is for receiving a third input signal (geo) from the electronic compass describing the geographic orientation.
- the fourth input (14) is for receiving at least one of an audio and video input signal (A/V). Further, the processing unit (1) contains a synchronizer (2) for synchronizing the first input signal with the other input signals.
- the synchronizer (2) outputs the synchronized first signal describing the rotation, the synchronized second signal describing the acceleration and the synchronized third signal describing the northward direction; the three signals are input to a generator (3) that generates data describing haptic effects, which are used for rendering a motion effect.
- the synchronized at least one of an audio and video signal is sent to a display device via a second output.
- Fig. 4 schematically discloses further processing steps of the first, second and third input signals (11, 12, 13).
- in step 101, axis permutation is performed on the signals to align the axes of the accelerometer with the axes of the rendering device, which operates as an interface to the user's hand.
- in step 102, data scaling is performed.
- the scaling of the raw data is necessary to adapt the amplitude of the raw signals to the input range of the rendering device.
- the scaling factor is e.g. empirically determined according to experimental feedback.
- in step 103, the gravitational component is removed from the signals. Removing the gravitational component included in the raw acceleration is important with regard to the other external sources of acceleration.
- the orientations of the three sensors are estimated using the information of the three sensors (gyroscope, accelerometer and electronic compass) and the balance filter method (see S. Colton, 'The Balance Filter: A Simple Solution for Integrating Accelerometer and Gyroscope Measurements', 2007).
- in step 104, data filtering is applied for smoothing the raw data.
- in step 105, data thresholding is applied for avoiding unexpected signal peaks due to recording artefacts; manually reshaping the raw data by applying authoring techniques is shown in step 106.
- the last step allows the inclusion of personalized effects.
- the haptic rendering device is the interface between the processing unit and the user, transferring the motion available as a physical data signal into forces the user can register. For example, a force is applied to the user's hand while the user is watching a visual stimulus. The orientation and direction of the force is highly correlated to the motion embedded in the audiovisual content.
- the device shown by way of example in Fig. 5 is a Novint Falcon device. This is a 3-DOF force-feedback device, which is able to apply a force along three axes. It is typically designed to interact with 3D objects within virtual reality applications. Sensations of motion might also be produced with this device. Other rendering devices are, by way of example, moving seats or smartphones.
- Fig. 6 shows an open source haptic application program interface (HAPI - http://www.h3dapi.org/) as an exemplary interface to control the rendering device.
- Fig. 6 also contains an integrated player for displaying the audio/video content on a standard screen while applying the forces to the rendering device in a synchronized way.
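The patent describes this player/renderer coupling only at the level of Fig. 6. As a hedged illustration (the function names are hypothetical, and this is not the HAPI C++ API), one simple way to apply, at each playback instant, the force sample recorded for that time is a zero-order hold over timestamped samples:

```python
import bisect

def force_at(timestamps, forces, t):
    """Return the force sample active at playback time t (seconds).

    Holds the most recent sample at or before t (zero-order hold);
    before the first sample, no force is applied.
    """
    i = bisect.bisect_right(timestamps, t) - 1
    if i < 0:
        return (0.0, 0.0, 0.0)
    return forces[i]

# Hypothetical recorded force track: (fx, fy, fz) tuples per timestamp.
times = [0.0, 0.5, 1.0]
samples = [(0.0, 0.0, 0.0), (0.3, 0.0, 0.1), (0.0, 0.2, 0.0)]
```

In a real player loop, `force_at` would be called with the video clock's current time and the result sent to the force-feedback device each haptic frame.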
Abstract
A method for automatically generating data describing haptic effects in motion data is proposed. For generating the haptic data, - a first input signal from a first sensor describing the rotation of the first sensor around its three axes, - a second input signal from a second sensor describing the three-axes acceleration of the second sensor, - a third input signal from a third sensor describing the geographic orientation of the third sensor, and - at least one of an audio and video input signal are received. Then, the three sensor input signals and the at least one of an audio and video input signal are synchronized. From the synchronized data, data describing haptic effects are generated.
Description
Methods to command a haptic renderer from real motion data
The present invention relates to a method for automatically generating data describing haptic effects in motion data. Further, the invention relates to an apparatus for
generating data describing haptic effects and to an
apparatus for rendering data describing haptic effects.
New technology developments allow the creation of more and more immersive multimedia systems. 3D images and sound spatialization are now present in the end-user's living space. But these systems are still limited to the stimulation of two senses, sight and hearing, while research in virtual reality has shown that haptic perception seems to be strongly connected to the feeling of immersion - M. Reiner, 'The Role of Haptics in Immersive Telecommunication Environments', IEEE Transactions on Circuits and Systems for Video Technology, 14(3):392-401, Mar. 2004. In line with this work, the invention especially focuses on the way to enhance a non-synthetic video viewing experience with 'realistic' haptic effects in a consumer context. More precisely, a framework is developed to make the user feel the motion embedded in the multimedia (audio/video) content he is watching. As detailed in the pioneering theoretical work of S. O'Modhrain and I. Oakley, 'Touch TV: Adding feeling to broadcast media', In Proceedings of the 1st European Conference on Interactive Television: From Viewers to Actors, 2003, addressing such a problem generally raises two typical issues.
The first issue deals with the content creation, i.e. how the haptic effects can be produced in order to add a 'motion effect' to the audiovisual stream. According to O'Modhrain and Oakley there are two ways to create such effects: off-line creation and real-time generation. In the first case, haptic effects are synthesized and the editor of the content manually adds effects to the media. In the second case, the effects are directly captured from physical sensors using a specific device. In the context of motion rendering, these devices may be i) external sensors such as Inertial Measurement Units (IMU) (see A. Brady, B. MacDonald, I. Oakley, S. Hughes, and S. O'Modhrain, 'Relay: a futuristic interface for remote driving', In Proceedings of Eurohaptics, 2002, or R. Slyper and J. K. Hodgins, 'Action capture with accelerometers', Proceedings of the 2008 ACM SIGGRAPH, 2008, for examples in the context of radio-controlled cars and actor modelling respectively) or ii) the camera used to record the scene combined with motion extraction techniques (see W. Hu, T. Tan, L. Wang, and S. Maybank, 'A Survey on Visual Surveillance of Object Motion and Behaviors', IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews), 34(3):334-352, Aug. 2004). It is possible to help the extraction algorithms by adding markers into the filmed scene (L. Sigal and M. J. Black, 'HumanEva: Synchronized Video and Motion Capture Dataset for Evaluation of Articulated Human Motion', Brown University TR, 2006) or by using a camera enhanced with infra-red capabilities (e.g. the Microsoft Kinect device).
The second issue refers to the visualization of the
content, more precisely to the rendering of the haptic cues. The haptic feedback should be rendered by a
technology able to produce a wide range of haptic
sensations. Moreover, several constraints might appear if the content has to be displayed in a user's living space and should potentially be shared. In the context of motion simulation, motion simulators are well-known devices designed to make the user feel motion. They are intensively used as driving or flight simulators for learning purposes or in amusement parks. Most of them are based on the Stewart platform (see B. Dasgupta, 'The Stewart platform manipulator: a review', Mechanism and Machine Theory, 35(1):15-40, Jan. 2000). It is a 6-DOF platform moving thanks to 6 hydraulic cylinders. In order to generate a displacement over a longer distance, the platform may be mounted on rails (see L. Nehaoua, H. Mohellebi, A. Amouri, H. Arioui, S. Espie, and A. Kheddar, 'Design and Control of a Small-Clearance Driving Simulator', IEEE Transactions on Vehicular Technology, 57(2):736-746, 2008). Motion platforms are very immersive but they remain expensive for end-user customers and are not designed to be integrated in a user's living space. In a less invasive way, the sensation of motion can also be induced by a force-feedback device. In patent application WO2011/032937, a force is applied to the user's hand while he is watching a synthetic visual stimulus. While the interface is pulling the hand, the user feels as if moving forward. Such a technique takes advantage of its low cost and of its convenient size to be compatible with consumer applications.
It is an object of the invention to improve the above mentioned solutions to generate and render haptic effects.
According to the invention, a method for automatically generating data describing haptic effects in motion data is proposed. For generating the haptic data,
- a first input signal from a first sensor describing the rotation of the first sensor around its three axes,
- a second input signal from a second sensor describing the three axes acceleration of the second sensor,
- a third input signal from a third sensor describing the geographic orientation of the third sensor, and
- at least one of an audio and video input signal are received. Then, the three sensor input signals and the at least one of an audio and video input signal are
synchronized. From the synchronized data, data describing haptic effects are generated. Preferably, the first, second
and third sensor signals are real-time signals.
Advantageously, the at least one of an audio and video input signal are real-time signals. This has the advantage that motion data are generated from real-time signals. The data are sensed on a real object; they are not generically generated data. Thus, when rendering the motion data, the user gets a feeling closely related to reality. Further, this is advantageous over the above mentioned prior art because the method according to the invention focuses on both producing haptic effects and rendering haptic effects, and is designed to enhance a video viewing session and realistic scenes (i.e. natural scenes) captured from a real live experience. Thus, the invention provides a way to enhance a non-synthetic/natural film viewing session in a consumer context. The viewer feels what the main actor is currently feeling in terms of motion, and the viewing experience of the user is enhanced. In a simpler approach, it is also sufficient to use at least one of the first, second and third sensor signals and the at least one of the audio and video input signals, wherein at least the sensor signal describing the acceleration has to be used for generating the data
describing haptic effects. If only the acceleration signal and the at least one of audio and video signal is used for the method, alignment of the apparatus performing the method with regard to the ground is assumed. This means that the z-axis of the acceleration sensor is perpendicular to the surface of the earth.
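Under this ground-alignment assumption, gravity reduces to a constant offset on the z-axis, so no orientation estimate is needed. A minimal sketch (the function name and the use of g = 9.81 m/s² are illustrative assumptions, not specified by the patent):

```python
G = 9.81  # standard gravitational acceleration (m/s^2), an assumed constant

def linear_acceleration_aligned(acc):
    """With the sensor's z-axis assumed perpendicular to the ground,
    gravity appears only on z; subtract the constant offset to keep
    just the motion-induced acceleration."""
    ax, ay, az = acc
    return (ax, ay, az - G)
```

A device at rest would then report (0, 0, 0), and any residual values reflect actual motion rather than gravity.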
Preferably, the gravitational component is removed from the sensor input signals to generate manipulated sensor signals for rendering a motion effect. The gravitational component is included in the signal describing rotation and the signal describing acceleration and leads to computational errors when generating the data describing haptic effects. By removing this component from the sensor data, more realistic data describing haptic effects can be generated, which are closer to the real feeling of a person being part of a real audio/video scene currently viewed by a viewer. Thus, this leads to an even more immersive experience.
Advantageously, the first sensor is a gyroscope, the second sensor is an accelerometer and the third sensor is an electronic compass.
Preferably, after generating the data describing haptic effects, the data are rendered by a rendering device and the haptic effects are communicated to the user/viewer this way.
Advantageously, to further enhance the data describing haptic effects, the data are filtered and thresholded. In addition, the data might also be manipulated by input data provided manually. This has the advantage that the data processed this way give a more realistic feeling after rendering.
According to the invention, an apparatus for generating data for describing haptic effects is proposed. The
apparatus comprises a gyroscope for generating a first signal describing the rotation of the gyroscope around its three axes, an accelerometer for generating a second signal describing the three-axes acceleration of the accelerometer, an electronic compass for generating a third signal describing the geographic orientation, and an audio/video capturing device for generating at least one of an audio and video signal. Further, a synchronizer for synchronizing the first, second and third signal with the at least one of an audio and video signal and a generator for generating data describing haptic effects by using the first, second
and third signal and the at least one of an audio and video signal are used.
Preferably, the apparatus comprises a subtractor for removing the gravitational component from the first and second signal.
Advantageously, a rendering device for rendering the data describing haptic effects is employed to interface between the apparatus and the user/viewer and to communicate the haptic effects to the user.
Advantageously, the apparatus comprises an input for receiving manually provided input data for modifying the automatically generated data describing haptic effects or for modifying the data used for automatically generating haptic effects.
According to a further aspect of the invention, an
apparatus for rendering haptic effects is proposed. The apparatus includes a first input for receiving a first signal describing a rotation, a second input for receiving a second signal describing an acceleration, a third input for receiving a third signal describing a geographic orientation and a fourth input for receiving at least one of audio and video data. Further, the apparatus includes a generator for generating data describing haptic effects by using the first, second and third signal and the at least one of an audio and video signal and a renderer for rendering the data describing haptic effects.
Preferably, the apparatus includes a synchronizer for synchronizing the first, second and third signal with the at least one of an audio and video signal.
For better understanding the invention shall now be explained in more detail in the following description with reference to the figures. It is understood that the invention is not limited to this exemplary embodiment and that specified features can also expediently be combined and/or modified without departing from the scope of the present invention.
Fig. 1 shows a brief overview about the steps necessary to transform real feelings in haptic effects
Fig. 2 shows a technical system for audio/video and
motion capturing
Fig. 3 shows a data processing unit according to the
invention
Fig. 4 shows a flow diagram for processing the motion data
Fig. 5 shows a rendering device
Fig. 6 shows a haptic application program interface for controlling the rendering device
Fig. 1 shows a brief overview of how the real-life motions of e.g. an actor can be captured and transformed into data describing haptic effects, which in turn can be rendered and communicated to a viewer or user.
The complete framework includes a technical system for audio/video and motion capturing. The system should be transportable, so that the real actor can wear it during a shoot or the system accompanies the actor in another way. Fig. 2 shows such a capturing system. Further, the system includes a processing unit for processing and transforming the original captured signals representing the motion and manipulating the data such that they can be rendered using a rendering device. The system includes at least a device for transforming the data describing the haptic effects into forces which can be recognized by the user, which is known as a motion rendering device in the art (also called a haptic interface or force-feedback device). The haptic rendering device is an interface based on haptic stimuli. Fig. 5 shows such a device.
Fig. 2 shows a technical system for audio/video and motion capturing. Fig. 2a shows an Inertial Measurement Unit (IMU) to record the motion of the main actor, combining a gyroscope for quantifying the rotation of the IMU around its axes, an accelerometer for quantifying the three axes acceleration of the IMU and an electronic compass for capturing the geographic orientation with regard to the northward direction by estimating the local magnetic field.
The system is also provided for recording at least one of audio and video content. Thus, the system includes a camera and/or a microphone. Fig. 2b shows a Camsports HDS-720p camera as an example.
Fig. 2c shows a prototype system. The system includes a storage medium for storing the raw data and/or a transmission device for sending the raw data to the processing unit. Further, the prototype system includes an energy source, e.g. a battery. The system might be robust and also appropriate for outdoor use under rough environmental conditions.
For capturing motion and audio/video data, the data have to be synchronized. This might be done by a synchronization device while capturing the data, or in a later step during data processing. For synchronizing the motion and audio/video data during data processing, three little pats are given on the prototype before each record, which causes sharp, large peaks in both the acceleration signals and the audio stream. In a later processing step, these peaks are either manually or automatically detected and used for synchronization by matching the corresponding peaks.
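The pat-based synchronization step above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: it assumes the audio stream and the acceleration magnitude are 1-D NumPy arrays with known sample rates, and that the pats produce the dominant peaks early in each recording.

```python
import numpy as np

def detect_pat_peaks(signal, threshold_ratio=0.6, num_peaks=3, min_gap=100):
    """Return indices of the first `num_peaks` samples exceeding a relative threshold."""
    magnitude = np.abs(signal)
    threshold = threshold_ratio * magnitude.max()
    peaks, i = [], 0
    while i < len(magnitude) and len(peaks) < num_peaks:
        if magnitude[i] > threshold:
            peaks.append(i)
            i += min_gap          # skip ahead so each pat contributes one peak
        else:
            i += 1
    return peaks

def sync_offset(audio, audio_rate, accel_mag, accel_rate):
    """Seconds to shift the motion stream so the first pat peaks coincide."""
    t_audio = detect_pat_peaks(audio)[0] / audio_rate
    t_accel = detect_pat_peaks(accel_mag)[0] / accel_rate
    return t_audio - t_accel
```

In practice all three peak pairs would be matched (e.g. by averaging the per-pair offsets) to make the estimate robust against spurious spikes.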
Fig. 3 shows a data processing unit (1) for processing and transforming the original captured signals representing the motion and manipulating the data such that they can be rendered using a rendering device. The processing unit (1) has four inputs for receiving data: the first input (11) is for receiving a first input signal (rot) from the gyroscope describing the rotation of the gyroscope around its three axes. The second input (12) is for receiving a second input signal (acc) from the accelerometer describing the three axes acceleration of the accelerometer. The third input (13) is for receiving a third input signal (geo) from the electronic compass describing the geographic orientation with regard to the northward direction. The fourth input (14) is for receiving at least one of an audio and video input signal (A/V). Further, the processing unit (1) contains a synchronizer (2) for synchronizing the first input signal (rot), the second input signal (acc) and the third input signal (geo) with the at least one of an audio and video signal (A/V). The synchronizer (2) outputs a synchronized first signal describing the rotation, a synchronized second signal describing the acceleration and a synchronized third signal describing the northward direction; the three signals are input to a generator (3) that generates data describing haptic effects, which are used for commanding a rendering device as shown in Fig. 5, which is connected to the first output (21) of the processing unit (1). The synchronized at least one of an audio and video signal is sent to a display device via a second output (22).
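Once the offset between the streams is known, synchronization amounts to shifting the sensor signals relative to the A/V track. A hypothetical sketch of the synchronizer's behavior (the patent defines no API; all names here are illustrative):

```python
import numpy as np

def align_stream(samples, rate, offset_s):
    """Drop the first `offset_s` seconds of a sensor stream.

    Assumes the motion recording starts earlier than the A/V recording;
    a negative offset would instead require padding (not shown).
    """
    return samples[int(round(offset_s * rate)):]

def synchronize(rot, acc, geo, rate, offset_s):
    """Apply the same alignment to the rotation, acceleration and heading streams."""
    return tuple(align_stream(s, rate, offset_s) for s in (rot, acc, geo))
```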
Fig. 4 schematically discloses further processing steps of the first, second and third input signals (11, 12, 13). In step 101, axis permutation is performed on the signals to align the axes of the accelerometer with the axes of the rendering device, which operates as an interface to the user's hand. In step 102, data scaling is performed. The scaling of the raw data is necessary to adapt the amplitude of the raw signals to the input range of the rendering device. The scaling factor is e.g. empirically determined according to experimental feedback. The gravitational component is removed from the signals in step 103. Removing the gravitational component included in the raw acceleration is important because, compared with the other external sources of acceleration, it masks useful information for rendering a better motion feeling. Removing this specific contribution therefore enhances the user experience. A two-step methodology is applied to suppress this component from the original signal. In a first step, the orientations of the three sensors are estimated using the information of the three sensors (gyroscope, accelerometer and electronic compass) and the balance filter method (see S. Colton, 'The balance filter: A simple solution for integrating accelerometer and gyroscope measurements for a balancing platform', 2001, for a complete description of the method). In a second step, the direction of gravity is estimated and the current 3-axis acceleration vector is updated accordingly by removing the constant gravity component in the estimated direction. Further, data filtering is applied in step 104 for smoothing the raw data. Data thresholding is applied in step 105 to avoid unexpected signal peaks due to recording artefacts, and manually reshaping the raw data by applying authoring techniques is shown in step 106. The last step allows the inclusion of personalized effects. These steps might be done within the processing unit disclosed in Fig. 3 or in a separate unit.
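Steps 101 to 105 can be sketched for the acceleration signal as below. The gravity removal here is a simplification of the two-step method described in the text: it assumes the estimated gravity direction is the fixed device z-axis instead of tracking the orientation over time, and the permutation, scaling, smoothing and clipping values are illustrative tuning choices, not values taken from the patent. A one-axis balance (complementary) filter is included to illustrate the orientation-estimation idea.

```python
import numpy as np

def balance_filter(gyro_rate, accel_angle, dt, alpha=0.98):
    """Complementary ('balance') filter for one axis: fuse the integrated
    gyroscope rate with the accelerometer tilt estimate."""
    angles = np.empty(len(gyro_rate))
    angle = accel_angle[0]                        # initialise from the accelerometer
    for i, (rate, a) in enumerate(zip(gyro_rate, accel_angle)):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * a
        angles[i] = angle
    return angles

def process_acceleration(acc, perm=(0, 1, 2), scale=0.1, g=9.81,
                         clip=1.0, smooth=5):
    """Steps 101-105 on an (N, 3) raw acceleration array (illustrative values)."""
    x = acc[:, perm]                              # 101: axis permutation
    x = x * scale                                 # 102: scale to device input range
    x = x - np.array([0.0, 0.0, g]) * scale       # 103: remove gravity (z assumed)
    kernel = np.ones(smooth) / smooth             # 104: moving-average smoothing
    x = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, x)
    return np.clip(x, -clip, clip)                # 105: threshold artefact peaks
```

Step 106 (manual reshaping through authoring tools) is inherently interactive and is therefore left out of the sketch.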
Then, the data are processed to be compatible with the input of the motion rendering device (shown in Fig. 5) to communicate a motion to the user. The haptic rendering device is the interface between the processing unit and the user, transferring the motion available as a physical data signal into forces the user can register. For example, a force is applied to the user's hand while the user is watching a visual stimulus. The orientation and direction of the force are highly correlated with the motion embedded in the audio/video content that the user is watching, which creates an illusion of motion. The device shown by way of example in Fig. 5 is a Novint Falcon device. This is a 3-DOF force-feedback device, which is able to apply a force along three axes. It is typically designed to interact with 3D objects within virtual reality applications. Sensations of motion might also be produced with this device. Other rendering devices are, by way of example, moving seats or smartphones.
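A hypothetical mapping from one processed acceleration sample to a clamped 3-axis force command for such a device might look as follows; the gain and force limit are illustrative tuning values, not Falcon specifications.

```python
def acceleration_to_force(acc_sample, gain=0.5, max_force=8.0):
    """Scale a 3-axis acceleration sample into a force command, clamped
    to the assumed output range of the force-feedback device."""
    return tuple(max(-max_force, min(max_force, gain * a)) for a in acc_sample)
```

Clamping guards the device against the residual artefact peaks that thresholding (step 105) may have let through.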
Fig. 6 shows an open source haptic application program interface (HAPI - http://www.h3dapi.org/) as an exemplary interface to control the rendering device. Such an interface contains a program library allowing the creation of force vectors that are rendered on a force-feedback device. Fig. 6 also includes an integrated player for displaying the audio/video content on a standard screen while applying the forces to the rendering device in a synchronized way.
Claims
1. Method for automatically generating data describing haptic effects in motion data characterized by the steps
- receiving a first input signal from a first sensor describing the rotation of the first sensor around its three axes;
- receiving a second input signal from a second sensor describing the three axes acceleration of the second sensor;
- receiving a third input signal from a third sensor describing the geographic orientation of the third sensor;
- receiving at least one of an audio and video input signal;
- synchronizing the three sensor input signals and the at least one of an audio and video input signal;
- generating data describing haptic effects by using the received sensor signals.
2. Method for automatically generating data describing haptic effects in motion data according to claim 1, wherein the first, second and third sensor signals are real-time signals.
3. Method for automatically generating data describing haptic effects in motion data according to one of claims 1 to 2, further characterized by the step of removing the gravitational component from the sensor input signals to generate manipulated sensor signals for rendering a motion effect.
4. Method for automatically generating data describing haptic effects in motion data according to one of claims 1 to 3, wherein the first sensor is a gyroscope, the second sensor is an accelerometer and the third sensor is an electronic compass.
5. Method for automatically generating data describing haptic effects in motion data according to one of claims 1 to 4, further characterized by the step of rendering the data describing haptic effects by a rendering device.
6. Method for automatically generating data describing haptic effects in motion data according to one of claims 1 to 5, further characterized by the steps of data filtering and data thresholding.
7. Method for automatically generating data describing haptic effects in motion data according to one of claims 1 to 6, further characterized by the step of receiving manually provided input data for modifying the
automatically generated data describing haptic effects.
8. Apparatus for generating data for describing haptic effects, including:
a gyroscope for generating a first signal describing the rotation of the gyroscope around its three axes;
an accelerometer for generating a second signal describing the three axes acceleration of the accelerometer;
an electronic compass for generating a third signal describing the geographic orientation;
an audio/video capturing device for generating at least one of an audio and video signal;
a synchronizer for synchronizing the first, second and third signal with the at least one of an audio and video signal;
a generator for generating data describing haptic effects by using the first, second and third signal and the at least one of an audio and video signal.
9. Apparatus for generating data for describing haptic effects according to claim 8, further including: a subtractor for removing the gravitational component from the first and second signal.
10. Apparatus for generating data for describing haptic effects according to claim 8 or 9, further including:
a rendering device for rendering the data describing haptic effects.
11. Apparatus for generating data for describing haptic effects according to one of claims 8 to 10, further including:
an input for receiving manually provided input data for modifying the automatically generated data describing haptic effects.
12. Apparatus for rendering haptic effects, including:
a first input for receiving a first signal describing a rotation;
a second input for receiving a second signal describing an acceleration;
a third input for receiving a third signal describing a geographic orientation;
a fourth input for receiving at least one of audio and video data;
a generator for generating data describing haptic effects by using the first, second and third signal and the at least one of an audio and video signal;
a renderer for rendering the data describing haptic effects.
13. Apparatus for rendering haptic effects according to claim 12, further including:
a synchronizer for synchronizing the first, second and third signal with the at least one of an audio and video signal.
14. Method for automatically generating data describing haptic effects in motion data characterized by the steps
- receiving at least one of a first input signal from a first sensor describing the rotation of the first sensor around its three axes, a second input signal from a second sensor describing the three axes acceleration of the second sensor and a third input signal from a third sensor describing the geographic orientation of the third sensor, wherein at least the signal describing the three axes acceleration is received;
- receiving at least one of an audio and video input signal;
- synchronizing the at least one of a first, second and third sensor input signals and the at least one of an audio and video input signal;
- generating data describing haptic effects by using the received sensor signals.
15. Method for automatically generating data describing haptic effects in motion data according to one of claims 1 to 7, further characterized in that the synchronizing step is performed by detecting peaks in the audio and the acceleration signals, the peaks being caused by pats given to the system, and matching the time axes of the signals according to the detected peaks.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP11306178.2 | 2011-09-19 | ||
EP11306178 | 2011-09-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013041152A1 true WO2013041152A1 (en) | 2013-03-28 |
Family
ID=45390104
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2011/073704 WO2013041152A1 (en) | 2011-09-19 | 2011-12-21 | Methods to command a haptic renderer from real motion data |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2013041152A1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011011737A1 (en) * | 2009-07-24 | 2011-01-27 | Digimarc Corporation | Improved audio/video methods and systems |
WO2011032937A2 (en) | 2009-09-17 | 2011-03-24 | Centre National De La Recherche Scientifique | Method for simulating specific movements by haptic feedback, and device implementing the method |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9179854B2 (en) | 2005-05-16 | 2015-11-10 | Mark S. Doidge | Three-dimensional localization, display, recording, and analysis of electrical activity in the cerebral cortex |
EP2958107A1 (en) * | 2014-06-16 | 2015-12-23 | Immersion Corporation | Systems and methods for foley-style haptic content creation |
US10503263B2 (en) | 2014-06-16 | 2019-12-10 | Immersion Corporation | Systems and methods for Foley-style haptic content creation |
US10139907B2 (en) | 2014-06-16 | 2018-11-27 | Immersion Corporation | Systems and methods for foley-style haptic content creation |
US9864871B2 (en) | 2015-01-24 | 2018-01-09 | International Business Machines Corporation | Masking of haptic data |
WO2016124580A1 (en) * | 2015-02-03 | 2016-08-11 | Thomson Licensing | Method, apparatus and system for synchronizing audiovisual content with inertial measurements |
KR20170109573A (en) * | 2015-02-03 | 2017-09-29 | 톰슨 라이센싱 | Method, apparatus and system for synchronizing inertial measurements and audiovisual content |
CN107210054A (en) * | 2015-02-03 | 2017-09-26 | 汤姆逊许可公司 | For simultaneous audio and visual content and the methods, devices and systems of inertia measurement |
JP2018505478A (en) * | 2015-02-03 | 2018-02-22 | トムソン ライセンシングThomson Licensing | Method, apparatus and system for synchronizing audiovisual content with inertial measurement |
US10249341B2 (en) | 2015-02-03 | 2019-04-02 | Interdigital Ce Patent Holdings | Method, apparatus and system for synchronizing audiovisual content with inertial measurements |
EP3054451A1 (en) * | 2015-02-03 | 2016-08-10 | Thomson Licensing | Method, apparatus and system for synchronizing audiovisual content with inertial measurements |
KR102401673B1 (en) * | 2015-02-03 | 2022-05-25 | 인터디지털 씨이 페이튼트 홀딩스, 에스에이에스 | Method, apparatus and system for synchronizing inertial measurements with audiovisual content |
WO2017053761A1 (en) * | 2015-09-25 | 2017-03-30 | Immersion Corporation | Haptic effects design system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11799446 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 11799446 Country of ref document: EP Kind code of ref document: A1 |