WO2013008869A1 - Electronic device and data generation method - Google Patents

Electronic device and data generation method

Info

Publication number
WO2013008869A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
unit
subject
audio
vibration
Prior art date
Application number
PCT/JP2012/067757
Other languages
English (en)
Japanese (ja)
Inventor
八木 健
Original Assignee
Nikon Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nikon Corporation
Publication of WO2013008869A1

Links

Images

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00: Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/04: Time compression or expansion
    • G10L 21/055: Time compression or expansion for synchronising with other signals, e.g. video signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 1/00: Substation equipment, e.g. for use by subscribers
    • H04M 1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 2250/00: Details of telephonic subscriber devices
    • H04M 2250/52: Details of telephonic subscriber devices including functional features of a camera
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/76: Television signal recording
    • H04N 5/765: Interface circuits between an apparatus for recording and another apparatus
    • H04N 5/77: Interface circuits between a recording apparatus and a television camera
    • H04N 5/772: Interface circuits between a recording apparatus and a television camera, the recording apparatus and the television camera being placed in the same enclosure
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00: Details of colour television systems
    • H04N 9/79: Processing of colour television signals in connection with recording
    • H04N 9/80: Transformation of the television signal for recording, e.g. modulation, frequency changing; inverse transformation for playback
    • H04N 9/82: Transformation of the television signal for recording, the individual colour picture signal components being recorded simultaneously only
    • H04N 9/8205: Transformation of the television signal for recording, involving the multiplexing of an additional signal and the colour video signal

Definitions

  • The present invention relates to an electronic device and a data generation method.
  • This application claims priority based on Japanese Patent Application No. 2011-155586, filed on July 14, 2011, the content of which is incorporated herein.
  • Conventionally, a technique for generating an audio signal corresponding to a video is known (see, for example, Patent Document 1).
  • In this technique, color information of a predetermined detection area in a video is detected, and an audio signal is generated using a predetermined tone color corresponding to the color information.
  • However, the technique of Patent Document 1 has a problem in that an audio signal is generated only with a timbre corresponding to the color information in the image, and the movement of a subject in the image is not taken into consideration.
  • An object of an aspect of the present invention is to provide an electronic device and a data generation method that can generate audio data and vibration data according to the movement of a subject in a moving image.
  • One aspect of the present invention is an electronic device comprising: an analysis unit that analyzes the movement of a subject in a moving image; and a generation unit that generates audio data according to the movement of the subject analyzed by the analysis unit, and generates vibration data according to the generated audio data.
  • Another aspect of the present invention is an electronic device comprising: an analysis unit that analyzes the movement of a subject in a moving image; and a generation unit that generates audio data according to the movement of the subject analyzed by the analysis unit, and generates vibration data according to the movement of the subject analyzed by the analysis unit.
  • FIG. 1 is a block diagram showing a configuration of an electronic device 1 according to the first embodiment of the present invention.
  • The electronic device 1 is a portable information terminal such as a mobile phone, a smartphone, or a digital camera.
  • The electronic device 1 includes a control unit 101, an imaging unit 102, a microphone 103, a data storage unit 104, an image analysis unit 105, a generation unit 106, a library (storage unit) 107, an audio output unit 108, a vibration unit 109, and a display unit 110.
  • The imaging unit 102 images a subject and generates image data. For example, the imaging unit 102 outputs image data of a captured still image in response to a still image shooting operation, and outputs image data of a moving image captured continuously at a predetermined interval in response to a moving image shooting operation. Still image data and moving image data captured by the imaging unit 102 are recorded in the data storage unit 104 under the control of the control unit 101. In a shooting standby state in which no shooting operation is performed, the imaging unit 102 outputs image data obtained continuously at a predetermined interval as through image data (a through image), which is displayed on the display unit 110 under the control of the control unit 101. The microphone 103 collects sound and generates audio data corresponding to the collected sound.
  • The data storage unit 104 stores moving image data, moving image audio data, multimedia data, and the like.
  • The moving image audio data is data including moving image data and audio data that is temporally synchronized with the moving image data.
  • Multimedia data is data including moving image data, audio data that is temporally synchronized with the moving image data, and vibration data that is temporally synchronized with the moving image data.
  • The control unit 101 controls each part of the electronic device 1 in an integrated manner. For example, the control unit 101 generates moving image audio data by temporally synchronizing the moving image data generated by the imaging unit 102 with the audio data collected by the microphone 103, and writes the generated moving image audio data to the data storage unit 104. The control unit 101 also controls the image analysis unit 105 and the generation unit 106 to generate audio data and vibration data corresponding to the moving image data, and generates multimedia data by temporally synchronizing the generated audio data and vibration data with the moving image data. Further, the control unit 101 reads multimedia data from the data storage unit 104, and controls the display unit 110, the audio output unit 108, and the vibration unit 109 to reproduce the read multimedia data.
  • The image analysis unit 105 analyzes the movement of the subject in the moving image data, and outputs the analyzed movement of the subject to the generation unit 106.
  • The library 107 is a storage unit that stores audio element data corresponding to each movement of the subject.
  • The generation unit 106 generates audio data corresponding to the movement of the subject analyzed by the image analysis unit 105. Specifically, the generation unit 106 reads audio element data corresponding to the movement of the subject from the library 107, and generates audio data based on the read audio element data.
  • The generation unit 106 also generates vibration data corresponding to the generated audio data. Specifically, the generation unit 106 converts the audio data into vibration data using a predetermined conversion formula.
  • For example, the generation unit 106 generates the vibration data so that vibration occurs at positions on the time axis of the audio data where the amplitude of the sound exceeds a predetermined value. The generation unit 106 then generates multimedia data by temporally synchronizing the image data of the moving image, the generated audio data, and the generated vibration data, and writes the generated multimedia data to the data storage unit 104.
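  • The conversion formula itself is not specified in the publication; the following is a minimal sketch of one plausible reading, assuming mono PCM audio and an actuator driven by a low-rate intensity envelope (the function name, rates, and threshold are illustrative, not from the source):

```python
import numpy as np

def audio_to_vibration(audio, sample_rate, vib_rate=200, threshold=0.25):
    """Hypothetical 'predetermined conversion formula': rectify the audio,
    average it over short windows, and keep only the portions whose
    envelope exceeds a threshold, so vibration occurs where the sound
    is loud on the time axis."""
    window = sample_rate // vib_rate                  # audio samples per vibration frame
    n_frames = len(audio) // window
    rectified = np.abs(np.asarray(audio, dtype=np.float64)[:n_frames * window])
    envelope = rectified.reshape(n_frames, window).mean(axis=1)
    peak = envelope.max() if envelope.size else 1.0
    envelope = envelope / (peak or 1.0)               # normalize to the 0..1 range
    envelope[envelope < threshold] = 0.0              # vibrate only above the threshold
    return envelope                                   # one intensity value per vibration frame
```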
  • The display unit 110 is a display such as a liquid crystal display.
  • The display unit 110 displays image data.
  • The audio output unit 108 outputs audio corresponding to the audio data.
  • The audio output unit 108 includes a codec that converts digital audio data into analog form, and a speaker that outputs the converted analog audio signal.
  • The vibration unit 109 generates vibration according to the vibration data.
  • The vibration unit 109 includes a vibration signal generation unit that drives a vibration device based on the vibration data, and a vibration device, such as a linear vibration actuator, that generates the vibration.
  • FIG. 2 is a diagram for explaining the multimedia data generation method according to the present embodiment.
  • FIG. 3 is a flowchart showing the procedure of the multimedia data generation processing according to the present embodiment.
  • FIG. 2 shows a moving image of the main subject (person) T running.
  • First, the control unit 101 reads moving image data from the data storage unit 104 and outputs the read moving image data to the image analysis unit 105 to instruct generation of multimedia data.
  • The image analysis unit 105 extracts the main subject T in the moving image data (step S101). For example, the image analysis unit 105 extracts a person by pattern matching and sets the extracted person as the main subject T. If a plurality of persons are extracted, the image analysis unit 105 sets the person closest to the center of the image data as the main subject T, or sets a specific person as the main subject T by face recognition; in the latter case, the electronic device 1 stores data on the face of that person in advance. Alternatively, the image analysis unit 105 may take as the main subject T not only a person but also an object located near the center of the image or an object that frequently appears in the moving image.
  • The image analysis unit 105 analyzes the movement of the extracted main subject T (step S102). Specifically, the image analysis unit 105 performs pattern matching between motion patterns stored in advance (for example, running, jumping, etc.) and the motion of the main subject T in the moving image, thereby determining the motion of the main subject. In this example, the image analysis unit 105 determines that the main subject T is running. Further, the image analysis unit 105 extracts the timing at which the foot of the main subject T lands on the ground (its time position in the moving image) by, for example, vector analysis of the foot motion. When performing extraction by vector analysis, the image analysis unit 105 takes the timing at which the direction of the motion vector changes by more than a predetermined value as the timing at which the foot reaches the ground.
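  • The publication does not detail the vector analysis; the following is a rough sketch of the direction-change test described above, assuming per-frame motion vectors of the tracked foot region are already available (for example, from optical flow), with an illustrative angle threshold:

```python
import numpy as np

def landing_timestamps(motion_vectors, fps, angle_threshold_deg=90.0):
    """Return timestamps (in seconds) at which the foot motion vector
    changes direction by more than the threshold, taken here as the
    timing at which the foot reaches the ground.

    motion_vectors: array of shape (n_frames, 2) holding the per-frame
    (dx, dy) displacement of the tracked foot region.
    """
    timestamps = []
    vectors = np.asarray(motion_vectors, dtype=np.float64)
    for i in range(1, len(vectors)):
        v0, v1 = vectors[i - 1], vectors[i]
        n0, n1 = np.linalg.norm(v0), np.linalg.norm(v1)
        if n0 < 1e-6 or n1 < 1e-6:
            continue                                  # skip near-stationary frames
        cos_angle = np.clip(np.dot(v0, v1) / (n0 * n1), -1.0, 1.0)
        if np.degrees(np.arccos(cos_angle)) > angle_threshold_deg:
            timestamps.append(i / fps)                # sharp turn: treat as a landing
    return timestamps
```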
  • Next, the generation unit 106 generates audio data corresponding to the movement of the main subject T (step S103). Specifically, the generation unit 106 first reads audio element data corresponding to the movement of the main subject T from the library 107, and generates audio data based on the read audio element data. In this example, the generation unit 106 reads the footstep sound for running ("tap") from the library 107, and generates the audio data so that the footstep "tap" sounds at each timing when the foot of the main subject T lands on the ground. As a result, audio data of a "tap-tap-tap" sound that matches the movement of the main subject T in the moving image is generated.
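  • As an illustration of step S103, assuming the audio element data is a short PCM sample and the landing timestamps come from the motion analysis (all names here are hypothetical):

```python
import numpy as np

def place_footsteps(footstep, timings_s, duration_s, sample_rate):
    """Build an audio track by overlaying one footstep sample ('tap') at
    each landing timestamp, yielding the 'tap-tap-tap' track.
    Assumes float PCM samples in the range [-1, 1]."""
    track = np.zeros(int(duration_s * sample_rate), dtype=np.float64)
    sample = np.asarray(footstep, dtype=np.float64)
    for t in timings_s:
        start = int(t * sample_rate)
        if start >= len(track):
            continue                                  # landing falls outside the track
        end = min(start + len(sample), len(track))
        track[start:end] += sample[:end - start]      # mix the sample into the track
    return np.clip(track, -1.0, 1.0)                  # keep the mix within full scale
```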
  • Next, the generation unit 106 generates vibration data corresponding to the generated audio data (step S104).
  • For example, the generation unit 106 generates the vibration data so that vibration occurs at each timing when the footstep "tap" is output.
  • Finally, the generation unit 106 generates multimedia data by temporally synchronizing the moving image data, the generated audio data, and the generated vibration data (step S105).
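  • "Temporally synchronizing" here can be pictured as keeping the three tracks on one shared timeline; a toy container to that effect (the structure is illustrative, not the publication's format):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MultimediaData:
    """Toy container for step S105: video, audio, and vibration tracks
    that share a single timeline via their respective rates."""
    video_frames: np.ndarray   # (n_frames, height, width, 3) at video_fps
    video_fps: float
    audio: np.ndarray          # mono PCM at audio_rate
    audio_rate: int
    vibration: np.ndarray      # intensity envelope at vib_rate
    vib_rate: int

    def indices_at(self, t_seconds):
        """Indices of the video frame, audio sample, and vibration frame
        to present at time t, so playback stays aligned."""
        return (int(t_seconds * self.video_fps),
                int(t_seconds * self.audio_rate),
                int(t_seconds * self.vib_rate))
```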
  • When the multimedia data is reproduced, the control unit 101 displays the moving image data on the display unit 110, outputs the audio data to the audio output unit 108, and outputs the vibration data to the vibration unit 109.
  • The audio output unit 108 then reproduces the "tap-tap-tap" footsteps in accordance with the movement of the main subject T displayed on the display unit 110, and the vibration unit 109 generates vibration. That is, at each timing when the foot of the main subject T lands on the ground on the display unit 110, the footstep "tap" is reproduced and the electronic device 1 vibrates.
  • In the above example, multimedia data is generated from moving image data already stored in the data storage unit 104.
  • Alternatively, multimedia data may be generated from moving image data being captured by the imaging unit 102.
  • In this case, the control unit 101 sequentially outputs the image data of the moving image being captured by the imaging unit 102 to the image analysis unit 105.
  • Then, in response to an operation for ending the shooting of the moving image, the control unit 101 controls the generation unit 106 to generate the multimedia data of the moving image.
  • In this way, the user can obtain multimedia data to which vibration has been added merely by performing a moving image shooting operation.
  • In the above description, the generation unit 106 converts the audio data into vibration data by a predetermined conversion formula.
  • Alternatively, information related to vibration (for example, frequency, vibration amplitude, vibration time, etc.) may be stored in the library 107 in advance; the information related to vibration corresponding to the audio data (audio element data) is then read from the library 107, and the vibration data is generated based on the read information.
  • As described above, the generation unit 106 generates audio data and vibration data corresponding to the movement of the subject analyzed by the image analysis unit 105, and generates multimedia data by temporally synchronizing the audio data and the vibration data with the moving image data.
  • This makes it possible to generate audio data and vibration data that correspond to the movement of the subject in the moving image. Further, when the multimedia data is reproduced, sound is reproduced and vibration is generated in the electronic device 1 in accordance with the moving image, so the multimedia data can be viewed more enjoyably.
  • In addition, the three senses of video, sound, and vibration can help the viewer recall memories related to the video more clearly than video and sound alone.
  • In the second embodiment, the library 107 stores data relating to vibration corresponding to each movement of the subject.
  • The data relating to vibration includes, for example, frequency, amplitude, vibration time, and the like.
  • The generation unit 106 generates vibration data according to the movement of the subject. Specifically, the generation unit 106 reads the data relating to vibration according to the movement of the subject from the library 107, and generates the vibration data based on the read data. Since the other configurations are the same as those of the first embodiment, description thereof is omitted.
  • FIG. 4 is a flowchart showing a procedure of multimedia data generation processing according to this embodiment.
  • First, the control unit 101 reads moving image audio data from the data storage unit 104 and outputs the read moving image audio data to the image analysis unit 105 to instruct generation of multimedia data.
  • The image analysis unit 105 extracts the main subject T in the moving image data included in the moving image audio data (step S201). Next, the image analysis unit 105 analyzes the movement of the extracted main subject T (step S202). In this example, the image analysis unit 105 determines that the main subject T is running. Further, the image analysis unit 105 extracts the timing at which the foot of the main subject T lands on the ground (its time position in the moving image) by, for example, vector analysis of the foot motion.
  • Next, the generation unit 106 generates vibration data according to the movement of the main subject T (step S203). Specifically, the generation unit 106 reads the data related to vibration according to the movement of the main subject T from the library 107, and generates the vibration data based on the read data. In this example, the generation unit 106 generates the vibration data so that vibration according to the vibration-related data (for example, frequency, amplitude, vibration time, etc.) occurs at each timing when the foot of the main subject T lands on the ground. Alternatively, the vibration data may be generated based on the data obtained by vector analysis; in this case, the vibration data may be generated so that vibration occurs when the direction of the motion vector changes by more than a predetermined value. Finally, the generation unit 106 generates multimedia data by temporally synchronizing the generated vibration data with the moving image audio data (step S204).
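  • As a sketch of step S203, assuming the library's vibration-related data is a (frequency, amplitude, duration) triple and the landing timestamps come from the vector analysis (all parameter values are illustrative):

```python
import numpy as np

def vibration_from_motion(timings_s, duration_s, freq_hz=60.0,
                          amplitude=1.0, burst_s=0.1, vib_rate=1000):
    """Emit one sine burst, shaped by the vibration-related data
    (frequency, amplitude, vibration time), at each landing timestamp."""
    signal = np.zeros(int(duration_s * vib_rate))
    burst_t = np.arange(int(burst_s * vib_rate)) / vib_rate
    burst = amplitude * np.sin(2 * np.pi * freq_hz * burst_t)
    for t in timings_s:
        start = int(t * vib_rate)
        if start >= len(signal):
            continue                                  # landing falls outside the signal
        end = min(start + len(burst), len(signal))
        signal[start:end] += burst[:end - start]      # add a burst at the landing
    return np.clip(signal, -1.0, 1.0)
```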
  • When this multimedia data is reproduced, the vibration unit 109 generates vibration in accordance with the movement of the main subject T displayed on the display unit 110. That is, the electronic device 1 vibrates at each timing when the foot of the main subject T lands on the ground on the display unit 110.
  • According to the present embodiment, since the generation unit 106 generates the vibration data based on the analysis result of the image analysis unit 105, it is possible to generate vibration that more closely matches the movement of the subject.
  • FIG. 5 is a block diagram illustrating the configuration of the electronic device 2 according to the third embodiment.
  • The electronic device 2 according to the present embodiment includes a sound extraction unit 211 in addition to the configuration of the electronic device 1 shown in FIG. 1.
  • The sound extraction unit 211 extracts the sound corresponding to the movement of the main subject T from the audio data based on the analysis result of the moving image data by the image analysis unit 205, and outputs information about the extracted sound (for example, its time position in the audio data) to the generation unit 206.
  • The generation unit 206 newly generates audio data by raising the volume of the sound extracted by the sound extraction unit 211 in the audio data to at least a predetermined level set in advance.
  • The generation unit 206 then generates vibration data according to the generated audio data. Specifically, the generation unit 206 generates the vibration data so that vibration occurs at time positions where the volume of the sound is larger than a predetermined amount.
  • Alternatively, the audio data may be converted into vibration data by a predetermined conversion formula.
  • Then, the generation unit 206 generates multimedia data by temporally synchronizing the image data of the moving image, the generated audio data, and the generated vibration data. Since the other configurations are the same as those of the first embodiment, description thereof is omitted.
  • FIG. 6 is a flowchart showing a procedure of multimedia data generation processing according to this embodiment.
  • First, the control unit 201 reads moving image audio data from the data storage unit 204 and outputs the read moving image audio data to the image analysis unit 205 and the sound extraction unit 211 to instruct generation of multimedia data.
  • The image analysis unit 205 extracts the main subject T in the moving image data included in the moving image audio data (step S301).
  • Next, the image analysis unit 205 analyzes the movement of the extracted main subject T (step S302). In this example, the image analysis unit 205 determines that the main subject T is running. Further, the image analysis unit 205 extracts the timing at which the foot of the main subject T lands on the ground (its time position in the moving image) by, for example, vector analysis of the foot motion.
  • Next, the sound extraction unit 211 extracts the sound corresponding to the movement of the main subject T from the audio data included in the moving image audio data (step S303).
  • For example, the sound extraction unit 211 extracts the footstep sound of the main subject T by frequency analysis or the like, based on the extracted timing.
  • For this purpose, the electronic device 2 stores data relating to the frequency of footsteps in advance. That is, based on the analysis result by the image analysis unit 205, the sound extraction unit 211 extracts the sound occurring when the foot of the main subject T lands on the ground from the audio data as a footstep.
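  • The publication does not give the frequency analysis itself; the following is a minimal sketch of one way to read it, band-passing the audio around a hypothetical footstep band and keeping only short windows around the landings reported by the image analysis:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def extract_footsteps(audio, sample_rate, timings_s,
                      window_s=0.15, band_hz=(80.0, 600.0)):
    """Keep band-passed audio only near the landing timestamps.
    band_hz stands in for the pre-stored 'data relating to the
    frequency of footsteps'; both values are assumptions."""
    sos = butter(4, band_hz, btype="bandpass", fs=sample_rate, output="sos")
    band = sosfiltfilt(sos, np.asarray(audio, dtype=np.float64))
    mask = np.zeros(len(band), dtype=bool)
    half = int(window_s * sample_rate / 2)
    for t in timings_s:
        center = int(t * sample_rate)
        mask[max(0, center - half):center + half] = True
    return np.where(mask, band, 0.0)                  # footstep candidates only
```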
  • Next, the generation unit 206 generates audio data based on the sound extracted by the sound extraction unit 211 (step S304). Specifically, the generation unit 206 increases the volume of the sound extracted by the sound extraction unit 211 within the audio data included in the moving image audio data. In this example, the generation unit 206 increases the volume of the sound (the footstep) at each timing when the foot of the main subject T lands on the ground. That is, the generation unit 206 emphasizes the footsteps in the audio data included in the moving image audio data.
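  • A minimal sketch of this emphasis step (S304), assuming float PCM audio in [-1, 1] and an illustrative gain and window size:

```python
import numpy as np

def emphasize_footsteps(audio, sample_rate, timings_s,
                        gain=2.0, window_s=0.15):
    """Raise the volume of the recorded audio in a short window around
    each landing so the footsteps stand out in the final track."""
    out = np.asarray(audio, dtype=np.float64).copy()
    half = int(window_s * sample_rate / 2)
    for t in timings_s:
        center = int(t * sample_rate)
        out[max(0, center - half):center + half] *= gain   # boost the footstep window
    return np.clip(out, -1.0, 1.0)                    # prevent clipping past full scale
```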
  • Next, the generation unit 206 generates vibration data according to the generated audio data (step S305).
  • For example, the generation unit 206 generates the vibration data so that vibration occurs at each timing when the emphasized footsteps are output.
  • Finally, the generation unit 206 generates multimedia data by temporally synchronizing the generated audio data and the generated vibration data with the moving image data included in the moving image audio data (step S306).
  • When this multimedia data is reproduced, the audio output unit 208 reproduces the emphasized footsteps in accordance with the movement of the main subject T displayed on the display unit 210, and the vibration unit 209 generates vibration. That is, at each timing when the foot of the main subject T lands on the ground on the display unit 210, the emphasized footstep is reproduced and the electronic device 2 vibrates.
  • According to the present embodiment, since the generation unit 206 emphasizes the sound according to the movement of the subject, the movement of the subject can be expressed by the sound.
  • The multimedia data generation processing described above may be performed by recording a program for realizing the processing on a computer-readable recording medium and causing a computer system to read and execute the recorded program.
  • The "computer system" here may include an OS and hardware such as peripheral devices.
  • The "computer-readable recording medium" means a portable medium such as a floppy (registered trademark) disk, a magneto-optical disk, an SD card, a writable nonvolatile memory such as a flash memory, or a CD-ROM, or a storage device such as a hard disk built into a computer system.
  • Furthermore, the "computer-readable recording medium" includes a medium that holds the program for a certain period of time, such as a volatile memory (for example, DRAM (Dynamic Random Access Memory)) in a computer system serving as a server or a client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
  • The program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium, or by transmission waves in the transmission medium.
  • Here, the "transmission medium" for transmitting the program refers to a medium having a function of transmitting information, such as a network (communication network) like the Internet or a communication line (communication wire) like a telephone line.
  • The program may realize only a part of the functions described above. Furthermore, the program may be a so-called difference file (difference program) that realizes the above-described functions in combination with a program already recorded in the computer system.
  • In the embodiments described above, a moving image of a person is described as an example.
  • However, the present invention is not limited to this; audio data or vibration data may be generated for a main subject other than a person, such as adding audio data of a flapping sound to a moving image of a bird.
  • Audio data or vibration data may also be generated for a moving image such as an animation.
  • Further, although audio data or vibration data is generated for a moving image in the embodiments described above, audio data or vibration data may also be generated for a still image.
  • In this case, the image analysis unit 105 analyzes the movement of the subject from the still image by pattern matching or the like. For example, in the case of a still image of an athletic meet, sounds corresponding to the athletic meet (running sounds, cheers, marches, etc.) are combined with the still image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Studio Devices (AREA)
  • Television Signal Processing For Recording (AREA)
  • Image Analysis (AREA)

Abstract

An electronic device (1) includes an image analysis unit (105) for analyzing the movement of a subject in a moving image, and a generation unit (106) for generating audio data and vibration data corresponding to the movement of the subject analyzed by the image analysis unit (105), and for generating data by temporally synchronizing the generated audio data and the generated vibration data with image data of the moving image.
PCT/JP2012/067757 2011-07-14 2012-07-11 Electronic device and data generation method WO2013008869A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011155586 2011-07-14
JP2011-155586 2011-07-14

Publications (1)

Publication Number Publication Date
WO2013008869A1 true WO2013008869A1 (fr) 2013-01-17

Family

ID=47506148

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/067757 WO2013008869A1 (fr) 2011-07-14 2012-07-11 Electronic device and data generation method

Country Status (2)

Country Link
JP (1) JPWO2013008869A1 (fr)
WO (1) WO2013008869A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09219858A (ja) * 1996-02-13 1997-08-19 Matsushita Electric Ind Co Ltd Video/audio encoding device and video/audio decoding device
JP2004261272A (ja) * 2003-02-28 2004-09-24 Oki Electric Ind Co Ltd Bodily sensation device, motion signal generation method, and program
JP2007006313A (ja) * 2005-06-27 2007-01-11 Megachips Lsi Solutions Inc Moving image capturing device and file storage method
JP2010278997A (ja) * 2009-06-01 2010-12-09 Sharp Corp Image processing device, image processing method, and program

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014239430A (ja) * 2013-05-24 2014-12-18 Immersion Corporation Method and system for encoding and streaming haptic data
US10085069B2 (en) 2013-05-24 2018-09-25 Immersion Corporation Method and system for haptic data encoding and streaming using a multiplexed data stream
US10542325B2 (en) 2013-05-24 2020-01-21 Immersion Corporation Method and system for haptic data encoding and streaming using a multiplexed data stream
JP2016119071A (ja) * 2014-12-19 2016-06-30 Immersion Corporation Systems and methods for recording haptic data for use with multimedia data
US10650859B2 (en) 2014-12-19 2020-05-12 Immersion Corporation Systems and methods for recording haptic data for use with multi-media data
JP2021082954A (ja) * 2019-11-19 2021-05-27 NHK (Japan Broadcasting Corporation) Haptic metadata generation device, video-haptic synchronization system, and program
JP7344096B2 (ja) 2019-11-19 2023-09-13 NHK (Japan Broadcasting Corporation) Haptic metadata generation device, video-haptic synchronization system, and program
JP7488704B2 (ja) 2020-06-18 2024-05-22 NHK (Japan Broadcasting Corporation) Haptic metadata generation device, video-haptic synchronization system, and program

Also Published As

Publication number Publication date
JPWO2013008869A1 (ja) 2015-02-23

Similar Documents

Publication Publication Date Title
JP6664137B2 (ja) Tactile recording and playback
JP2019525571A5 (fr)
WO2013024704A1 (fr) Image processing device, method, and program
JP2011239141A (ja) Information processing method, information processing device, scene metadata extraction device, loss-compensation information generation device, and program
CN110312162A (zh) Highlight clip processing method and apparatus, electronic device, and readable medium
CN111445901A (zh) Audio data acquisition method and apparatus, electronic device, and storage medium
WO2013008869A1 (fr) Electronic device and data generation method
JP4725918B2 (ja) Program image distribution system, program image distribution method, and program
KR20220106848A (ko) Video special effect processing method and apparatus
JP6073145B2 (ja) Singing voice data generation device and singing video data generation device
JP4318182B2 (ja) Terminal device and computer program applied to the terminal device
CN107087208B (zh) Panoramic video playback method, ***, and storage device
JP2018019393A (ja) Playback control system, information processing device, and program
WO2017061278A1 (fr) Signal processing device, signal processing method, and computer program
JP2009260718A (ja) Image playback device and image playback processing program
JP2013054334A (ja) Electronic device
JP2013183280A (ja) Information processing device, imaging device, and program
JP5310682B2 (ja) Karaoke device
JP2010200079A (ja) Imaging control device
CN114760574A (zh) Audio playback method and laser projection device
CN111696566A (zh) Speech processing method, apparatus, and medium
KR20220036210A (ko) Device and method for improving the sound quality of video
TWI581626B (zh) Automatic audio and video processing system and method
WO2023084933A1 (fr) Information processing device, information processing method, and program
KR101562901B1 (ko) System and method for providing a conversation support service

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12811472

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2013523971

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12811472

Country of ref document: EP

Kind code of ref document: A1