WO2017085963A1 - Information processing device and video display device - Google Patents

Information processing device and video display device Download PDF

Info

Publication number
WO2017085963A1
WO2017085963A1 · PCT/JP2016/071137 · JP2016071137W
Authority
WO
WIPO (PCT)
Prior art keywords
user
video display
display device
information processing
sensor
Prior art date
Application number
PCT/JP2016/071137
Other languages
French (fr)
Japanese (ja)
Inventor
洋一 西牧
平田 真一
Original Assignee
Sony Interactive Entertainment Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Interactive Entertainment Inc.
Publication of WO2017085963A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/212Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/428Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/53Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/02Viewing or reading apparatus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/64Constructional details of receivers, e.g. cabinets or dust covers

Definitions

  • the present invention relates to a video display device mounted on a user's head, an information processing device connected to the video display device, a control method thereof, and a control program.
  • For example, there are video display devices that a user wears on the head, such as a head-mounted display.
  • Such a video display device forms an image in front of the user's eyes, thereby allowing the user to view the image.
  • Such a video display device is used for virtual reality technology and the like because it allows a user to view a realistic video.
  • Using such virtual reality technology, it is also being considered to allow multiple users to communicate with each other in a more realistic manner.
  • When the user wears such a video display device, the user's face is hidden and the facial expression cannot be seen. However, in some cases it is important to grasp the user's facial expressions and emotions in order to give the user a realistic experience.
  • The present invention has been made in view of the above circumstances, and one of its objects is to provide a video display device, an information processing device, a control method thereof, and a control program capable of grasping the facial expression of a user who is wearing the video display device on the head.
  • An information processing apparatus according to the present invention is an information processing apparatus connected to a video display device mounted on a user's head, and includes an acquisition unit that acquires a detection result of a sensor disposed on a surface of the video display device facing the user, and a specifying unit that specifies a movement of a part of the user's face based on the acquired detection result.
  • The video display device according to the present invention is a video display device worn on a user's head, in which a sensor for specifying the movement of a part of the user's face is arranged on the surface that faces the user when the device is worn.
  • The control method for an information processing device according to the present invention is a method for controlling an information processing device connected to a video display device worn on a user's head, and includes a step of acquiring the detection result of a sensor arranged on the surface of the video display device that faces the user, and a step of specifying the movement of a part of the user's face based on the acquired detection result.
  • The program according to the present invention causes a computer connected to a video display device mounted on a user's head to function as an acquisition unit that acquires the detection result of a sensor arranged on the surface of the video display device that faces the user, and as a specifying unit that specifies the movement of a part of the user's face based on the acquired detection result.
  • This program may be provided by being stored in a computer-readable non-transitory information storage medium.
  • FIG. 1 is a configuration block diagram showing the configuration of an information processing system including an information processing apparatus according to an embodiment of the present invention. FIG. 2 is a diagram showing a user wearing the video display device. FIG. 3 is a diagram of the video display device viewed from the back side. FIG. 4 is a functional block diagram showing the functions of the information processing apparatus. FIG. 5 is a diagram showing an example of a screen on which an avatar is displayed. FIG. 6 is a flowchart showing an example of the flow of processing executed by the information processing apparatus. FIG. 7 is a diagram showing an example in which the front surface of the video display device is provided with a display screen.
  • FIG. 1 is a configuration block diagram showing a configuration of an information processing system 1 including an information processing apparatus 10 according to an embodiment of the present invention.
  • the information processing system 1 includes an information processing device 10, an operation device 20, a relay device 30, and a video display device 40.
  • the information processing apparatus 10 is an apparatus that supplies the video to be displayed by the video display device 40, and may be, for example, a home game console, a portable game machine, a personal computer, a smartphone, or a tablet. As illustrated in FIG. 1, the information processing apparatus 10 includes a control unit 11, a storage unit 12, and an interface unit 13.
  • the control unit 11 includes at least one processor such as a CPU, and executes various types of information processing by executing programs stored in the storage unit 12. Specific examples of the processing executed by the control unit 11 in this embodiment will be described later.
  • the storage unit 12 includes at least one memory device such as a RAM, and stores a program executed by the control unit 11 and data processed by the program.
  • the interface unit 13 is an interface for data communication between the operation device 20 and the relay device 30.
  • the information processing apparatus 10 is connected to each of the operation device 20 and the relay apparatus 30 via the interface unit 13 either by wire or wirelessly.
  • the interface unit 13 may include a multimedia interface such as HDMI (High-Definition Multimedia Interface: registered trademark) in order to transmit video and audio supplied by the information processing device 10 to the relay device 30.
  • the interface unit 13 may also include a data communication interface such as USB in order to receive various information from the video display device 40 via the relay device 30 and to transmit control signals and the like.
  • the interface unit 13 may include a data communication interface such as a USB in order to receive a signal indicating the content of the user's operation input to the operation device 20.
  • the operation device 20 is a controller or the like of a consumer game machine, and is used for a user to perform various instruction operations on the information processing apparatus 10.
  • the content of the user's operation input to the operation device 20 is transmitted to the information processing apparatus 10 by either wired or wireless.
  • the operation device 20 may include operation buttons, a touch panel, and the like disposed on the surface of the housing of the information processing apparatus 10.
  • the relay device 30 is connected to the video display device 40 either by wire or wirelessly; it receives video data supplied from the information processing device 10 and outputs a video signal corresponding to the received data to the video display device 40. At this time, the relay device 30 may, as necessary, perform processing on the supplied video data to correct distortion caused by the optical system of the video display device 40, and output the corrected video signal.
  • the video signal supplied from the relay device 30 to the video display device 40 includes two videos, a left-eye video and a right-eye video.
  • the relay device 30 relays various types of information transmitted and received between the information processing device 10 and the video display device 40 such as audio data and control signals.
  • the video display device 40 is a video display device that the user wears on the head; it displays video corresponding to the video signal input from the relay device 30 and allows the user to view it.
  • the video display device 40 supports browsing of video with both eyes, and displays video in front of each of the user's right eye and left eye.
  • FIG. 2 shows a state in which the user wears the video display device 40
  • FIG. 3 shows a state in which the video display device 40 is viewed from the back side.
  • the video display device 40 includes a video display element 41, an optical element 42, a face sensor 43, and a communication interface 45.
  • the video display element 41 is an organic EL display panel, a liquid crystal display panel, or the like, and displays a video corresponding to a video signal supplied from the relay device 30.
  • the video display element 41 displays two videos, a left-eye video and a right-eye video.
  • the video display element 41 may be a single display element that displays the left-eye video and the right-eye video side by side, or may be configured by two display elements that display each video independently. Further, a known smartphone or the like may be used as the video display element 41.
  • the video display device 40 may be a retinal irradiation type (retinal projection type) device that directly projects a video image on a user's retina.
  • the image display element 41 may be configured by a laser that emits light and a MEMS (Micro Electro Mechanical Systems) mirror that scans the light.
  • the optical element 42 is a hologram, a prism, a half mirror, or the like, and is disposed in front of the user's eyes.
  • the optical element 42 transmits or refracts the light of the video displayed by the video display element 41 and causes it to enter the user's left and right eyes.
  • the left-eye image displayed by the image display element 41 is incident on the user's left eye via the optical element 42
  • the right-eye image is incident on the user's right eye via the optical element 42.
  • the user can view the left-eye video with the left eye and the right-eye video with the right eye while the video display device 40 is mounted on the head.
  • the video display device 40 is assumed to be a non-transmissive video display device in which the user cannot visually recognize the appearance of the outside world.
  • the face sensor 43 measures various information related to the state of the face of the user wearing the video display device 40. Specifically, as shown in FIG. 3, the face sensors 43 are arranged at a plurality of locations on the back of the main body of the video display device 40, that is, on the surface facing the user's face when the device is worn. In the example of FIG. 3, face sensor 43a is arranged at a position contacting the user's forehead, face sensor 43b at a position contacting the upper part of the cheek, face sensor 43c near the temple, and face sensors 43d around the eyeball.
  • the face sensor 43 may include a proximity sensor that measures the distance to the object.
  • the proximity sensor may be of various types, such as a method for measuring the intensity of reflected light by irradiating the object with light and a time-of-flight method for measuring the time until the reflected light returns.
  • When the user moves the muscles of the face, the distance from the face sensor 43 to the measurement target portion of the user's face changes accordingly. Therefore, by using a proximity sensor, the movement of the muscles of the user's face at the measurement target location can be detected.
  • Likewise, when the movement of the user's eyebrows changes the proportion of eyebrow hair contained in the measurement target area of the proximity sensor, the intensity of the reflected light changes. The movement of the eyebrows can therefore also be detected directly by using a proximity sensor that measures the intensity of reflected light.
  • With such non-contact sensors, information for specifying the movement of the user's eyebrows, cheeks, and the like can be detected without bringing the sensor into contact with the user's face, which reduces the burden on the user during use.
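As a rough illustration of how such proximity-sensor readings could be turned into a movement signal, the following sketch compares each reading against a per-sensor resting baseline. The sensor IDs follow FIG. 3, but the baseline values, threshold, and function names are hypothetical and not taken from the publication.

```python
# Minimal sketch: detecting facial-muscle movement from proximity readings.
# Sensor IDs (43a-43d) follow the patent's figure; values are invented.

RESTING_BASELINE = {"43a": 4.0, "43b": 6.5, "43c": 5.0, "43d": 3.0}  # mm, hypothetical
MOVEMENT_THRESHOLD_MM = 0.8  # deviation treated as "muscle moved", hypothetical


def detect_movement(readings: dict[str, float]) -> dict[str, bool]:
    """Return, per sensor location, whether the skin moved relative to rest.

    `readings` maps a sensor ID to its current measured distance in millimetres.
    """
    moved = {}
    for sensor_id, distance in readings.items():
        baseline = RESTING_BASELINE[sensor_id]
        moved[sensor_id] = abs(distance - baseline) > MOVEMENT_THRESHOLD_MM
    return moved


if __name__ == "__main__":
    # The cheek sensor (43b) reads closer than at rest -> the cheek muscle is moving.
    print(detect_movement({"43a": 4.1, "43b": 5.2, "43c": 5.0, "43d": 3.1}))
```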
  • the face sensor 43 may include a touch sensor that detects contact with the user's face. By arranging a touch sensor at the location where the user's forehead makes contact (43a in the figure), the raising and lowering of the user's eyebrows can be detected.
  • the face sensor 43 may include a myoelectric sensor that directly detects the movement of the muscles of the user's face.
  • the face sensor 43 may include a camera that captures the position of the user's eyes. With these sensors, the movement of the muscles of the user's face can be detected.
  • the face sensors 43 arranged at each location on the back surface of the video display device 40 are not limited to one type; multiple types of sensors as described above may be arranged at the same location.
  • the face sensor 43 is not limited to a sensor that detects the movement of the user's face, but may include a sensor that detects information related to other face states.
  • the face sensor 43 may include a color sensor that measures the color of the skin around the eyeball, a temperature sensor that measures the temperature of the skin around the eyeball, and the like. By using the measurement results of these sensors, it is possible to estimate the blood flow and the degree of excitement of the user.
  • the communication interface 45 is an interface for performing data communication with the relay device 30.
  • for example, when the video display device 40 exchanges data with the relay device 30 by wireless communication such as a wireless LAN or Bluetooth (registered trademark), the communication interface 45 includes a communication antenna and a communication module.
  • the information processing apparatus 10 functionally includes a sensor information acquisition unit 51, a face part identification unit 52, and a process execution unit 53. These functions are realized when the control unit 11 executes a program stored in the storage unit 12. This program may be provided to the information processing apparatus 10 via a communication network such as the Internet, or may be provided by being stored in a computer-readable information storage medium such as an optical disk.
  • the sensor information acquisition unit 51 acquires the measurement results of the face sensor 43 at regular intervals, for example, while the user is wearing and using the video display device 40 on the head.
  • the information on the measurement result of the face sensor 43 acquired by the sensor information acquisition unit 51 from the video display device 40 is simply referred to as sensor information.
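The publication does not specify a data format for this sensor information; as an illustration only, it could be represented and polled roughly as below, where `read_face_sensors()` stands in for the actual transfer over the relay device and every field name is an assumption.

```python
import time
from dataclasses import dataclass, field


@dataclass
class SensorInfo:
    """One snapshot of face-sensor measurements (field names are illustrative)."""
    timestamp: float
    proximity_mm: dict[str, float] = field(default_factory=dict)  # sensors 43a-43d
    touch: dict[str, bool] = field(default_factory=dict)          # e.g. forehead pad
    skin_temperature_c: float | None = None


def read_face_sensors() -> SensorInfo:
    # Placeholder for the actual transfer from the display via the relay device.
    return SensorInfo(timestamp=time.time(), proximity_mm={"43b": 6.4})


def acquisition_loop(period_s: float = 0.1, max_samples: int = 3) -> list[SensorInfo]:
    """Poll the face sensor at a fixed period, as the acquisition unit 51 does."""
    samples = []
    for _ in range(max_samples):
        samples.append(read_face_sensors())
        time.sleep(period_s)
    return samples


if __name__ == "__main__":
    print(acquisition_loop())
```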
  • the face part specifying unit 52 uses the sensor information acquired by the sensor information acquiring unit 51 to specify the change in position of each part (face part) constituting the user's face. That is, the face part specifying unit 52 specifies the movement of the face part at the time when measurement by the face sensor 43 is performed.
  • the face parts to be specified by the face part specifying unit 52 may include the eyebrows, eyelids, pupils, corners of the eyes, the mouth, and so on. For example, the face part specifying unit 52 uses the sensor information to specify movements such as the user raising or lowering the eyebrows, drawing them together, opening or closing the eyelids, raising or lowering the corners of the eyes, opening or closing the mouth, raising or lowering the corners of the mouth, or turning the pupils in a particular direction. The magnitude of each of these movements may also be specified.
  • the sensor information captures the movement of the facial muscles (mimetic muscles) at the detection target location. Since such muscle movement is linked to the movement of the face parts, the face part specifying unit 52 can specify the movement of the user's face parts by using the sensor information.
  • the face part specifying unit 52 specifies the movement of the face part by determining whether or not the sensor information satisfies a judgment criterion prepared in advance.
  • This criterion may be generated, for example, by supervised machine learning.
  • Specifically, the measurement results of the face sensor 43 are acquired while a user wearing the video display device 40 makes a specific movement with a specific face part, and the movement of the face part at that time is input as teacher information.
  • By performing such machine learning a number of times, an estimator can be generated that serves as the judgment criterion for estimating the movement of a face part when new sensor information is obtained.
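One way such a supervised judgment criterion could be realized, purely as an illustration, is to train an off-the-shelf classifier on pairs of sensor vectors and labelled face-part movements. The feature layout, the labels, and the use of scikit-learn are assumptions, not details given in the publication.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row is one sensor snapshot (e.g. distances from sensors 43a-43d);
# each label is the face-part movement the user was asked to perform.
X_train = np.array([
    [4.0, 6.5, 5.0, 3.0],   # neutral face
    [3.1, 6.4, 4.9, 3.0],   # eyebrows raised (forehead sensor reads closer)
    [4.1, 5.1, 4.2, 3.1],   # mouth opened (cheek/temple sensors shift)
])
y_train = ["neutral", "eyebrows_up", "mouth_open"]

estimator = RandomForestClassifier(n_estimators=50, random_state=0)
estimator.fit(X_train, y_train)

# When a new sensor snapshot arrives, the estimator acts as the judgment criterion.
new_snapshot = np.array([[3.2, 6.4, 4.9, 3.0]])
print(estimator.predict(new_snapshot))  # e.g. ['eyebrows_up']
```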
  • If the output of this estimator is unstable, the cause may be that the video display device 40 is worn improperly. In that case, the face part specifying unit 52 may prompt the user to adjust how the video display device 40 is worn, which can also serve as a guide for wearing the device correctly.
  • In particular, in this embodiment, as shown in FIG. 2, the video display device 40 covers an area centered on the user's eyes and does not cover the lower half of the face, so the movement of face parts on the lower part of the face, such as the mouth, cannot be detected directly by the face sensor 43.
  • However, the facial muscles move together over a relatively wide area; for example, the cheeks move in conjunction with the mouth. Therefore, by preparing in advance an appropriate judgment criterion that associates the sensor information with the movement of the face part, the movement of the user's mouth can be estimated using sensor information that includes measurements of cheek muscle movement.
  • Also, because of differences in users' facial features, in how the video display device 40 is worn, and so on, it is not always appropriate to use the same judgment criterion for every user when specifying face parts. The information processing apparatus 10 may therefore execute a calibration in advance, for example when the user starts using the video display device 40, and reflect its result in the judgment criterion.
  • Specifically, as calibration, the face part specifying unit 52 asks the user to move a specific face part and obtains the sensor information produced by the face sensor 43 at that time. The degree of deviation of the obtained sensor information from a reference value is then evaluated, and the evaluation result is reflected in the numerical values used as the judgment criterion for specifying face parts.
  • In this way, face parts can be specified using a judgment criterion appropriate for each individual user.
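The per-user calibration described above might, for example, derive a per-sensor baseline and threshold from a short guided session. The procedure below is a hypothetical sketch of that idea, not the actual algorithm of the publication.

```python
import statistics


def calibrate(neutral_samples: list[float], action_samples: list[float]) -> dict[str, float]:
    """Derive a per-user baseline and threshold for one sensor channel.

    `neutral_samples` are readings with the face at rest; `action_samples` are
    readings taken while the user performs the requested face-part movement.
    """
    baseline = statistics.mean(neutral_samples)
    typical_swing = abs(statistics.mean(action_samples) - baseline)
    return {
        "baseline": baseline,
        # Trigger at half of this user's typical swing (hypothetical rule).
        "threshold": 0.5 * typical_swing,
    }


if __name__ == "__main__":
    print(calibrate(neutral_samples=[4.0, 4.1, 3.9], action_samples=[3.0, 3.2, 3.1]))
```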
  • the process execution unit 53 executes various processes using the results specified by the face part specifying unit 52.
  • the process execution unit 53 is realized by the control unit 11 executing an application program for an online game, and generates an avatar that appears in the game.
  • An avatar is a virtual object that represents a user, and has parts such as eyes and a mouth like a person.
  • the process execution unit 53 transmits the generated avatar data to the information processing apparatus used by other game participants via the communication network.
  • the information processing apparatus that has received the avatar data displays the avatar image generated according to the content on the screen of the display device. Thereby, other game participants can browse the avatar representing the user wearing the video display device 40.
  • the process execution unit 53 moves the avatar's parts so that they are linked to the movement of the user's face parts specified by the face part specifying unit 52. Specifically, for example, the process execution unit 53 opens the avatar's mouth when it is specified that the user has opened their mouth, and lowers the avatar's eyebrows when it is specified that the user has lowered their eyebrows. Information indicating this movement is transmitted to the information processing apparatuses used by other game participants and used to update the display of the avatar. With such processing, changes in the user's facial expression can be reflected in the avatar, and the user's facial expression can be presented to the other game participants viewing the avatar.
  • the process execution unit 53 may display the avatar that changes according to the movement of the user's face not only on the display devices viewed by other game participants but also on the video display device 40 worn by the user.
  • FIG. 5 shows an example of a screen on which an avatar in which parts such as eyes and mouth move in conjunction with the movement of the user's face part is displayed.
  • the sensor information acquisition unit 51 acquires sensor information including the measurement result of the face sensor 43 from the video display device 40 via the relay device 30 (S1). Thereafter, the face part specifying unit 52 specifies the movement of the user's face part using the sensor information acquired in S1 (S2).
  • the process execution unit 53 determines the position of the part of the avatar's face according to the movement of the face part specified in S2 (S3). Then, the image of the avatar in which the facial parts are arranged at the determined position is updated (S4) and displayed on the video display device 40 (S5). Further, the part position information determined in S3 is transmitted to the information processing apparatus used by other game participants (S6). Thereafter, the process returns to S1 and the above processing is repeatedly executed every predetermined time. Thereby, an avatar whose expression changes in conjunction with the movement of the user's face can be presented to the user and other game participants.
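Putting steps S1 to S6 together, the per-frame loop could look roughly like the following sketch. Every function and the toy decision rules are stand-ins for processing that the publication only describes in prose.

```python
def acquire_sensor_info():
    # S1: obtain face-sensor measurements via the relay device (stubbed here).
    return {"43a": 3.2, "43b": 5.2}


def specify_face_part_movements(sensor_info):
    # S2: map sensor readings to face-part movements (toy rule for illustration).
    return {"eyebrows": "up" if sensor_info["43a"] < 3.5 else "rest",
            "mouth": "open" if sensor_info["43b"] < 5.5 else "closed"}


def decide_avatar_part_positions(movements):
    # S3: convert movements into avatar part positions (arbitrary units).
    return {"eyebrow_height": 1.0 if movements["eyebrows"] == "up" else 0.0,
            "mouth_openness": 1.0 if movements["mouth"] == "open" else 0.0}


def run_frame(peers):
    """One iteration of the S1-S6 loop (all names are illustrative stand-ins)."""
    positions = decide_avatar_part_positions(
        specify_face_part_movements(acquire_sensor_info()))
    print("render avatar locally:", positions)        # S4/S5: update and display
    for peer in peers:                                # S6: send to other participants
        peer.append(positions)


if __name__ == "__main__":
    other_participants: list[list] = [[], []]
    run_frame(other_participants)
    print(other_participants)
```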
  • an avatar does not necessarily have all the same parts as a normal human, and may have a different appearance from a human. Therefore, the movement of the part specified by the face part specifying unit 52 may be reflected in the movement of another part associated in advance, instead of reflecting the movement of the same part of the avatar as it is.
  • the process execution unit 53 may link the movement of the user's eyebrow and the like with the movement of the avatar's ear and tail.
  • the process execution unit 53 may also add effects such as emphasizing or attenuating the movement. Since the degree of change in facial expression varies from person to person, emphasizing the movement for a user whose expression changes little makes it easier to convey emotional expressions to the other party. It is also possible to mask an expression such as anger.
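The remapping to different avatar parts and the emphasis or masking effects described above could be expressed as a small association table plus a per-movement gain, as in this sketch with invented part names and factors.

```python
# Map a detected user movement onto a (possibly different) avatar part.
PART_ASSOCIATION = {
    "eyebrows": "ears",    # e.g. an animal avatar wiggles its ears instead
    "mouth": "mouth",
}

# Per-movement gain: >1 exaggerates, <1 attenuates, 0 masks entirely.
MOVEMENT_GAIN = {
    "eyebrows": 1.5,       # emphasise for users whose expressions change little
    "mouth": 1.0,
    "anger_frown": 0.0,    # hide an angry expression from other players
}


def apply_to_avatar(movements: dict[str, float]) -> dict[str, float]:
    """Translate detected movement magnitudes into avatar part magnitudes."""
    avatar = {}
    for part, magnitude in movements.items():
        target = PART_ASSOCIATION.get(part, part)
        avatar[target] = magnitude * MOVEMENT_GAIN.get(part, 1.0)
    return avatar


if __name__ == "__main__":
    print(apply_to_avatar({"eyebrows": 0.4, "mouth": 0.7, "anger_frown": 0.9}))
```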
  • In the example described above, the process execution unit 53 executes an online game, but the present invention is not limited to this; a local game in which a plurality of users participate may also be executed.
  • In this case, a plurality of users each wear a video display device 40 in the same manner, and by reflecting the movement of each player's face parts in their respective avatars, each player can view the avatars of the other players, whose expressions change in conjunction with those players' facial expressions.
  • the process execution unit 53 may also execute communication software for communicating with other users via avatars, instead of game processing. In this case as well, by moving the avatar's face parts in conjunction with the movement of the user's face parts, an expression close to the user's can be expressed by the avatar.
  • Further, a display screen D capable of displaying images may be provided on the front surface of the video display device 40, in the portion covering the user's eyes.
  • In this case, the process execution unit 53 may display on this display screen D an image corresponding to part of the avatar (for example, its eyes and eyebrows).
  • FIG. 7 shows a video display device 40 in which part of the avatar is displayed on the display screen D.
  • the eyes and eyebrows displayed on the display screen are also moved in conjunction with the movements of the user's eyes and eyebrows specified by the face part specifying unit 52.
  • the user's facial expression can be transmitted to another person around the user.
  • the process executed by the process execution unit 53 is not limited to the process of operating the avatar.
  • the processing content may be changed according to the movement of the face part specified by the face part specifying unit 52.
  • the movement of the user's face is considered to reflect the user's emotion. Therefore, the process execution unit 53 estimates the user's emotion according to a predetermined criterion based on the movement of the user's face part. Specifically, emotions such as the user laughing, sad, or angry are estimated from the movement of the user's eyebrows, eyelids, mouth, and the like. Then, the progress of the game is changed or a specific parameter is increased or decreased according to the estimated emotion.
  • the process execution unit 53 may calculate an index such as the user's anger level or sadness level as a numerical value based on the magnitude of the movement of the user's face part. By executing the processing using these indices, it is possible to realize the progress of the game in accordance with the degree of emotion of the user.
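As an illustration only, such numeric indices could be computed as weighted sums of the specified movement magnitudes; the weights and feature names below are invented for the example.

```python
# Hypothetical weights linking face-part movement magnitudes to emotion indices.
EMOTION_WEIGHTS = {
    "anger":   {"eyebrows_down": 0.6, "mouth_corners_down": 0.2, "eyes_narrowed": 0.2},
    "sadness": {"eyebrows_down": 0.2, "mouth_corners_down": 0.6, "eyelids_lowered": 0.2},
}


def emotion_indices(movements: dict[str, float]) -> dict[str, float]:
    """Compute indices in the 0..1 range, such as an 'anger level', from movements."""
    return {
        emotion: sum(weight * movements.get(feature, 0.0)
                     for feature, weight in weights.items())
        for emotion, weights in EMOTION_WEIGHTS.items()
    }


if __name__ == "__main__":
    scores = emotion_indices({"eyebrows_down": 0.8, "mouth_corners_down": 0.3})
    print(scores)  # the game could branch or adjust parameters based on these values
```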
  • the process execution unit 53 may use the identification result of the face part identification unit 52 for estimating not only the user's emotion but also the fatigue level and the excitement level. For example, the process execution unit 53 estimates that the user is excited when a specific movement by a specific face part is detected, such as when the user widens his eyes or raises his eyebrows. Further, when the movement of the facial part is slow, it may be estimated that the user is tired. Note that the processing execution unit 53 may use not only the movement of the face part but also the measurement results of the color sensor and the temperature sensor included in the face sensor 43 for estimation of the degree of excitement and fatigue.
  • the process execution unit 53 may use the identification result of the face part identification unit 52 when executing the voice recognition process that identifies the user's utterance content.
  • In speech recognition processing, an audio signal collected by a microphone is used, but this signal contains not only the user's voice but also ambient noise and the like, so it may be difficult to identify when the user is actually speaking. Therefore, the process execution unit 53 assumes that the user is speaking during periods in which the face part specifying unit 52 determines that the user's mouth is moving, and executes speech recognition processing using the audio signal collected during those periods. Such processing can reduce misrecognition during speech recognition.
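The mouth-movement gating of speech recognition might be wired up roughly as below, where `recognize()` is a placeholder for whatever speech recognizer is actually used (none is named in the publication).

```python
def recognize(audio_chunk: bytes) -> str:
    # Placeholder for a real speech recognizer.
    return f"<transcript of {len(audio_chunk)} bytes>"


def gated_recognition(frames):
    """Run recognition only on audio captured while the mouth was judged to be moving.

    `frames` is an iterable of (audio_chunk, mouth_moving) pairs, one per capture
    interval; consecutive speaking frames are buffered into one utterance.
    """
    buffered, results = b"", []
    for audio_chunk, mouth_moving in frames:
        if mouth_moving:
            buffered += audio_chunk              # user is presumed to be speaking
        elif buffered:
            results.append(recognize(buffered))  # utterance ended; recognize it
            buffered = b""
    if buffered:
        results.append(recognize(buffered))
    return results


if __name__ == "__main__":
    frames = [(b"\x01" * 160, False), (b"\x02" * 160, True),
              (b"\x03" * 160, True), (b"\x04" * 160, False)]
    print(gated_recognition(frames))
```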
  • As described above, according to the information processing apparatus 10 of this embodiment, by specifying the movement of the user's face parts using the measurement results of the sensors disposed on the surface facing the user's face, changes in the user's facial expression that are hidden while the video display device 40 is worn can be captured and used in various processes.
  • the process execution unit 53 may also execute its processing using various other types of information, such as information about the user's posture, in addition to the movement of the user's face parts specified by the face part specifying unit 52.
  • the process execution unit 53 may use the sensor information itself acquired by the sensor information acquisition unit 51 for estimating the user's emotion. In determining the processing content, not only the result of emotion estimation but also other types of information such as user vital information may be used together.
  • In the above description, the face part specifying unit 52 specifies the movement of the user's face parts using only the detection results of the face sensor 43, but other information may also be used to specify the movement of the face parts.
  • For example, the state of the user wearing the video display device 40 may be photographed by a camera installed at a position in front of the user, and the captured image may be analyzed and used to specify the face parts.
  • This captured image includes a region of the user's face that is not covered by the video display device 40 (particularly a region around the user's mouth).
  • the movement of the area covered by the video display device 40 can be measured by the face sensor 43 as described above. Therefore, by using these pieces of information in combination, it is possible to accurately specify the movement of the facial part of the entire user's face.
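Combining the two sources could be as simple as preferring the camera for the uncovered lower part of the face and the face sensor for the covered region, as in this hypothetical merge.

```python
def merge_estimates(sensor_based: dict[str, float],
                    camera_based: dict[str, float]) -> dict[str, float]:
    """Merge per-part movement estimates from the face sensor and an external camera.

    Hypothetical rule: the camera wins for parts it can actually see (the area
    around the mouth), and the face sensor wins for the region covered by the device.
    """
    camera_visible = {"mouth", "jaw"}   # parts not covered by the display (assumption)
    merged = dict(sensor_based)
    for part, value in camera_based.items():
        if part in camera_visible:
            merged[part] = value
    return merged


if __name__ == "__main__":
    print(merge_estimates({"eyebrows": 0.6, "mouth": 0.3}, {"mouth": 0.8}))
```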
  • In the above description, the information processing apparatus 10 locally connected to the video display device 40 specifies the face parts and executes processing corresponding to the specified content.
  • However, part of the processing described as being executed by the information processing apparatus 10 may instead be executed by a server apparatus or the like connected via a communication network.
  • the server device may execute a process of changing the expression of the avatar according to the movement of the face part specified locally. Further, the server device may specify the facial part using sensor information transmitted from the local information processing device 10.
  • 1 Information processing system, 10 Information processing device, 11 Control unit, 12 Storage unit, 13 Interface unit, 20 Operation device, 30 Relay device, 40 Video display device, 41 Video display element, 42 Optical element, 43 Face sensor, 45 Communication interface, 51 Sensor information acquisition unit, 52 Face part specifying unit, 53 Process execution unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Optics & Photonics (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Cardiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

Provided is an information processing device (10) which is connected to a video display device (40) worn on the head of a user, the information processing device (10) acquiring results of detection by a sensor which is disposed on a user-facing surface of the video display device (40), and identifying movement of the user's facial region on the basis of the acquired detection results.

Description

Information processing apparatus and video display apparatus
The present invention relates to a video display device mounted on a user's head, an information processing device connected to the video display device, a control method thereof, and a control program.
For example, there are video display devices that a user wears on the head, such as a head-mounted display. Such a video display device forms an image in front of the user's eyes, thereby allowing the user to view the image. Because it allows a user to view realistic video, such a video display device is used for virtual reality technology and the like. Furthermore, using such virtual reality technology, it is also being considered to allow multiple users to communicate with each other in a more realistic manner.
When the user wears the video display device described above, the user's face is hidden and the facial expression cannot be seen. However, in some cases it is important to grasp the user's facial expressions and emotions in order to give the user a realistic experience.
The present invention has been made in view of the above circumstances, and one of its objects is to provide a video display device, an information processing device, a control method thereof, and a control program capable of grasping the facial expression of a user who is wearing the video display device on the head.
An information processing apparatus according to the present invention is an information processing apparatus connected to a video display device mounted on a user's head, and includes an acquisition unit that acquires a detection result of a sensor disposed on a surface of the video display device facing the user, and a specifying unit that specifies a movement of a part of the user's face based on the acquired detection result.
A video display device according to the present invention is a video display device worn on a user's head, in which a sensor for specifying the movement of a part of the user's face is arranged on the surface that faces the user when the device is worn.
A control method for an information processing device according to the present invention is a method for controlling an information processing device connected to a video display device worn on a user's head, and includes a step of acquiring the detection result of a sensor arranged on the surface of the video display device that faces the user, and a step of specifying the movement of a part of the user's face based on the acquired detection result.
A program according to the present invention causes a computer connected to a video display device mounted on a user's head to function as an acquisition unit that acquires the detection result of a sensor arranged on the surface of the video display device that faces the user, and as a specifying unit that specifies the movement of a part of the user's face based on the acquired detection result. This program may be provided by being stored in a computer-readable, non-transitory information storage medium.
FIG. 1 is a configuration block diagram showing the configuration of an information processing system including an information processing apparatus according to an embodiment of the present invention. FIG. 2 is a diagram showing a user wearing the video display device. FIG. 3 is a diagram of the video display device viewed from the back side. FIG. 4 is a functional block diagram showing the functions of the information processing apparatus. FIG. 5 is a diagram showing an example of a screen on which an avatar is displayed. FIG. 6 is a flowchart showing an example of the flow of processing executed by the information processing apparatus. FIG. 7 is a diagram showing an example in which the front surface of the video display device is provided with a display screen.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
FIG. 1 is a configuration block diagram showing the configuration of an information processing system 1 including an information processing apparatus 10 according to an embodiment of the present invention. As shown in the figure, the information processing system 1 includes an information processing device 10, an operation device 20, a relay device 30, and a video display device 40.
The information processing apparatus 10 is an apparatus that supplies the video to be displayed by the video display device 40, and may be, for example, a home game console, a portable game machine, a personal computer, a smartphone, or a tablet. As shown in FIG. 1, the information processing apparatus 10 includes a control unit 11, a storage unit 12, and an interface unit 13.
The control unit 11 includes at least one processor such as a CPU, and executes various types of information processing by executing programs stored in the storage unit 12. Specific examples of the processing executed by the control unit 11 in this embodiment will be described later. The storage unit 12 includes at least one memory device such as a RAM, and stores the programs executed by the control unit 11 and the data processed by those programs.
The interface unit 13 is an interface for data communication with the operation device 20 and the relay device 30. The information processing apparatus 10 is connected to each of the operation device 20 and the relay device 30 via the interface unit 13, either by wire or wirelessly. As a specific example, the interface unit 13 may include a multimedia interface such as HDMI (High-Definition Multimedia Interface: registered trademark) in order to transmit the video and audio supplied by the information processing device 10 to the relay device 30. It may also include a data communication interface such as USB in order to receive various information from the video display device 40 via the relay device 30 and to transmit control signals and the like. Furthermore, the interface unit 13 may include a data communication interface such as USB in order to receive signals indicating the content of the user's operation input to the operation device 20.
The operation device 20 is, for example, a controller of a home game console, and is used by the user to perform various instruction operations on the information processing apparatus 10. The content of the user's operation input to the operation device 20 is transmitted to the information processing apparatus 10 either by wire or wirelessly. The operation device 20 may also include operation buttons, a touch panel, and the like disposed on the surface of the housing of the information processing apparatus 10.
The relay device 30 is connected to the video display device 40 either by wire or wirelessly; it receives video data supplied from the information processing device 10 and outputs a video signal corresponding to the received data to the video display device 40. At this time, the relay device 30 may, as necessary, perform processing on the supplied video data to correct distortion caused by the optical system of the video display device 40, and output the corrected video signal. Note that the video signal supplied from the relay device 30 to the video display device 40 includes two videos, a left-eye video and a right-eye video. In addition to video data, the relay device 30 also relays various other information transmitted and received between the information processing device 10 and the video display device 40, such as audio data and control signals.
The video display device 40 is a video display device that the user wears on the head; it displays video corresponding to the video signal input from the relay device 30 and allows the user to view it. In this embodiment, the video display device 40 supports viewing of video with both eyes, and displays video in front of each of the user's right eye and left eye. FIG. 2 shows a user wearing the video display device 40, and FIG. 3 shows the video display device 40 viewed from the back side. As shown in FIG. 1, the video display device 40 includes a video display element 41, an optical element 42, a face sensor 43, and a communication interface 45.
The video display element 41 is an organic EL display panel, a liquid crystal display panel, or the like, and displays video corresponding to the video signal supplied from the relay device 30. The video display element 41 displays two videos, a left-eye video and a right-eye video. The video display element 41 may be a single display element that displays the left-eye video and the right-eye video side by side, or may be configured by two display elements that display each video independently. A known smartphone or the like may also be used as the video display element 41. Furthermore, the video display device 40 may be a retinal irradiation type (retinal projection type) device that projects video directly onto the user's retina. In this case, the video display element 41 may be configured by a laser that emits light and a MEMS (Micro Electro Mechanical Systems) mirror that scans that light.
The optical element 42 is a hologram, a prism, a half mirror, or the like; it is disposed in front of the user's eyes, and transmits or refracts the light of the video displayed by the video display element 41 so that it enters the user's left and right eyes. Specifically, the left-eye video displayed by the video display element 41 enters the user's left eye via the optical element 42, and the right-eye video enters the user's right eye via the optical element 42. This allows the user, while wearing the video display device 40 on the head, to view the left-eye video with the left eye and the right-eye video with the right eye. In this embodiment, the video display device 40 is assumed to be a non-transmissive video display device with which the user cannot see the outside world.
The face sensor 43 measures various information related to the state of the face of the user wearing the video display device 40. Specifically, as shown in FIG. 3, the face sensors 43 are arranged at a plurality of locations on the back of the main body of the video display device 40, that is, on the surface facing the user's face when the device is worn. In the example of FIG. 3, face sensor 43a is arranged at a position contacting the user's forehead, face sensor 43b at a position contacting the upper part of the cheek, face sensor 43c near the temple, and face sensors 43d around the eyeball.
Specifically, the face sensor 43 may include a proximity sensor that measures the distance to an object. The proximity sensor may be of various types, such as one that irradiates the object with light and measures the intensity of the reflected light, or a time-of-flight type that measures the time until the reflected light returns. When the user moves the muscles of the face, the distance from the face sensor 43 to the measurement target portion of the user's face changes accordingly. Therefore, by using a proximity sensor, the movement of the muscles of the user's face at the measurement target location can be detected. Furthermore, when the movement of the user's eyebrows changes the proportion of eyebrow hair contained in the measurement target area of the proximity sensor, the intensity of the reflected light changes, so the movement of the eyebrows can also be detected directly by using a proximity sensor that measures the intensity of reflected light. With such non-contact sensors, information for specifying the movement of the user's eyebrows, cheeks, and the like can be detected without bringing the sensor into contact with the user's face, which reduces the burden on the user during use.
The face sensor 43 may also include a touch sensor that detects contact with the user's face. By arranging a touch sensor at the location where the user's forehead makes contact (43a in the figure), the raising and lowering of the user's eyebrows can be detected. The face sensor 43 may also include a myoelectric sensor that directly detects the movement of the muscles of the user's face, and may include a camera that captures the position of the user's eyes. With these sensors, the movement of the muscles of the user's face can be detected. Note that the face sensors 43 arranged at each location on the back surface of the video display device 40 are not limited to one type; multiple types of sensors as described above may be arranged at the same location.
Furthermore, the face sensor 43 is not limited to sensors that detect the movement of the user's face, and may include sensors that detect other information related to the state of the face. Specifically, the face sensor 43 may include a color sensor that measures the color of the skin around the eyeball, a temperature sensor that measures the temperature of the skin around the eyeball, and the like. By using the measurement results of these sensors, the user's blood flow and degree of excitement can be estimated.
The communication interface 45 is an interface for performing data communication with the relay device 30. For example, when the video display device 40 exchanges data with the relay device 30 by wireless communication such as a wireless LAN or Bluetooth (registered trademark), the communication interface 45 includes a communication antenna and a communication module.
 次に、情報処理装置10が実現する機能について図4を用いて説明する。図4に示すように、情報処理装置10は、機能的に、センサー情報取得部51と、顔部位特定部52と、処理実行部53と、を含む。これらの機能は、制御部11が記憶部12に記憶されたプログラムを実行することにより実現される。このプログラムは、インターネット等の通信ネットワークを介して情報処理装置10に提供されてもよいし、光ディスク等のコンピュータ読み取り可能な情報記憶媒体に格納されて提供されてもよい。 Next, functions realized by the information processing apparatus 10 will be described with reference to FIG. As shown in FIG. 4, the information processing apparatus 10 functionally includes a sensor information acquisition unit 51, a face part identification unit 52, and a process execution unit 53. These functions are realized when the control unit 11 executes a program stored in the storage unit 12. This program may be provided to the information processing apparatus 10 via a communication network such as the Internet, or may be provided by being stored in a computer-readable information storage medium such as an optical disk.
 センサー情報取得部51は、ユーザーが映像表示装置40を頭部に装着して使用している間、例えば一定時間おきなどのタイミングで、顔面センサー43による計測結果を取得する。以下では、センサー情報取得部51が映像表示装置40から取得する顔面センサー43の計測結果の情報を単にセンサー情報という。 The sensor information acquisition unit 51 acquires the measurement result by the face sensor 43 while the user wears the video display device 40 on the head and uses it, for example, at regular intervals. Hereinafter, the information on the measurement result of the face sensor 43 acquired by the sensor information acquisition unit 51 from the video display device 40 is simply referred to as sensor information.
 顔部位特定部52は、センサー情報取得部51が取得したセンサー情報を用いて、ユーザーの顔を構成する各部位(顔部位)の位置の変化を特定する。すなわち、顔部位特定部52は、顔面センサー43による計測が行われた時点における顔部位の動きを特定する。顔部位特定部52が特定対象とする顔部位は、眉、まぶた、瞳、目尻、口などであってよい。例えば顔部位特定部52は、ユーザーが眉を上げた、下げた、中央に寄せた、まぶたを開いた、閉じた、目尻を上げた、下げた、口を開いた、閉じた、口角を上げた、下げた、また瞳をいずれかの向きに向けた、などの動きを、センサー情報を用いて特定する。また、これらの動きについて、その動きの大きさを特定してもよい。センサー情報は、検出対象箇所における顔の筋肉(表情筋)の動きを検出する。このような筋肉の動きは、顔部位の動きに連動しているので、センサー情報を用いることで、顔部位特定部52はユーザーの顔部位の動きを特定できる。 The face part specifying unit 52 uses the sensor information acquired by the sensor information acquiring unit 51 to specify the change in position of each part (face part) constituting the user's face. That is, the face part specifying unit 52 specifies the movement of the face part at the time when measurement by the face sensor 43 is performed. The face part to be specified by the face part specifying unit 52 may be an eyebrow, an eyelid, a pupil, a corner of the eye, a mouth, or the like. For example, the face part specifying unit 52 raises the eyebrows, lowers, closes the center, opens the eyelids, closes, raises the eyes, lowers, opens the mouth, closes, raises the corner of the mouth. A movement such as lowering or turning the pupil in any direction is specified using sensor information. Moreover, you may identify the magnitude | size of the motion about these motions. The sensor information detects the movement of facial muscles (facial muscles) at the detection target location. Since such muscle movement is linked to the movement of the face part, the face part specifying unit 52 can specify the movement of the user's face part by using sensor information.
 Specifically, the face part identification unit 52 identifies the movement of a face part by determining whether the sensor information satisfies criteria prepared in advance. These criteria may be generated, for example, by supervised machine learning. Specifically, measurement results of the face sensor 43 are acquired while the user wearing the video display device 40 makes a specific movement of a specific face part, and the movement of that face part at that time is input as teacher information. By repeating such machine learning, an estimator can be generated that serves as the criterion for estimating the movement of face parts when new sensor information is obtained. When the output of this estimator is unstable, the cause may be that the video display device 40 is worn improperly. In that case the face part identification unit 52 can prompt the user to adjust the fit of the video display device 40, so the estimator can also be used as a guide for having the user wear the video display device 40 correctly.
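 The source does not prescribe a particular learning algorithm, so the following is only a minimal sketch of such an estimator. It assumes that each sensor reading arrives as a fixed-length numeric vector and that labels such as "mouth_open" were recorded while the wearer performed the requested movements; a random forest classifier stands in for whatever method is actually used.

    # Minimal sketch of the supervised "estimator" described above.
    # Assumptions (not specified in the source): each sensor reading is a
    # fixed-length numeric vector, and labels such as "mouth_open" were
    # recorded while the wearer performed the requested movement.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def train_estimator(sensor_vectors, movement_labels):
        """Learn a mapping from face-sensor readings to face-part movements."""
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(np.asarray(sensor_vectors), movement_labels)
        return clf

    def estimate_movement(clf, sensor_vector, min_confidence=0.6):
        """Return the most likely movement, or None when the output is unstable.

        An unstable (low-confidence) output can be used to prompt the wearer
        to re-seat the head-mounted display, as suggested above.
        """
        probs = clf.predict_proba([sensor_vector])[0]
        best = int(np.argmax(probs))
        if probs[best] < min_confidence:
            return None  # treat as unreliable; suggest re-fitting the device
        return clf.classes_[best]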
 In particular, in the present embodiment, as shown in FIG. 2, the video display device 40 covers an area centered on the user's eyes and does not cover the lower half of the face, so the movement of face parts located in the lower part of the face, such as the mouth, cannot be detected directly by the face sensor 43. However, the facial muscles move in coordination over a relatively wide area; for example, the cheeks move together with the mouth. Therefore, by preparing in advance appropriate criteria that associate sensor information with the movement of face parts, the movement of the user's mouth can be estimated from sensor information that includes measurements of the movement of the cheek muscles.
 Because of differences in users' facial features, in how the video display device 40 is worn, and so on, it may not always be appropriate to use the same criteria for identifying face parts. The information processing apparatus 10 may therefore execute a calibration in advance, for example when the user starts using the video display device 40, and reflect the result in the criteria. Specifically, as calibration, the face part identification unit 52 has the user move a specific face part and acquires the sensor information obtained by the face sensor 43 at that time. It then evaluates how far the obtained sensor information deviates from reference values and reflects the evaluation result in the numerical values used as the criteria for identifying face parts. This makes it possible to identify face parts using criteria appropriate for each user.
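 A hedged sketch of such a per-user calibration step follows. The reference values and the way deviations are folded back into the criterion are assumptions; the description only states that the deviation from a reference is evaluated and reflected in the numerical criterion.

    # Sketch of a per-user calibration step (assumptions noted above).
    import numpy as np

    def calibrate(neutral_samples, expression_samples, reference_gain=1.0):
        """Derive a per-user offset and gain from calibration captures.

        neutral_samples    : sensor vectors taken with a relaxed face
        expression_samples : sensor vectors taken while the requested
                             movement (e.g. "open the mouth wide") is held
        """
        baseline = np.mean(neutral_samples, axis=0)
        excursion = np.mean(expression_samples, axis=0) - baseline
        # Avoid division by zero for channels that barely move on this user.
        scale = reference_gain / np.maximum(np.abs(excursion), 1e-6)
        return baseline, scale

    def normalize(sensor_vector, baseline, scale):
        """Map a raw reading into the user-independent range the estimator expects."""
        return (np.asarray(sensor_vector) - baseline) * scale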
 The process execution unit 53 executes various processes using the identification results of the face part identification unit 52. As a specific example in the following, the process execution unit 53 is realized by the control unit 11 executing an application program for an online game, and generates an avatar that appears in that game. An avatar is a virtual object representing the user and, like a person, has parts such as eyes and a mouth. The process execution unit 53 transmits the generated avatar data via the communication network to the information processing apparatuses used by the other game participants. An information processing apparatus that receives the avatar data displays an avatar image generated according to its content on the screen of its display device. The other game participants can thereby view the avatar representing the user wearing the video display device 40.
 Furthermore, in the present embodiment, the process execution unit 53 moves the avatar's parts in conjunction with the movement of the user's face parts identified by the face part identification unit 52. For example, when it is identified that the user has opened the mouth, the process execution unit 53 opens the avatar's mouth, and when it is identified that the user's eyebrows have lowered, it lowers the avatar's eyebrows. Information indicating these movements is transmitted to the information processing apparatuses used by the other game participants and used to update the display of the avatar. With such processing, changes in the user's facial expression can be reflected in the avatar, and the user's expression can be presented to the other game participants viewing the avatar. The process execution unit 53 may display the avatar that changes according to the movement of the user's face not only on the display devices viewed by the other game participants but also on the video display device 40 worn by the user. FIG. 5 shows an example of a screen displaying an avatar whose parts, such as the eyes and mouth, move in conjunction with the movement of the user's face parts.
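 The mapping below is purely illustrative; the avatar parameter names are hypothetical, and any rig with controllable facial parts could be substituted.

    # Illustrative-only mapping from identified face-part movements to avatar
    # parameters. The parameter names ("brow_height", "mouth_open", ...) are
    # hypothetical placeholders.
    AVATAR_PART_MAP = {
        "mouth_open":   ("mouth_open",   +1.0),
        "mouth_close":  ("mouth_open",   -1.0),
        "brow_raise":   ("brow_height",  +1.0),
        "brow_lower":   ("brow_height",  -1.0),
        "eyelid_close": ("eye_openness", -1.0),
    }

    def apply_to_avatar(avatar_params, movement, magnitude=1.0):
        """Update a dict of avatar parameters from one identified movement."""
        if movement not in AVATAR_PART_MAP:
            return avatar_params
        param, direction = AVATAR_PART_MAP[movement]
        value = avatar_params.get(param, 0.0) + direction * magnitude
        avatar_params[param] = max(-1.0, min(1.0, value))  # clamp to rig range
        return avatar_params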
 An example of the flow of processing by which the information processing apparatus 10 reflects the movement of the user's face parts in the avatar will now be described with reference to the flowchart of FIG. 6. First, the sensor information acquisition unit 51 acquires sensor information containing the measurement results of the face sensor 43 from the video display device 40 via the relay device 30 (S1). Next, the face part identification unit 52 identifies the movement of the user's face parts using the sensor information acquired in S1 (S2).
 Next, the process execution unit 53 determines the positions of the avatar's facial parts according to the movement of the face parts identified in S2 (S3). It then updates the avatar image with the facial parts placed at the determined positions (S4) and displays it on the video display device 40 (S5). It also transmits the part position information determined in S3 to the information processing apparatuses used by the other game participants (S6). The process then returns to S1, and the above steps are repeated at predetermined intervals. In this way, an avatar whose expression changes in conjunction with the movement of the user's face can be presented to the user and the other game participants.
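 A minimal sketch of this S1-S6 loop, with the individual units passed in as callables, might look as follows. The frame rate and function names are assumptions, not part of the source.

    # Sketch of the S1-S6 loop described above, run at a fixed interval.
    # acquire_sensor_info(), identify_movements(), apply_to_avatar(),
    # render_locally() and send_to_participants() are placeholders for the
    # units in Figs. 4 and 6 (apply_to_avatar could be the helper sketched
    # earlier).
    import time

    def run_avatar_loop(acquire_sensor_info, identify_movements, apply_to_avatar,
                        render_locally, send_to_participants,
                        interval_s=1.0 / 30):
        avatar_params = {}
        while True:
            sensor_info = acquire_sensor_info()            # S1
            movements = identify_movements(sensor_info)    # S2: [(name, magnitude), ...]
            for movement, magnitude in movements:          # S3
                apply_to_avatar(avatar_params, movement, magnitude)
            render_locally(avatar_params)                  # S4, S5
            send_to_participants(avatar_params)            # S6
            time.sleep(interval_s)                         # repeat at a fixed period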
 Note that an avatar does not necessarily have all of the parts of an ordinary human and may have an appearance different from a person's. The movement of a part identified by the face part identification unit 52 therefore need not be reflected directly in the same part of the avatar; it may instead be reflected in the movement of another part associated with it in advance. As an example, when the avatar has an appearance modeled on an animal, the process execution unit 53 may link the movement of the user's eyebrows and the like to the movement of the avatar's ears or tail. When reflecting the movement of a part identified by the face part identification unit 52 in the movement of the avatar, the process execution unit 53 may also add effects such as emphasizing or reducing that movement. Since the degree of expression change varies from person to person, emphasizing it for a user whose expression changes little makes it easier to convey that user's emotions to the other party. It is also possible to mask expressions of anger.
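 A sketch of this remapping and emphasis, under the assumption that movements are simple name/magnitude pairs, could look like the following; the gain value, the "ear_raise" target, and the masking rule are invented for illustration.

    # Illustrative remapping, emphasis, and masking of identified movements.
    REMAP = {"brow_raise": "ear_raise"}   # animal-style avatar: brows drive the ears
    EXPRESSION_GAIN = 1.5                 # amplify for a user whose expression changes little
    MASKED_EMOTIONS = {"angry"}           # expressions the user chose to hide

    def translate_movement(movement, magnitude, estimated_emotion=None):
        """Return the avatar movement to apply, or None if it should be masked."""
        if estimated_emotion in MASKED_EMOTIONS:
            return None
        target = REMAP.get(movement, movement)
        return target, magnitude * EXPRESSION_GAIN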
 Although the process execution unit 53 executes an online game here, this is not a limitation; it may instead execute a local game in which multiple users participate. In this case as well, multiple users each wear a video display device 40 and the movements of their face parts are reflected in their respective avatars, so that each player can view the other players' avatars, whose expressions change in conjunction with those players' expressions. The process execution unit 53 may also execute, instead of game processing, communication software for communicating with other users through avatars. In this case too, moving the avatar's facial parts in conjunction with the movement of the user's face parts allows the avatar to express an expression close to the user's own.
 A display screen D capable of displaying video may also be provided on the portion of the front surface of the video display device 40 that covers the user's eyes. The process execution unit 53 may then display video of the area around the avatar's eyes on this display screen D. FIG. 7 shows the video display device 40 with part of the avatar displayed on the display screen D. The eyes and eyebrows displayed on this screen are also moved in conjunction with the movements of the user's eyes and eyebrows identified by the face part identification unit 52. This makes it possible to convey the user's expression to other people around the user as well.
 The processing executed by the process execution unit 53 is not limited to operating an avatar. For example, when the process execution unit 53 executes game processing, it may change the content of that processing according to the movement of the face parts identified by the face part identification unit 52. The movement of the user's face parts can be regarded as reflecting the user's emotions. The process execution unit 53 therefore estimates the user's emotion from the movement of the user's face parts according to predetermined criteria. Specifically, emotions such as the user laughing, being sad, or being angry are estimated from the movement of the user's eyebrows, eyelids, mouth, and so on. The progress of the game is then changed, or specific parameters are increased or decreased, according to the estimated emotion. This makes it possible to execute processing that reflects changes in the user's emotions. The process execution unit 53 may also calculate indices such as the user's degree of anger or sadness as numerical values based on the magnitude of the movement of the user's face parts. By executing processing using these indices, the progress of the game can be made to depend on the intensity of the user's emotions.
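 A rough, rule-based sketch of such emotion scoring is shown below. The rules and weights are placeholders, since the source only states that emotion is estimated from eyebrow, eyelid, and mouth movements and that numerical indices such as an anger level may be derived from their magnitude.

    # Placeholder rule-based emotion scoring from identified movements.
    def estimate_emotion(movements):
        """movements: dict of movement name -> magnitude in [0, 1]."""
        scores = {
            "happy": movements.get("mouth_corner_up", 0.0),
            "sad":   movements.get("brow_inner_raise", 0.0) * 0.5
                     + movements.get("mouth_corner_down", 0.0) * 0.5,
            "angry": movements.get("brow_lower", 0.0) * 0.7
                     + movements.get("eyelid_narrow", 0.0) * 0.3,
        }
        label = max(scores, key=scores.get)
        return label, scores   # label for branching, scores as numeric indices

    def adjust_game_difficulty(difficulty, anger_level):
        """Example of feeding one index back into a game parameter."""
        return max(0.5, difficulty - 0.2 * anger_level)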
 The process execution unit 53 may also use the identification results of the face part identification unit 52 to estimate not only the user's emotions but also the user's degree of fatigue or excitement. For example, the process execution unit 53 estimates that the user is excited when a specific movement of a specific face part is detected, such as the user opening the eyes wide or raising the eyebrows. It may also estimate that the user is tired when the movement of the face parts becomes sluggish. The process execution unit 53 may use not only the movement of the face parts but also the measurement results of the color sensor and temperature sensor included in the face sensor 43 to estimate the degree of excitement or fatigue.
 The process execution unit 53 may also use the identification results of the face part identification unit 52 when executing speech recognition processing to identify the content of the user's speech. Speech recognition processing uses the audio signal collected by a microphone, but that signal contains not only the user's voice but also noise such as ambient environmental sound, and it can be difficult to identify when the user is actually speaking. The process execution unit 53 therefore presumes that the user is speaking during periods in which the face part identification unit 52 identifies that the user's mouth is moving, and executes speech recognition processing on the audio signal collected during those periods. Such processing can reduce misrecognition in speech recognition.
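 The gating described here might be sketched as follows, with recognize() standing in for an arbitrary speech recognition engine; the buffering policy and the representation of audio as byte chunks are assumptions.

    # Sketch of gating speech recognition by the "mouth is moving" signal.
    def gated_recognition(frames, recognize):
        """frames: iterable of (audio_chunk_bytes, mouth_is_moving) pairs."""
        buffered, results = [], []
        for audio_chunk, mouth_is_moving in frames:
            if mouth_is_moving:
                buffered.append(audio_chunk)       # user is presumed to be speaking
            elif buffered:
                results.append(recognize(b"".join(buffered)))
                buffered = []                      # utterance ended; reset the buffer
        if buffered:                               # flush a trailing utterance
            results.append(recognize(b"".join(buffered)))
        return results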
 As described above, according to the information processing apparatus 10 of the present embodiment, by identifying the movement of the user's face parts using the measurement results of sensors arranged on the surface facing the user's face, changes in the user's facial expression that are hidden while the video display device 40 is worn can be captured and used for various kinds of processing.
 The embodiments of the present invention are not limited to those described above. For example, the process execution unit 53 may execute processing using not only the movement of the user's face parts identified by the face part identification unit 52 but also various other information about the user's state. For example, the process execution unit 53 may use the sensor information acquired by the sensor information acquisition unit 51 itself to estimate the user's emotions. In determining the content of processing, other types of information, such as the user's vital signs, may be used in combination with the result of emotion estimation.
 In the above description, the face part identification unit 52 identifies the movement of the user's face parts using only the detection results of the face sensor 43, but other information may also be used. For example, a camera installed in front of the user may capture images of the user wearing the video display device 40, and the captured images may be analyzed and used to identify face parts. Such a captured image includes the area of the user's face not covered by the video display device 40 (in particular the area around the user's mouth), so analyzing it makes it possible to identify the movement of the user's mouth and the like. The movement of the area covered by the video display device 40, on the other hand, can be measured by the face sensor 43 as described above. By combining these two kinds of information, the movement of face parts over the user's entire face can be identified with good accuracy.
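 A simple sketch of such fusion, assuming both sources report movements as name/magnitude pairs and that each source is trusted only for the face regions it can observe, is given below; the region assignments are drawn from the description above, not specified by it.

    # Sketch of combining camera-based tracking of the uncovered lower face
    # with sensor-based tracking of the covered upper face.
    CAMERA_PARTS = {"mouth_open", "mouth_corner_up", "mouth_corner_down"}
    SENSOR_PARTS = {"brow_raise", "brow_lower", "eyelid_close", "gaze"}

    def fuse_movements(sensor_movements, camera_movements):
        """Merge two dicts of movement -> magnitude, keeping each source's strong regions."""
        fused = {}
        for name, magnitude in sensor_movements.items():
            if name in SENSOR_PARTS:
                fused[name] = magnitude
        for name, magnitude in camera_movements.items():
            if name in CAMERA_PARTS:
                fused[name] = magnitude
        return fused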
 In the above description, the information processing apparatus 10 locally connected to the video display device 40 identifies the face parts and executes processing according to the identified content, but this is not a limitation; part of the processing described as being executed by the information processing apparatus 10 may be executed by a server apparatus or the like connected via a communication network. For example, in an online game, the server apparatus may execute the processing that changes the avatar's expression according to the locally identified movement of face parts. The server apparatus may also identify face parts using sensor information transmitted from the local information processing apparatus 10.
 1 information processing system, 10 information processing apparatus, 11 control unit, 12 storage unit, 13 interface unit, 20 operation device, 30 relay device, 40 video display device, 41 video display element, 42 optical element, 43 face sensor, 45 communication interface, 51 sensor information acquisition unit, 52 face part identification unit, 53 process execution unit.

Claims (7)

  1.  An information processing apparatus connected to a video display device worn on a user's head, the information processing apparatus comprising:
     an acquisition unit that acquires a detection result of a sensor arranged on a surface of the video display device that faces the user; and
     an identification unit that identifies a movement of a part of the user's face based on the acquired detection result.
  2.  The information processing apparatus according to claim 1, wherein
     the sensor includes at least one of a proximity sensor, a touch sensor, and a myoelectric sensor.
  3.  The information processing apparatus according to claim 1, further comprising
     a process execution unit that operates a part of an object representing the user based on the identified movement of the part.
  4.  The information processing apparatus according to claim 1, wherein
     the video display device covers a partial area of the user's face while worn by the user, and
     the identification unit identifies, based on the acquired detection result, a movement of a part of the user's face that is not covered by the video display device.
  5.  A video display device worn on a user's head, wherein
     a sensor for identifying a movement of a part of the user's face is arranged on a surface that faces the user when the user wears the device.
  6.  A method of controlling an information processing apparatus connected to a video display device worn on a user's head, the method comprising:
     acquiring a detection result of a sensor arranged on a surface of the video display device that faces the user; and
     identifying a movement of a part of the user's face based on the acquired detection result.
  7.  A program for causing a computer connected to a video display device worn on a user's head to function as:
     an acquisition unit that acquires a detection result of a sensor arranged on a surface of the video display device that faces the user; and
     an identification unit that identifies a movement of a part of the user's face based on the acquired detection result.
PCT/JP2016/071137 2015-11-20 2016-07-19 Information processing device and video display device WO2017085963A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015227826A JP2019023768A (en) 2015-11-20 2015-11-20 Information processing apparatus and video display device
JP2015-227826 2015-11-20

Publications (1)

Publication Number Publication Date
WO2017085963A1 true WO2017085963A1 (en) 2017-05-26

Family

ID=58718566

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/071137 WO2017085963A1 (en) 2015-11-20 2016-07-19 Information processing device and video display device

Country Status (2)

Country Link
JP (1) JP2019023768A (en)
WO (1) WO2017085963A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021051230A (en) * 2019-09-25 2021-04-01 株式会社Nttドコモ Display device and method for display

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4465414B2 (en) * 2008-07-11 2010-05-19 パナソニック株式会社 Device control method and electroencephalogram interface system using electroencephalogram
US20140078049A1 (en) * 2011-03-12 2014-03-20 Uday Parshionikar Multipurpose controllers and methods
JP2014021707A (en) * 2012-07-18 2014-02-03 Nikon Corp Information input/output device and information input/output method
WO2014192552A1 (en) * 2013-05-30 2014-12-04 ソニー株式会社 Display controller, display control method, and computer program
JP2015092646A (en) * 2013-11-08 2015-05-14 ソニー株式会社 Information processing device, control method, and program

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021517689A (en) * 2018-03-16 2021-07-26 マジック リープ, インコーポレイテッドMagic Leap,Inc. Facial expressions from eye tracking camera
JP7344894B2 (en) 2018-03-16 2023-09-14 マジック リープ, インコーポレイテッド Facial expressions from eye-tracking cameras

Also Published As

Publication number Publication date
JP2019023768A (en) 2019-02-14

Similar Documents

Publication Publication Date Title
JP7344894B2 (en) Facial expressions from eye-tracking cameras
JP7378431B2 (en) Augmented reality display with frame modulation functionality
JP7190434B2 (en) Automatic control of wearable display devices based on external conditions
US11656680B2 (en) Technique for controlling virtual image generation system using emotional states of user
CN112034977B (en) Method for MR intelligent glasses content interaction, information input and recommendation technology application
JP2020047237A (en) Method for generating facial expression using data fusion
US10350761B2 (en) Communication device
WO2017085963A1 (en) Information processing device and video display device
JP7387198B2 (en) Program and image display system
WO2020195292A1 (en) Information processing device that displays sensory organ object
US20240005612A1 (en) Content transformations based on reflective object recognition
JP2021133469A (en) Robot, robot control program, robot control method and object control program as well as object control method
WO2022258647A1 (en) Method and device for determining a visual performance

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 16865966
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 16865966
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: JP