WO2022070429A1 - Estimation device, estimation method, and program - Google Patents

Estimation device, estimation method, and program Download PDF

Info

Publication number
WO2022070429A1
Authority
WO
WIPO (PCT)
Prior art keywords
gesture
mask
user
breathing
feature amount
Prior art date
Application number
PCT/JP2020/037659
Other languages
French (fr)
Japanese (ja)
Inventor
宇翔 草深
Original Assignee
Nippon Telegraph and Telephone Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corporation
Priority to PCT/JP2020/037659
Publication of WO2022070429A1

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/113Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb occurring during breathing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the present disclosure relates to an estimation device, an estimation method and a program for estimating a gesture by breathing performed by a user wearing a mask.
  • as a non-contact input method, for example, a method of identifying a user's gesture from finger movements or facial expressions and performing an operation input according to the identified gesture is being studied. As another approach, a method of inputting operations by the user's breathing is being studied.
  • Non-Patent Document 1 describes an input method by blowing a breath on a microphone of a headset.
  • in this input method, the user's input to the microphone (the blowing of breath) is learned and identified using a Siamese Network, a deep metric learning method.
  • Non-Patent Document 2 describes a method of detecting a user's breathing with a chest-strap sensor attached to the user's chest in an amusement park attraction, and controlling the attraction according to the detection result.
  • Non-Patent Document 3 describes a method of substituting for touch operations by having the user blow on a notebook PC (Personal Computer) and detecting the position where the breath was blown.
  • Non-Patent Document 4 describes a method in which a user blows on a screen, detects a position where the breath is blown, and processes it as an input.
  • the screen is divided into a plurality of grids, and a sensor is provided in each grid to detect the position where the breath is blown.
  • Non-Patent Document 5 describes a method of inputting an operation in a VR (Virtual Reality) game by exhaling a breath by a user.
  • in this method, the user's breathing is detected by a sensor, wrapped around the user's chest, that detects changes in the user's chest circumference due to breathing.
  • Non-Patent Document 6 describes a method of inputting a user's operations in a game using a gas-mask-type interface, worn by the user, that is equipped with a breathing sensor.
  • Patent Document 1 describes a method of estimating a subject's breath from the subject's electrocardiographic waveform.
  • the method described in Patent Document 1 is intended to improve the accuracy of respiratory estimation under various situations, and is not related to input by gesture.
  • the input method using hand gestures has the problem that the situations in which it can be used are limited, and it lacks versatility.
  • the input method using the user's facial expressions or the user's breathing enables input even in situations where the user's hands are occupied, and is highly versatile.
  • the object of the present disclosure, made in view of the above-mentioned problems, is to provide an estimation device, an estimation method, and a program that can estimate gestures performed by a user, in a more versatile way and without increasing the load on the user, even while the user wears a mask.
  • the estimation device includes a feature amount extraction unit that extracts a feature amount related to breathing through the mask by a user wearing a mask, and a gesture estimation unit that estimates, based on the feature amount extracted by the feature amount extraction unit, the breathing gesture performed by the user.
  • the estimation method includes a step of extracting a feature amount related to breathing through the mask by a user wearing a mask, and a step of estimating, based on the extracted feature amount, the breathing gesture performed by the user.
  • the program according to the present disclosure causes the computer to function as the above-mentioned estimation device.
  • according to the estimation device, estimation method, and program of the present disclosure, it is possible to estimate gestures performed by the user, in a more versatile way and without increasing the load on the user, even while the user wears a mask.
  • FIG. 7 is a flowchart showing an example of the operation when learning the gesture model in the estimation device shown in FIG. 1.
  • FIG. 8 is a flowchart showing an example of the operation when estimating a gesture in the estimation device shown in FIG. 1.
  • FIG. 9 is a diagram showing another configuration example of the estimation device according to one embodiment of the present disclosure. FIG. 10 is a diagram showing a configuration example of the operation DB shown in FIG. 9. FIG. 11 is a flowchart showing an example of the operation when registering an operation in the operation DB in the estimation device shown in FIG. 9. FIG. 12 is a flowchart showing an example of the operation when executing an operation on a device in the estimation device shown in FIG. 9.
  • FIG. 1 is a diagram showing a configuration example of the estimation device 10 according to the embodiment of the present disclosure.
  • the estimation device 10 according to the present embodiment estimates the gesture by breathing performed by the user wearing the mask.
  • the estimation device 10 includes a respiration data acquisition unit 11, a wind direction data acquisition unit 12, a feature amount extraction unit 13, a gesture learning unit 14, and a gesture estimation unit 15.
  • the respiration data acquisition unit 11 and the wind direction data acquisition unit 12 constitute a data acquisition unit 16.
  • the breathing data acquisition unit 11 acquires breathing data indicating the state of inhalation and exhalation of breath through the mask by the user.
  • FIG. 2 is a cross-sectional view showing a configuration example of a detection mechanism 110 for the respiratory data acquisition unit 11 to acquire respiratory data.
  • the detection mechanism 110 is attached to a mask worn by the user. Specifically, an opening is provided in a part of the mask, and the detection mechanism 110 is mounted so as to be fitted in the opening.
  • the detection mechanism 110 includes a first exterior member 111, a second exterior member 112, an air valve 114, a filter 115, and a distance sensor 116.
  • the first exterior member 111 includes an opening portion 111a provided with an opening for breathing, and an upright portion 111b standing from the opening portion 111a.
  • the second exterior member 112 includes an opening portion 112a provided with an opening for breathing, and an upright portion 112b standing from the opening portion 112a.
  • the first exterior member 111 is arranged on the human body side with the mask equipped with the detection mechanism 110 worn by the user. Further, the first exterior member 111 is arranged so that the opening 111a faces the human body while the user wears a mask equipped with the detection mechanism 110.
  • the second exterior member 112 is arranged on the side opposite to the human body while the user wears the mask on which the detection mechanism 110 is attached.
  • the second exterior member 112 is arranged so that the opening 112a faces the human body while the user wears a mask equipped with the detection mechanism 110.
  • the upright portion 111b of the first exterior member 111 and the upright portion 112b of the second exterior member 112 sandwich and fix the mask fabric near the opening of the mask, as shown in FIG. 3.
  • the detection mechanism 110 is attached to the mask by the mask fabric near the opening being sandwiched and fixed between the upright portion 111b of the first exterior member 111 and the upright portion 112b of the second exterior member 112.
  • the air valve 114 and the filter 115 are provided in the space formed by the first exterior member 111 and the second exterior member 112.
  • the air valve 114 is supported by a support portion 113, which is fixed to the opening portion 111a of the first exterior member 111 and the opening portion 112a of the second exterior member 112, so as to be substantially parallel to these opening portions.
  • the air valve 114 is made of, for example, silicone rubber and is flexible enough to be deformed by the wind caused by inhalation and exhalation by a person.
  • the filter 115 is made of, for example, a material having a filtering function comparable to that of the mask fabric. As shown in FIG. 2, the filter 115 is provided along the opening portion 112a of the second exterior member 112.
  • with the detection mechanism 110, when a user wearing the mask equipped with the detection mechanism 110 exhales, the air valve 114 curves away from the human body, as shown in FIG. 4. Conversely, when the user wearing the mask inhales, the air valve 114 curves toward the human body, as shown in FIG. 5.
  • the distance sensor 116 measures the distance to the air valve 114. As described above, the air valve 114 is curved and displaced by the user's breathing. Therefore, by measuring the distance to the air valve 114 with the distance sensor 116, it is possible to estimate whether the user is inhaling or exhaling, the strength of breathing, and the like.
  • the respiration data acquisition unit 11 acquires, for example, the distance to the air valve 114 measured by the distance sensor 116 as respiration data. That is, the respiration data acquisition unit 11 acquires respiration data based on the position of the air valve 114 that is displaced according to the inhalation and exhalation of the user.
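As a rough illustration of how such respiration data could be turned into a breathing state, the following sketch is one possible reading, not an implementation from the disclosure; the threshold, units, and the assumption that the sensor sits on the body side (so the distance grows on exhalation and shrinks on inhalation) are all hypothetical:

```python
# Hypothetical sketch: labelling distance-sensor samples as inhalation or
# exhalation, assuming the sensor is on the human-body side so the distance
# to the air valve 114 grows on exhalation and shrinks on inhalation.
def breathing_states(distances, rest_distance, threshold=1.0):
    """Label each distance sample (e.g. in mm) as 'exhale', 'inhale' or 'rest'."""
    states = []
    for d in distances:
        delta = d - rest_distance
        if delta > threshold:
            states.append("exhale")   # valve pushed away from the body
        elif delta < -threshold:
            states.append("inhale")   # valve pulled toward the body
        else:
            states.append("rest")     # within the noise band around rest
    return states
```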
  • the mechanism for the respiratory data acquisition unit 11 to acquire respiratory data is not limited to the detection mechanism 110 shown in FIG.
  • the respiration data acquisition unit 11 may acquire respiration data from an image of a mask worn by the user. When the user wearing the mask exhales, at least part of the mask swells. On the other hand, when the user wearing the mask inhales, at least a part of the mask shrinks. That is, since the shape of the mask changes (the predetermined measurement point on the mask surface is displaced) due to the breathing by the user wearing the mask, the breathing state of the user can be estimated from the change in the shape of the mask. Therefore, the respiration data acquisition unit 11 may acquire respiration data based on the change in the shape of the mask in the photographed image of the mask worn by the user. In this way, the respiration data acquisition unit 11 may acquire respiration data based on the displacement of the mask or an accessory attached to the mask (for example, the air valve 114 shown in FIG. 2).
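For the image-based alternative, one first step might be to quantify how much the mask region changes between consecutive camera frames; the region of interest and the use of frame differencing are assumptions, since the disclosure only states that the mask swells on exhalation and contracts on inhalation. A fuller implementation would also need to track the mask contour or depth to tell swelling from contraction:

```python
# Hypothetical sketch of the image-based alternative: mean absolute pixel
# change inside the mask region between two frames as a proxy for mask motion.
import cv2
import numpy as np

def mask_motion(prev_frame, frame, roi):
    """Mean absolute pixel change inside the mask ROI given as (x, y, w, h)."""
    x, y, w, h = roi
    a = cv2.cvtColor(prev_frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    b = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    return float(np.mean(cv2.absdiff(a, b)))
```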
  • respiration data acquisition unit 11 may acquire, for example, the respiration sound collected by the microphone attached to the mask as respiration data.
  • the respiration data acquisition unit 11 outputs the acquired respiration data to the feature amount extraction unit 13.
  • the wind direction data acquisition unit 12 acquires wind direction data indicating the direction of the wind caused by the user's breathing.
  • the wind direction data acquisition unit 12 acquires, for example, the direction of the user's head detected by the inertial sensor attached to the mask worn by the user as wind direction data.
  • normally, the wind caused by the user's breathing travels from the user's mouth in the direction the user's face is facing. Therefore, the direction of the wind caused by the user's breathing can be estimated from the direction of the user's head detected by the inertial sensor.
  • the wind direction data acquisition unit 12 may estimate the position where the breath is blown by the microphone array, and acquire the wind direction data of the wind caused by breathing (exhalation) based on the position.
  • the wind direction data acquisition unit 12 outputs the acquired wind direction data to the feature amount extraction unit 13.
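As a sketch of the head-orientation approach, the yaw and pitch reported by an inertial sensor can be converted into a unit vector for the estimated airflow direction; the coordinate convention is an assumption, since the disclosure does not specify one:

```python
# Hypothetical sketch: head orientation as a proxy for the direction of the
# breath-induced airflow, following the passage's assumption that the airflow
# travels the way the face is pointing.
import math

def wind_direction(yaw_deg, pitch_deg):
    """Unit vector of the estimated airflow: x lateral, y vertical, z forward."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (
        math.cos(pitch) * math.sin(yaw),  # lateral component
        math.sin(pitch),                  # vertical component
        math.cos(pitch) * math.cos(yaw),  # forward component
    )
```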
  • as described above, the respiration data acquisition unit 11 and the wind direction data acquisition unit 12 constitute the data acquisition unit 16. Therefore, the data acquisition unit 16 acquires breathing data indicating the state of inhalation and exhalation of breath through the mask by the user and wind direction data indicating the direction of the wind caused by breathing, and outputs them to the feature amount extraction unit 13.
  • the feature amount extraction unit 13 extracts the feature amount related to breathing through the mask by the user wearing the mask. Specifically, the feature amount extraction unit 13 extracts the feature amount related to breathing through the mask from the breathing data acquired by the breathing data acquisition unit 11 and the wind direction data acquired by the wind direction data acquisition unit 12.
  • the feature amount extraction unit 13 extracts, as the feature amounts related to breathing through the mask, for example, the number of inhalations and/or exhalations, the time required per inhalation and/or exhalation, the interval between inhalations and/or exhalations, and changes in the direction of the wind caused by breathing.
  • feature amounts such as the number of inhalations and/or exhalations, the time required per inhalation and/or exhalation, and the interval between inhalations and/or exhalations can be obtained from the breathing data alone, without the wind direction data. Therefore, when a gesture is estimated using only the feature amounts extracted from the breathing data, the wind direction data is unnecessary, and the configuration for acquiring it (an inertial sensor worn on the user's head, a microphone array, etc.) is not always essential.
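A minimal sketch of this feature extraction, building on the hypothetical breathing_states() helper above, might summarise a time-stamped state sequence as follows; wind direction changes could be appended to the same dictionary when available:

```python
# Hypothetical sketch of the feature extraction: counts, durations, and
# intervals of breaths derived from a time-stamped breathing state sequence.
def extract_features(times, states):
    """Summarise a breathing episode as a small feature dictionary."""
    runs = []  # (state, start_time, end_time) for consecutive identical states
    for t, state in zip(times, states):
        if runs and runs[-1][0] == state:
            runs[-1] = (state, runs[-1][1], t)   # extend the current run
        else:
            runs.append((state, t, t))           # start a new run
    breaths = [r for r in runs if r[0] in ("inhale", "exhale")]
    durations = [end - start for _, start, end in breaths]
    gaps = [b[1] - a[2] for a, b in zip(breaths, breaths[1:])]
    return {
        "n_inhale": sum(r[0] == "inhale" for r in breaths),
        "n_exhale": sum(r[0] == "exhale" for r in breaths),
        "mean_duration": sum(durations) / len(durations) if durations else 0.0,
        "mean_interval": sum(gaps) / len(gaps) if gaps else 0.0,
    }
```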
  • the feature amount extraction unit 13 outputs the extracted feature amounts to the gesture learning unit 14 when learning the gesture model 17, described later, and to the gesture estimation unit 15 when estimating a gesture performed by the user.
  • the gesture learning unit 14 performs learning based on the feature amount extracted by the feature amount extraction unit 13, and generates a gesture model 17 as a learning result.
  • the gesture model 17 is a model for identifying a gesture corresponding to a feature amount (breathing pattern) related to breathing through a mask.
  • the gesture learning unit 14 performs learning by, for example, classification or clustering based on the feature amounts. In the case of supervised learning such as classification, labels indicating the gestures corresponding to the feature amounts related to breathing through the mask are input to the gesture learning unit 14, for example manually.
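The disclosure does not name a specific learning algorithm; as one possible realization of supervised classification, the sketch below trains a scikit-learn random forest on manually labelled feature vectors to play the role of the gesture model 17:

```python
# Minimal sketch of supervised gesture-model learning, assuming feature
# vectors have already been extracted and manually labelled. The classifier
# choice is an assumption, not from the disclosure.
from sklearn.ensemble import RandomForestClassifier

def learn_gesture_model(feature_vectors, labels):
    """feature_vectors: list of numeric lists; labels: gesture names."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(feature_vectors, labels)
    return model  # stands in for gesture model 17
```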
  • FIG. 6 is a diagram showing an example of a gesture by breathing performed by a user wearing a mask.
  • the gesture model 17 stores gestures corresponding to the user's breathing pattern, as shown in FIG.
  • the arrow with the hatching diagonally upward to the right indicates the action of exhaling
  • the arrow with the hatching diagonally downward to the right indicates the action of inhaling.
  • as a gesture by breathing, for example, there is a gesture of inhaling or exhaling once, as shown in FIG. 6. This gesture may be further subdivided according to the length, speed, and intensity of the breath.
  • there is also a gesture of inhaling or exhaling twice in succession.
  • there is also a gesture combining inhalation and exhalation, for example inhaling twice and then exhaling once.
  • there is also a gesture in which the user inhales or exhales while tilting the face.
  • there is also a gesture in which the user inhales or exhales while turning the face from right to left.
  • there is also a gesture in which the user inhales or exhales while rotating the face clockwise (CW: Clockwise) or counterclockwise (CCW: Counterclockwise).
  • there is also a gesture in which the user inhales or exhales with the face facing straight ahead (not tilted), then moves the face laterally and exhales or inhales again.
  • the gesture learning unit 14 learns the gestures corresponding to the features of each of the various breathing patterns described above and generates the gesture model 17.
  • the gesture estimation unit 15 estimates the gesture by breathing performed by the user wearing the mask based on the feature amount extracted by the feature amount extraction unit 13. Specifically, the gesture estimation unit 15 estimates the gesture corresponding to the extracted feature amount based on the gesture model 17.
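Continuing the hypothetical sketch above, estimating the gesture for newly extracted features then reduces to a prediction with the learned model; the training data, feature values, and label names below are illustrative assumptions only:

```python
# Continuing the hypothetical sketch: the trained classifier stands in for
# gesture model 17, mapping extracted feature vectors to gesture labels.
# training_vectors and training_labels are assumed to come from the learning
# step described above.
model = learn_gesture_model(training_vectors, training_labels)

features = [[2, 0, 0.35, 0.4]]        # illustrative: two short inhalations
gesture = model.predict(features)[0]  # e.g. "inhale_twice" (hypothetical label)
```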
  • in FIG. 1, the estimation device 10 includes the data acquisition unit 16 (the respiration data acquisition unit 11 and the wind direction data acquisition unit 12), and the feature amount extraction unit 13 acquires the breathing data and the wind direction data from the data acquisition unit 16; however, the present disclosure is not limited to this.
  • for example, the estimation device 10 may acquire the breathing data and the wind direction data by communicating, via a network, with an external device that analyzes the detection results of the distance sensor 116 of the detection mechanism 110 attached to the mask worn by the user and of the inertial sensor worn on the user's head. Therefore, in the present disclosure, the respiration data acquisition unit 11 and the wind direction data acquisition unit 12 are not essential components. Further, when a constructed gesture model 17 already exists, the estimation device 10 does not have to include the gesture learning unit 14.
  • FIG. 7 is a flowchart showing an example of the operation of the estimation device 10 according to the present embodiment when learning the gesture model 17.
  • the data acquisition unit 16 acquires breathing data indicating the breathing state (inhalation and exhalation) of the user wearing the mask, and wind direction data indicating the direction of the wind caused by the breathing of the user wearing the mask (step S11). Specifically, the respiration data acquisition unit 11 constituting the data acquisition unit 16 acquires, for example, the detection result of the distance sensor 116 of the detection mechanism 110 described with reference to FIG. 2 as breathing data. Further, the wind direction data acquisition unit 12 constituting the data acquisition unit 16 acquires the direction of the user's head detected by the inertial sensor worn on the user's head as wind direction data.
  • the feature amount extraction unit 13 extracts the feature amounts related to breathing through the mask by the user from the breathing data and wind direction data acquired by the data acquisition unit 16 (step S12), and outputs the extracted feature amounts to the gesture learning unit 14.
  • the feature amount extraction unit 13 may acquire breathing data and wind direction data by communicating with an external device via a network. Therefore, the process of step S11 in which the respiration data is acquired by the respiration data acquisition unit 11 and the wind direction data is acquired by the wind direction data acquisition unit 12 is not essential.
  • the gesture learning unit 14 generates and stores the gesture model 17 by learning based on the feature amount extracted by the feature amount extraction unit 13 (step S13).
  • FIG. 8 is a flowchart showing the operation of the estimation device 10 when estimating the gesture, and is a diagram for explaining the estimation method by the estimation device 10 according to the present embodiment.
  • the data acquisition unit 16 acquires breathing data and wind direction data (step S21). Since the method of acquiring the respiration data and the wind direction data may be the same as that of step S11 described with reference to FIG. 7, the description thereof will be omitted.
  • the feature amount extraction unit 13 extracts the feature amounts related to breathing through the mask by the user from the breathing data and wind direction data acquired by the data acquisition unit 16 (step S22), and outputs the extracted feature amounts to the gesture estimation unit 15.
  • the feature amount extraction unit 13 may acquire breathing data and wind direction data by communicating with an external device via a network. Therefore, the process of step S21 in which the respiration data acquisition unit 11 acquires the respiration data and the wind direction data acquisition unit 12 acquires the wind direction data is not essential.
  • the gesture estimation unit 15 estimates the gesture due to the breath performed by the user based on the feature amount extracted by the feature amount extraction unit 13 (step S23). Specifically, the gesture estimation unit 15 estimates the gesture corresponding to the extracted feature amount based on the gesture model 17.
  • as described above, the estimation method by the estimation device 10 includes a step (step S22) of extracting a feature amount related to breathing through the mask by a user wearing a mask, and a step (step S23) of estimating, based on the extracted feature amount, the breathing gesture performed by the user.
  • the estimation device 10 may further include an operation function for operating a device such as an earphone or a smartphone based on the estimated gesture.
  • FIG. 9 shows a configuration example of the estimation device 10 having such an operation function.
  • the estimation device 10 shown in FIG. 9 further includes an operation DB 21 and an operation execution unit 22 as compared with the estimation device 10 shown in FIG.
  • the operation DB 21 is a database that stores gestures by breathing performed by the user in association with the operation of the device.
  • FIG. 10 is a diagram showing a configuration example of the operation DB 21. Note that FIG. 10 shows an example in which a music device having a music reproduction function (for example, an earphone or the like) is operated by a gesture.
  • the operation DB 21 stores the gesture by breathing and the operation of the music device corresponding to the gesture in association with each other.
  • the gesture of inhaling twice is associated with the operation of pausing the music reproduction.
  • the gesture of breathing while rotating the face clockwise (CW) is associated with the operation of playing the next song.
  • the gesture of breathing while rotating the face counterclockwise (CCW) is associated with the operation of playing the previous song.
  • the gesture of exhaling from the bottom to the top is associated with the operation of increasing the volume.
  • the operation execution unit 22 executes the operation stored in the operation DB 21 in association with the gesture estimated by the gesture estimation unit 15.
  • FIG. 11 is a flowchart showing an example of an operation when registering an operation in the operation DB 21 in the estimation device 10.
  • the operation DB 21 stores the input gesture and the operation of the device in association with each other (step S31).
  • the operation execution unit 22 executes the operation of the device stored in the operation DB 21 in response to the estimated gesture (step S41). In the example shown in FIG. 10, the operation execution unit 22 pauses the music reproduction, for example, when the gesture estimation unit 15 estimates that a gesture of inhaling twice was performed during the reproduction of music.
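A minimal sketch of how the operation DB 21 and the operation execution unit 22 could be realized for the music-player example of FIG. 10; the gesture labels and player methods are illustrative assumptions, not from the disclosure. Registration (step S31) amounts to adding an entry, and execution (step S41) to a lookup and call:

```python
# Hypothetical sketch of operation DB 21 and operation execution unit 22.
# Gesture labels and player methods are illustrative only.
operation_db = {
    "inhale_twice":         lambda player: player.pause(),           # pause playback
    "breathe_rotating_cw":  lambda player: player.next_track(),      # next song
    "breathe_rotating_ccw": lambda player: player.previous_track(),  # previous song
    "exhale_upward":        lambda player: player.volume_up(),       # raise volume
}

def register_operation(gesture, operation):
    """Step S31: store a gesture and a device operation in association."""
    operation_db[gesture] = operation

def execute_operation(gesture, player):
    """Step S41: execute the operation registered for the estimated gesture."""
    operation = operation_db.get(gesture)
    if operation is not None:
        operation(player)
```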
  • the operation execution unit 22 may operate a smartphone, a game device, or the like based on the estimated gesture. In this case, the operation execution unit 22 may perform, according to the gesture performed by the user, a tap operation, a double-tap operation, an operation controlling the firing direction in a shooting game, and the like. Further, the operation execution unit 22 may start different applications based on the estimated gesture. In this case, the estimation device 10 may start different applications (a recipe search application, a shopping memo application, etc.) according to the gesture performed by the user. Further, the operation execution unit 22 may perform, according to the estimated gesture, an operation of inserting punctuation marks in speech recognition, a line-feed operation, an operation of designating whether input is to be recognized as a word or as a control character, and the like.
  • as described above, the estimation device 10 includes the feature amount extraction unit 13 that extracts a feature amount related to breathing through the mask by a user wearing a mask, and the gesture estimation unit 15 that estimates, based on the extracted feature amount, the breathing gesture performed by the user.
  • a program may cause a computer to function as each unit of the estimation device 10 described above.
  • this can be realized by storing, in a storage unit of the computer, a program describing the processing contents that realize the functions of each unit of the estimation device 10, and having the CPU (Central Processing Unit) of the computer read out and execute the program. That is, the program can cause the computer to function as the estimation device 10 described above.
  • this program may be recorded on a computer-readable medium, and can be installed on a computer by using the computer-readable medium.
  • the computer-readable medium on which the program is recorded may be a non-transient recording medium.
  • the non-transient recording medium is not particularly limited, but may be, for example, a recording medium such as a CD-ROM or a DVD-ROM. This program can also be provided via a network.
  • for example, the components can be rearranged so long as no logical inconsistency results, and a plurality of components can be combined into one or divided.
  • 10 Estimation device, 11 Respiration data acquisition unit, 12 Wind direction data acquisition unit, 13 Feature amount extraction unit, 14 Gesture learning unit, 15 Gesture estimation unit, 16 Data acquisition unit, 17 Gesture model, 21 Operation DB, 22 Operation execution unit, 111 First exterior member, 112 Second exterior member, 113 Support portion, 114 Air valve, 115 Filter, 116 Distance sensor, 111a, 112a Opening portion, 111b, 112b Upright portion

Abstract

An estimation device (10) according to the present disclosure comprises a feature extraction unit (13) that extracts a feature related to breathing through a mask by a user wearing the mask, and a gesture estimation unit (15) that estimates a breath-based gesture performed by the user on the basis of the feature extracted by the feature extraction unit (13).

Description

Estimation device, estimation method, and program
The present disclosure relates to an estimation device, an estimation method, and a program for estimating gestures made by breathing, performed by a user wearing a mask.
To prevent the spread of infectious diseases caused by viruses and the like, keeping the hands clean by alcohol disinfection is recommended. However, when a user touches his or her belongings (for example, a smartphone, earphones, or clothes) with a hand that has touched a public facility such as a handrail or a vending machine, viruses and the like can spread into the user's living space through those belongings, and the risk of infection increases. To prevent this, the user could disinfect with alcohol every time he or she touches something, but having to disinfect that frequently is not realistic. In addition, the user may touch things without noticing it.
As described above, input methods in which operations are entered into devices such as smartphones and earphones by touching them with the fingers carry the risk of spreading infectious diseases. Therefore, methods of providing input to devices without contact are being studied.
As a non-contact input method, for example, a method of identifying a user's gesture from finger movements or facial expressions and performing an operation input according to the identified gesture is being studied. As another approach, a method of inputting operations by the user's breathing is being studied.
For example, Non-Patent Document 1 describes an input method in which breath is blown onto the microphone of a headset. In this input method, the user's input to the microphone (the blowing of breath) is learned and identified using a Siamese Network, a deep metric learning method.
Non-Patent Document 2 describes a method of detecting a user's breathing with a chest-strap sensor attached to the user's chest in an amusement park attraction, and controlling the attraction according to the detection result.
Non-Patent Document 3 describes a method of substituting for touch operations by having the user blow on a notebook PC (Personal Computer) and detecting the position where the breath was blown.
Non-Patent Document 4 describes a method in which the user blows on a screen, and the position where the breath was blown is detected and processed as input. In this method, the screen is divided into a plurality of grids, and a sensor is provided in each grid to detect the position where the breath was blown.
Non-Patent Document 5 describes a method of inputting operations in a VR (Virtual Reality) game by the user exhaling. In this method, the user's breathing is detected by a sensor, wrapped around the user's chest, that detects changes in the user's chest circumference due to breathing.
Non-Patent Document 6 describes a method of inputting a user's operations in a game using a gas-mask-type interface, worn by the user, that is equipped with a breathing sensor.
Patent Document 1 describes a method of estimating a subject's breathing from the subject's electrocardiographic waveform. However, the method described in Patent Document 1 is intended to improve the accuracy of breathing estimation under various conditions, and does not concern input by gestures.
International Publication No. 2017/082165
For example, in situations where the user's hands are occupied, such as during surgery or cooking, or for users with a handicap such as impaired use of the hands, it is difficult to perform gestures by finger movements. Input methods using hand gestures are therefore limited in the situations where they can be used, and lack versatility. On the other hand, input methods using the user's facial expressions or the user's breathing enable input even in situations where the user's hands are occupied, and are highly versatile.
However, to prevent the spread of infectious diseases, users increasingly wear masks, and while a user wears a mask it is difficult to identify gestures based on the user's facial expressions. In addition, when the user wears a mask, the breath exhaled by the user is blocked by the mask. Therefore, while the user wears a mask, it is difficult to use input methods based on the user's facial expressions, or input methods based on blowing breath as described in Non-Patent Documents 1, 3, and 4. Further, with input methods in which a sensor, or an interface including a sensor, is worn by the user as described in Non-Patent Documents 2, 5, and 6, wearing the sensor or the like increases the load on the user.
Therefore, there is a demand for a technique that can estimate gestures performed by a user, in a more versatile way and without increasing the load on the user, even while the user wears a mask.
The object of the present disclosure, made in view of the above problems, is to provide an estimation device, an estimation method, and a program that can estimate gestures performed by a user, in a more versatile way and without increasing the load on the user, even while the user wears a mask.
To solve the above problems, the estimation device according to the present disclosure includes a feature amount extraction unit that extracts a feature amount related to breathing through the mask by a user wearing a mask, and a gesture estimation unit that estimates, based on the feature amount extracted by the feature amount extraction unit, the breathing gesture performed by the user.
To solve the above problems, the estimation method according to the present disclosure includes a step of extracting a feature amount related to breathing through the mask by a user wearing a mask, and a step of estimating, based on the extracted feature amount, the breathing gesture performed by the user.
To solve the above problems, the program according to the present disclosure causes a computer to function as the estimation device described above.
According to the estimation device, the estimation method, and the program of the present disclosure, it is possible to estimate gestures performed by the user, in a more versatile way and without increasing the load on the user, even while the user wears a mask.
FIG. 1 is a diagram showing a configuration example of an estimation device according to one embodiment of the present disclosure.
FIG. 2 is a diagram showing a configuration example of a detection mechanism for the respiration data acquisition unit shown in FIG. 1 to acquire respiration data.
FIG. 3 is a diagram showing the detection mechanism shown in FIG. 2 attached to a mask.
FIG. 4 is a diagram showing the displacement of the air valve shown in FIG. 2 accompanying breathing (exhalation).
FIG. 5 is a diagram showing the displacement of the air valve shown in FIG. 2 accompanying breathing (inhalation).
FIG. 6 is a diagram showing an example of breathing gestures performed by a user wearing a mask.
FIG. 7 is a flowchart showing an example of the operation of the estimation device shown in FIG. 1 when learning the gesture model.
FIG. 8 is a flowchart showing an example of the operation of the estimation device shown in FIG. 1 when estimating a gesture.
FIG. 9 is a diagram showing another configuration example of the estimation device according to one embodiment of the present disclosure.
FIG. 10 is a diagram showing a configuration example of the operation DB shown in FIG. 9.
FIG. 11 is a flowchart showing an example of the operation when registering an operation in the operation DB in the estimation device shown in FIG. 9.
FIG. 12 is a flowchart showing an example of the operation when executing an operation on a device in the estimation device shown in FIG. 9.
Hereinafter, embodiments of the present disclosure will be described with reference to the drawings.
FIG. 1 is a diagram showing a configuration example of the estimation device 10 according to one embodiment of the present disclosure. The estimation device 10 according to the present embodiment estimates breathing gestures performed by a user wearing a mask.
As shown in FIG. 1, the estimation device 10 according to the present embodiment includes a respiration data acquisition unit 11, a wind direction data acquisition unit 12, a feature amount extraction unit 13, a gesture learning unit 14, and a gesture estimation unit 15. The respiration data acquisition unit 11 and the wind direction data acquisition unit 12 constitute a data acquisition unit 16.
The respiration data acquisition unit 11 acquires respiration data indicating the state of inhalation and exhalation of breath through the mask by the user.
FIG. 2 is a cross-sectional view showing a configuration example of a detection mechanism 110 for the respiration data acquisition unit 11 to acquire respiration data. The detection mechanism 110 is attached to a mask worn by the user. Specifically, an opening is provided in a part of the mask, and the detection mechanism 110 is mounted so as to fit into the opening.
As shown in FIG. 2, the detection mechanism 110 includes a first exterior member 111, a second exterior member 112, an air valve 114, a filter 115, and a distance sensor 116.
The first exterior member 111 includes an opening portion 111a provided with an opening for breathing, and an upright portion 111b standing from the opening portion 111a. The second exterior member 112 includes an opening portion 112a provided with an opening for breathing, and an upright portion 112b standing from the opening portion 112a. The first exterior member 111 is arranged on the human body side when the user wears the mask equipped with the detection mechanism 110, with the opening portion 111a facing the human body. The second exterior member 112 is arranged on the side opposite to the human body when the user wears the mask, with the opening portion 112a facing the human body. As shown in FIG. 3, the upright portion 111b of the first exterior member 111 and the upright portion 112b of the second exterior member 112 sandwich and fix the mask fabric near the opening of the mask; by this sandwiching and fixing, the detection mechanism 110 is attached to the mask.
As shown in FIG. 2, the air valve 114 and the filter 115 are provided in the space formed by the first exterior member 111 and the second exterior member 112. The air valve 114 is supported by a support portion 113, which is fixed to the opening portion 111a of the first exterior member 111 and the opening portion 112a of the second exterior member 112, so as to be substantially parallel to these opening portions. The air valve 114 is made of, for example, silicone rubber, and is flexible enough to be deformed by the airflow caused by a person inhaling and exhaling.
The filter 115 is made of, for example, a material having a filtering function comparable to that of the mask fabric. As shown in FIG. 2, the filter 115 is provided along the opening portion 112a of the second exterior member 112.
With the detection mechanism 110 shown in FIG. 2, when a user wearing the mask equipped with the detection mechanism 110 exhales, the air valve 114 curves away from the human body, as shown in FIG. 4. Conversely, when the user inhales, the air valve 114 curves toward the human body, as shown in FIG. 5.
The distance sensor 116 measures the distance to the air valve 114. As described above, the air valve 114 curves and is displaced by the user's breathing. Therefore, by measuring the distance to the air valve 114 with the distance sensor 116, it is possible to estimate whether the user is inhaling or exhaling, the strength of the breathing, and the like.
The respiration data acquisition unit 11 acquires, for example, the distance to the air valve 114 measured by the distance sensor 116 as respiration data. That is, the respiration data acquisition unit 11 acquires respiration data based on the position of the air valve 114, which is displaced according to the user's inhalation and exhalation.
The mechanism by which the respiration data acquisition unit 11 acquires respiration data is not limited to the detection mechanism 110 shown in FIG. 2. For example, the respiration data acquisition unit 11 may acquire respiration data from images of the mask worn by the user. When the user wearing the mask exhales, at least a part of the mask swells; when the user inhales, at least a part of the mask contracts. That is, since the shape of the mask changes with the breathing of the user wearing it (a predetermined measurement point on the mask surface is displaced), the user's breathing state can be estimated from the change in the shape of the mask. Therefore, the respiration data acquisition unit 11 may acquire respiration data based on changes in the shape of the mask in captured images of the mask worn by the user. In this way, the respiration data acquisition unit 11 may acquire respiration data based on the displacement of the mask or of an attachment to the mask (for example, the air valve 114 shown in FIG. 2).
The respiration data acquisition unit 11 may also acquire, for example, breathing sounds collected by a microphone attached to the mask as respiration data.
Referring again to FIG. 1, the respiration data acquisition unit 11 outputs the acquired respiration data to the feature amount extraction unit 13.
The wind direction data acquisition unit 12 acquires wind direction data indicating the direction of the airflow caused by the user's breathing. The wind direction data acquisition unit 12 acquires, for example, the direction of the user's head detected by an inertial sensor attached to the mask worn by the user as wind direction data. Normally, the airflow caused by the user's breathing travels from the user's mouth in the direction the user's face is facing. Therefore, the direction of the airflow caused by the user's breathing can be estimated from the direction of the user's head detected by the inertial sensor. Alternatively, the wind direction data acquisition unit 12 may estimate the position where breath was blown using a microphone array, and acquire wind direction data for the airflow caused by breathing (exhalation) based on that position. The wind direction data acquisition unit 12 outputs the acquired wind direction data to the feature amount extraction unit 13.
As described above, the respiration data acquisition unit 11 and the wind direction data acquisition unit 12 constitute the data acquisition unit 16. The data acquisition unit 16 thus acquires respiration data indicating the state of inhalation and exhalation of breath through the mask by the user and wind direction data indicating the direction of the airflow caused by the breathing, and outputs them to the feature amount extraction unit 13.
The feature amount extraction unit 13 extracts feature amounts related to breathing through the mask by the user wearing the mask. Specifically, the feature amount extraction unit 13 extracts feature amounts related to breathing through the mask from the respiration data acquired by the respiration data acquisition unit 11 and the wind direction data acquired by the wind direction data acquisition unit 12. The feature amount extraction unit 13 extracts, as the feature amounts related to breathing through the mask, for example, the number of inhalations and/or exhalations, the time required per inhalation and/or exhalation, the interval between inhalations and/or exhalations, and changes in the direction of the airflow caused by breathing.
Feature amounts such as the number of inhalations and/or exhalations, the time required per inhalation and/or exhalation, and the interval between inhalations and/or exhalations can be obtained from the respiration data alone, without the wind direction data. Therefore, when a gesture is estimated using only the feature amounts extracted from the respiration data, the wind direction data is unnecessary, and the configuration for acquiring it (an inertial sensor worn on the user's head, a microphone array, etc.) is not always essential.
The feature amount extraction unit 13 outputs the extracted feature amounts to the gesture learning unit 14 when learning the gesture model 17, described later, and to the gesture estimation unit 15 when estimating a gesture performed by the user.
The gesture learning unit 14 performs learning based on the feature amounts extracted by the feature amount extraction unit 13 and generates a gesture model 17 as the learning result. The gesture model 17 is a model that identifies the gesture corresponding to feature amounts (breathing patterns) related to breathing through the mask. The gesture learning unit 14 performs learning by, for example, classification or clustering based on the feature amounts. In the case of supervised learning such as classification, labels indicating the gestures corresponding to the feature amounts related to breathing through the mask are input to the gesture learning unit 14, for example manually.
FIG. 6 is a diagram showing an example of breathing gestures performed by a user wearing a mask. The gesture model 17 stores gestures corresponding to the user's breathing patterns, as shown in FIG. 6. In FIG. 6, arrows hatched rising to the right indicate the action of exhaling, and arrows hatched falling to the right indicate the action of inhaling.
Breathing gestures include, for example, a gesture of inhaling or exhaling once, as shown in FIG. 6. This gesture may be further subdivided according to the length, speed, and intensity of the breath. Breathing gestures also include, for example, a gesture of inhaling or exhaling twice in succession, and gestures combining inhalation and exhalation, such as inhaling twice and then exhaling once.
Breathing gestures also include, for example, a gesture in which the user inhales or exhales while tilting the face; a gesture in which the user inhales or exhales while turning the face from right to left; a gesture in which the user inhales or exhales while rotating the face clockwise (CW: Clockwise) or counterclockwise (CCW: Counterclockwise); and a gesture in which the user inhales or exhales with the face facing straight ahead (not tilted), then moves the face laterally and exhales or inhales again.
The gesture learning unit 14 learns the gestures corresponding to the feature amounts of each of the various breathing patterns described above and generates the gesture model 17.
Referring again to FIG. 1, the gesture estimation unit 15 estimates the breathing gesture performed by the user wearing the mask, based on the feature amounts extracted by the feature amount extraction unit 13. Specifically, the gesture estimation unit 15 estimates the gesture corresponding to the extracted feature amounts based on the gesture model 17.
Although FIG. 1 shows an example in which the estimation device 10 includes the data acquisition unit 16 (the respiration data acquisition unit 11 and the wind direction data acquisition unit 12) and the feature amount extraction unit 13 acquires the respiration data and the wind direction data from the data acquisition unit 16, the present disclosure is not limited to this. For example, the estimation device 10 may acquire the respiration data and the wind direction data by communicating, via a network, with an external device that analyzes the detection results of the distance sensor 116 of the detection mechanism 110 attached to the mask worn by the user and of the inertial sensor worn on the user's head. Therefore, in the present disclosure, the respiration data acquisition unit 11 and the wind direction data acquisition unit 12 are not essential components. Further, when a constructed gesture model 17 already exists, the estimation device 10 does not have to include the gesture learning unit 14.
Next, the operation of the estimation device 10 according to the present embodiment will be described.
FIG. 7 is a flowchart showing an example of the operation of the estimation device 10 according to the present embodiment when learning the gesture model 17.
The data acquisition unit 16 acquires breathing data indicating the state of breathing (inhalation and exhalation) by the user wearing the mask and wind direction data indicating the direction of the wind caused by the breathing of the user wearing the mask (step S11). Specifically, the breathing data acquisition unit 11 constituting the data acquisition unit 16 acquires, as the breathing data, for example, the detection result of the distance sensor 116 included in the detection mechanism 110 described with reference to FIG. 2. The wind direction data acquisition unit 12 constituting the data acquisition unit 16 acquires, as the wind direction data, the detection result of the direction of the user's head by an inertial sensor mounted on the user's head.
The feature amount extraction unit 13 extracts a feature amount related to the user's breathing through the mask from the breathing data and the wind direction data acquired by the data acquisition unit 16 (step S12), and outputs the extracted feature amount to the gesture learning unit 14. As described above, the feature amount extraction unit 13 may instead acquire the breathing data and the wind direction data by communicating with an external device via a network. Therefore, the process of step S11, in which the breathing data acquisition unit 11 acquires the breathing data and the wind direction data acquisition unit 12 acquires the wind direction data, is not essential.
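The patent names breath length, speed, and intensity as distinguishing characteristics but does not fix a feature set. The following sketch assumes the breathing data is a one-dimensional air-valve displacement signal and the wind direction data is a head-yaw angle signal sampled at the same rate; the specific features computed are illustrative assumptions.

```python
# A minimal sketch of feature extraction (step S12). The activity threshold
# and the four features (duration, peak rate, intensity, net head rotation)
# are illustrative assumptions, not a prescribed feature set.
import numpy as np

def extract_features(breath: np.ndarray, yaw: np.ndarray, fs: float) -> np.ndarray:
    active = np.abs(breath) > 0.1 * np.max(np.abs(breath))  # crude activity mask
    duration = active.sum() / fs                   # breath length [s]
    speed = np.max(np.abs(np.diff(breath))) * fs   # peak rate of change
    intensity = np.max(np.abs(breath))             # peak displacement
    rotation = yaw[-1] - yaw[0]                    # net head rotation [deg]
    return np.array([duration, speed, intensity, rotation])
```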
The gesture learning unit 14 generates the gesture model 17 by learning based on the feature amounts extracted by the feature amount extraction unit 13, and stores it (step S13).
Next, the operation of the estimation device 10 according to the present embodiment when estimating a gesture will be described. FIG. 8 is a flowchart showing the operation of the estimation device 10 when estimating a gesture, and serves to explain the estimation method performed by the estimation device 10 according to the present embodiment.
The data acquisition unit 16 acquires breathing data and wind direction data (step S21). The breathing data and the wind direction data may be acquired in the same manner as in step S11 described with reference to FIG. 7, so the description is omitted here.
The feature amount extraction unit 13 extracts a feature amount related to the user's breathing through the mask from the breathing data and the wind direction data acquired by the data acquisition unit 16 (step S22), and outputs the extracted feature amount to the gesture estimation unit 15. As described above, the feature amount extraction unit 13 may instead acquire the breathing data and the wind direction data by communicating with an external device via a network. Therefore, the process of step S21, in which the breathing data acquisition unit 11 acquires the breathing data and the wind direction data acquisition unit 12 acquires the wind direction data, is not essential.
The gesture estimation unit 15 estimates the breathing gesture performed by the user based on the feature amount extracted by the feature amount extraction unit 13 (step S23). Specifically, the gesture estimation unit 15 estimates the gesture corresponding to the extracted feature amount based on the gesture model 17.
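Continuing the hypothetical sketches above, step S23 could be realized by classifying the freshly extracted feature vector against the trained model; all names remain illustrative assumptions.

```python
# A minimal sketch of step S23: extract features from newly acquired data
# and classify them with the trained gesture model (see sketches above).
def estimate_gesture(model, breath, yaw, fs: float = 50.0) -> str:
    feature = extract_features(breath, yaw, fs).reshape(1, -1)
    return model.predict(feature)[0]  # e.g. 'inhale_twice'
```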
As described above, the estimation method performed by the estimation device 10 according to the present embodiment includes a step of extracting a feature amount related to breathing through the mask by the user wearing the mask (step S22), and a step of estimating the breathing gesture performed by the user based on the extracted feature amount (step S23).
The estimation device 10 according to the present embodiment may further include an operation function for operating a device such as an earphone or a smartphone based on the estimated gesture. FIG. 9 shows a configuration example of the estimation device 10 having such an operation function.
Compared with the estimation device 10 shown in FIG. 1, the estimation device 10 shown in FIG. 9 further includes an operation DB 21 and an operation execution unit 22.
The operation DB 21 is a database that stores breathing gestures performed by the user in association with device operations. FIG. 10 is a diagram showing a configuration example of the operation DB 21. Note that FIG. 10 shows an example in which a music device having a music playback function (for example, an earphone) is operated by gestures.
As shown in FIG. 10, the operation DB 21 stores each breathing gesture in association with the music device operation corresponding to that gesture. In the example shown in FIG. 10, the gesture of inhaling twice is associated with an operation of pausing music playback. The gesture of breathing while rotating the face clockwise (CW) is associated with an operation of playing the next song. The gesture of breathing while rotating the face counterclockwise (CCW) is associated with an operation of playing the previous song. The gesture of exhaling upward (from bottom to top) is associated with an operation of turning the volume up.
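The mapping of FIG. 10 can be pictured as a simple lookup table; the sketch below models it as a dictionary from gesture IDs to callables. The gesture IDs and the player interface are illustrative assumptions, not part of the patent.

```python
# A minimal sketch of the gesture-to-operation mapping of FIG. 10, modeled
# as a plain dictionary. Gesture IDs and the player API are hypothetical.
from typing import Callable, Dict

def build_operation_db(player) -> Dict[str, Callable[[], None]]:
    return {
        "inhale_twice":  player.pause,       # pause music playback
        "breathe_cw":    player.next_track,  # play next song
        "breathe_ccw":   player.prev_track,  # play previous song
        "exhale_upward": player.volume_up,   # turn the volume up
    }
```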
Referring again to FIG. 9, the operation execution unit 22 executes the operation stored in the operation DB 21 in association with the gesture estimated by the gesture estimation unit 15.
Next, the operation of the estimation device 10 shown in FIG. 9 will be described. First, the operation when registering a device operation in the operation DB 21 will be described.
FIG. 11 is a flowchart showing an example of the operation of the estimation device 10 when registering an operation in the operation DB 21.
When a breathing gesture performed by the user and the device operation corresponding to that gesture are input, the operation DB 21 stores the input gesture and device operation in association with each other (step S31).
Next, the operation of the estimation device 10 shown in FIG. 9 when executing an operation on a device will be described with reference to the flowchart shown in FIG. 12. In FIG. 12, the same processing as in FIG. 8 is designated by the same reference numerals, and its description is omitted.
When the gesture estimation unit 15 estimates the breathing gesture performed by the user, the operation execution unit 22 executes the device operation stored in the operation DB 21 in correspondence with the estimated gesture (step S41). In the example shown in FIG. 10, when the gesture estimation unit 15 estimates that the gesture of inhaling twice was performed during music playback, the operation execution unit 22 pauses the music playback.
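Chaining the hypothetical pieces above gives an end-to-end picture of step S41: estimate the gesture, look up the registered operation, and execute it.

```python
# A minimal end-to-end sketch of step S41, reusing the hypothetical helpers
# defined in the earlier sketches.
def on_breathing_event(model, op_db, breath, yaw):
    gesture = estimate_gesture(model, breath, yaw)
    operation = op_db.get(gesture)
    if operation is not None:  # ignore gestures with no registered operation
        operation()
```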
Although FIG. 10 has been described using the operation of a music device as an example, the present disclosure is not limited to this. The operation execution unit 22 may operate a smartphone, a game machine, or the like based on the estimated gesture. In this case, the operation execution unit 22 may perform, for example, a tap operation, a double-tap operation, or a control operation of the firing direction in a shooting game, according to the gesture performed by the user. The operation execution unit 22 may also launch different applications based on the estimated gesture; for example, the estimation device 10 may launch different applications (a recipe search application, a shopping memo application, and so on) according to the gesture performed by the user. Further, the operation execution unit 22 may, according to the estimated gesture, insert punctuation in speech recognition, insert a line break, or specify whether input is to be recognized as a word or as a control character.
As described above, in the present embodiment, the estimation device 10 includes the feature amount extraction unit 13 that extracts a feature amount related to breathing through the mask by the user wearing the mask, and the gesture estimation unit 15 that estimates the breathing gesture performed by the user based on the extracted feature amount.
By estimating the breathing gesture performed by the user based on the feature amount related to breathing through the mask, the gesture performed by a mask-wearing user can be estimated even in situations where the user cannot move their fingers freely. Further, as described above, acquiring the feature amount related to breathing through the mask does not necessarily require the user to wear a sensor. Therefore, even while the user is wearing a mask, the gestures performed by the user can be estimated in a more versatile manner and without increasing the burden on the user.
A computer can suitably be used to function as each unit of the estimation device 10 described above. Such a computer can be realized by storing, in a storage unit of the computer, a program describing the processing that implements the functions of each unit of the estimation device 10, and having the CPU (Central Processing Unit) of the computer read and execute this program. That is, the program can cause the computer to function as the estimation device 10 described above.
This program may also be recorded on a computer-readable medium, by means of which it can be installed on a computer. The computer-readable medium on which the program is recorded may be a non-transitory recording medium, which is not particularly limited but may be, for example, a CD-ROM or a DVD-ROM. The program can also be provided via a network.
The present disclosure is not limited to the configurations specified in the embodiments described above, and various modifications are possible without departing from the gist of the invention described in the claims. For example, the functions included in the components can be rearranged so as to avoid logical inconsistency, and a plurality of components can be combined into one or divided.
10 Estimation device
11 Breathing data acquisition unit
12 Wind direction data acquisition unit
13 Feature amount extraction unit
14 Gesture learning unit
15 Gesture estimation unit
16 Data acquisition unit
17 Gesture model
21 Operation DB
22 Operation execution unit
111 First exterior member
112 Second exterior member
113 Support part
114 Air valve
115 Filter
116 Distance sensor
111a, 112a Opening
111b, 112b Standing part

Claims (8)

1. An estimation device comprising:
   a feature amount extraction unit that extracts a feature amount related to breathing through a mask by a user wearing the mask; and
   a gesture estimation unit that estimates the breathing gesture performed by the user based on the feature amount extracted by the feature amount extraction unit.
2. The estimation device according to claim 1, further comprising:
   a gesture learning unit that learns a gesture model for identifying a gesture corresponding to a feature amount related to breathing through the mask,
   wherein the gesture estimation unit estimates, based on the gesture model, the gesture corresponding to the feature amount extracted by the feature amount extraction unit.
3. The estimation device according to claim 1 or 2, further comprising:
   a data acquisition unit that acquires breathing data indicating the state of inhalation and exhalation of breath through the mask by the user and wind direction data indicating the direction of the wind caused by the breathing,
   wherein the feature amount extraction unit extracts the feature amount from the breathing data and the wind direction data acquired by the data acquisition unit.
4. The estimation device according to claim 3, wherein the data acquisition unit acquires the breathing data based on the displacement of an air valve that is attached to the mask and is displaced according to the inhalation and exhalation of breath by the user.
5. The estimation device according to claim 3, wherein the data acquisition unit acquires the breathing data based on a change in the shape of the mask in a captured image of the mask worn by the user.
6. The estimation device according to claim 1 or 2, wherein the feature amount extraction unit acquires, by communicating with an external device via a network, breathing data indicating the state of inhalation and exhalation of breath through the mask by the user and wind direction data indicating the direction of the wind caused by the breathing, and extracts the feature amount from the acquired breathing data and wind direction data.
7. An estimation method comprising:
   a step of extracting a feature amount related to breathing through a mask by a user wearing the mask; and
   a step of estimating the breathing gesture performed by the user based on the extracted feature amount.
8. A program for causing a computer to function as the estimation device according to any one of claims 1 to 6.
PCT/JP2020/037659 2020-10-02 2020-10-02 Estimation device, estimation method, and program WO2022070429A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/037659 WO2022070429A1 (en) 2020-10-02 2020-10-02 Estimation device, estimation method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/037659 WO2022070429A1 (en) 2020-10-02 2020-10-02 Estimation device, estimation method, and program

Publications (1)

Publication Number Publication Date
WO2022070429A1 true WO2022070429A1 (en) 2022-04-07

Family

ID=80950430

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/037659 WO2022070429A1 (en) 2020-10-02 2020-10-02 Estimation device, estimation method, and program

Country Status (1)

Country Link
WO (1) WO2022070429A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002150903A (en) * 2000-11-10 2002-05-24 Yazaki Corp Exhalation switch device and reference value setting method of the exhalation switch device
JP2002366275A (en) * 2001-06-08 2002-12-20 Victor Co Of Japan Ltd Electronic voting terminal
JP2011523030A (en) * 2008-03-26 2011-08-04 ピエール・ボナ Method and system for a MEMS detector allowing control of devices using human exhalation
WO2017082165A1 (en) * 2015-11-10 2017-05-18 日本電信電話株式会社 Respiration estimating method and device
JP2017182500A (en) * 2016-03-30 2017-10-05 富士通株式会社 Input device, input program, and input method
JP2019083019A (en) * 2018-11-28 2019-05-30 株式会社デンソー Driver state determination device


Similar Documents

Publication Publication Date Title
KR101056406B1 (en) Game device, game processing method and information recording medium
JP5759375B2 (en) Motion detection system
CN110251133B (en) Method and apparatus for respiratory monitoring
CN111356968A (en) Rendering virtual hand gestures based on detected hand input
JP2020067939A (en) Infection risk identification system, information terminal, and infection risk identification method
Heo et al. A realistic game system using multi-modal user interfaces
CN108882870A (en) Biont information analytical equipment, system and program
KR20170073927A (en) Method and device for authenticating user
CN103974736A (en) Automatic patient synchrony adjustment for non invasive ventilation
CN108769391B (en) Sudden death prevention artificial intelligence life monitoring cardio-pulmonary resuscitation system
CN107405106A (en) Respiration rate detection means, respiration rate detection method and program recorded medium
JP6244026B2 (en) Input device, biosensor, program, computer-readable medium, and mode setting method
JP7401634B2 (en) Server device, program and method
US20210068674A1 (en) Track user movements and biological responses in generating inputs for computer systems
US11294464B2 (en) Adapting media content to a sensed state of a user
Wang et al. Nod to auth: Fluent ar/vr authentication with user head-neck modeling
WO2022070429A1 (en) Estimation device, estimation method, and program
Onishi et al. GazeBreath: Input method using gaze pointing and breath selection
CN106805974A (en) Respiration detection device and method of operating the same
JP6552158B2 (en) Analysis device, analysis method, and program
Chen et al. VisaudiBlow: Fine-grained Blowing Interaction Based on Visual and Auditory Detection
Onishi et al. DualBreath: Input Method Using Nasal and Mouth Breathing
CN116110535B (en) Breathing biofeedback method based on virtual reality, feedback equipment and storage medium
EP4292671A1 (en) Methods, apparatuses, and systems for evaluating respiratory protective devices
KR20200025229A (en) Electronic apparatus and thereof control method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20956357

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20956357

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP