WO2023171356A1 - Patient monitoring system, patient monitoring method, and program - Google Patents

Patient monitoring system, patient monitoring method, and program

Info

Publication number
WO2023171356A1
WO2023171356A1 (PCT/JP2023/006124)
Authority
WO
WIPO (PCT)
Prior art keywords
patient
feature amount
posture
facial expression
monitoring system
Prior art date
Application number
PCT/JP2023/006124
Other languages
French (fr)
Japanese (ja)
Inventor
宇紀 深澤
弘泰 馬場
穂 森田
奈々 河村
Original Assignee
ソニーグループ株式会社 (Sony Group Corporation)
Priority date
Filing date
Publication date
Application filed by ソニーグループ株式会社 (Sony Group Corporation)
Publication of WO2023171356A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/08 Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B 5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/107 Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B 5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B 5/113 Measuring movement of the entire body or parts thereof occurring during breathing
    • A61B 5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state

Definitions

  • the present technology relates to a patient monitoring system, a patient monitoring method, and a program, and particularly relates to a patient monitoring system, a patient monitoring method, and a program that are capable of stably extracting features of a patient's appearance.
  • Patent Document 1 discloses a technique for estimating, from the feature amounts of a facial image, the emotions of a patient such as an infant who has difficulty expressing emotions.
  • the present technology was developed in view of this situation, and is intended to make it possible to stably extract features of a patient's appearance.
  • A patient monitoring system according to one aspect of the present technology includes an acquisition unit that acquires, based on an image showing a patient's appearance, an expression feature amount that is a feature amount related to the patient's face and a posture feature amount that is a feature amount related to the patient's posture; an estimation unit that estimates the patient's condition based on at least one of the facial expression feature amount and the posture feature amount; and an output unit that outputs an estimation result of the patient's condition.
  • In one aspect of the present technology, an expression feature amount, which is a feature amount related to the patient's face, and a posture feature amount, which is a feature amount related to the patient's posture, are acquired based on an image showing the patient's appearance; the patient's condition is estimated based on at least one of the facial expression feature amount and the posture feature amount; and an estimation result of the patient's condition is output.
  • FIG. 1 is a diagram illustrating a configuration example of a patient monitoring system according to an embodiment of the present technology.
  • FIG. 2 is a diagram illustrating an example of estimating appearance features.
  • FIG. 3 is an enlarged diagram showing a display on a monitor.
  • FIG. 4 is a diagram showing an example of calculation of a contribution rate.
  • FIG. 5 is a block diagram showing an example of the functional configuration of an information processing device.
  • FIG. 6 is a block diagram showing a configuration example of an information processing section.
  • FIG. 7 is a diagram illustrating an example of a method for acquiring feature amounts and recognition reliability.
  • FIG. 8 is a flowchart illustrating processing of the information processing device.
  • FIG. 9 is a diagram showing an example of calculation of a contribution rate.
  • FIG. 10 is a block diagram showing another configuration example of the information processing section.
  • FIG. 11 is a block diagram showing another configuration example of the information processing section.
  • FIG. 12 is a diagram illustrating an example of a method for acquiring patient images.
  • FIG. 13 is a diagram illustrating another example of a method for acquiring patient images.
  • FIG. 14 is a diagram showing an example of extraction of facial expression features.
  • FIG. 15 is an enlarged diagram showing the monitor display when an appearance feature cannot be displayed.
  • FIG. 16 is a block diagram showing an example of the hardware configuration of a computer.
  • FIG. 1 is a diagram illustrating a configuration example of a patient monitoring system according to an embodiment of the present technology.
  • the patient monitoring system in FIG. 1 is configured by connecting a camera #1 that photographs a patient in an ICU or the like to an information processing device 1 via wired or wireless communication.
  • a monitor 2 is connected to the information processing device 1 .
  • the information processing device 1 and monitor 2 are installed, for example, in an ICU.
  • the information processing device 1 and the monitor 2 may be installed not in the ICU but in another room of the medical facility where the ICU is located.
  • the patient monitoring system shown in FIG. 1 is a system used, for example, by users such as doctors and medical staff to monitor the condition of patients in the ICU.
  • In the example of FIG. 1, one camera #1 that photographs one patient is connected to the information processing device 1, but multiple cameras that each photograph a different patient may also be connected to the information processing device 1.
  • Common image recognition techniques include estimating facial expressions from detected facial landmarks to extract a degree of distress from the expression, and classifying behaviors from detected whole-body skeletal landmarks to quantify body movement from joint-point information.
  • a patient image taken by camera #1 is transmitted to the information processing device 1 as indicated by arrow A1.
  • a facial image that shows the patient's face and a whole body image that shows the skeleton of the patient's whole body are transmitted to the information processing device 1 as patient images.
  • the information processing device 1 extracts facial landmarks as facial expression features based on the facial image transmitted from camera #1. Furthermore, the information processing device 1 extracts skeletal landmarks as posture features based on the whole-body video. Note that the facial expression feature amount is information indicating the position of feature points of each part of the patient's face. The posture feature amount is information indicating the position of feature points on the patient's body.
  • the information processing device 1 estimates (extracts) an appearance feature indicating the patient's condition based on at least one of the facial expression feature and posture feature extracted from the patient image.
  • Six types of features are estimated as appearance features: degree of distress, degree of sedation, mandibular breathing, shoulder breathing, seesaw breathing (breathing in which the chest collapses and the abdomen expands during inspiration), and amount of body movement.
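As an illustration only (this code is not part of the publication; the names and data layout are assumptions), the two kinds of feature amounts and the six appearance features described above could be represented roughly as follows in Python:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ExpressionFeature:
    """Facial landmarks: positions of feature points of each facial part."""
    landmarks: List[Tuple[float, float]]   # (x, y) per facial feature point
    reliability: float                     # recognition reliability, 0.0 to 1.0

@dataclass
class PostureFeature:
    """Skeletal landmarks: positions of feature points on the body."""
    keypoints: List[Tuple[float, float]]   # (x, y) per joint point
    reliability: float                     # recognition reliability, 0.0 to 1.0

# The six appearance features estimated by the system.
APPEARANCE_FEATURES = [
    "degree_of_distress",
    "degree_of_sedation",
    "mandibular_breathing",
    "shoulder_breathing",
    "seesaw_breathing",
    "body_movement_amount",
]
```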
  • FIG. 2 is a diagram illustrating an example of estimating appearance feature amounts.
  • Of the six types of features described above, the degree of distress, degree of sedation, and mandibular breathing are features that appear on the face, and are therefore estimated mainly based on the facial expression features extracted from the facial image P1, as shown in A of FIG. 2.
  • On the other hand, shoulder breathing, seesaw breathing, and the amount of body movement are features that appear on the body, and are therefore estimated mainly based on the posture features extracted from the whole-body image P2, as shown in B of FIG. 2.
  • In this way, the degree of distress, degree of sedation, and mandibular breathing are appearance features estimated mainly from the facial expression features. However, when the recognition reliability of the facial expression features is low, for example because the patient's face is turned sideways or covered by a mask, the information processing device 1 estimates these appearance features based also on the posture features. When the patient's face is not facing a predetermined direction such as the front, the appearance features are estimated mainly from the posture features rather than the facial expression features.
  • Similarly, shoulder breathing, seesaw breathing, and the amount of body movement are appearance features estimated mainly from the posture features. However, when the recognition reliability of the posture features is low, for example because the patient's entire body is covered with a blanket, the information processing device 1 estimates these appearance features based also on the facial expression features.
  • The information processing device 1 calculates a contribution rate representing the contribution of each of the facial expression features and posture features to the estimation results of the degree of distress, degree of sedation, and mandibular breathing, which are the appearance features related to the face.
  • the contribution rate of the facial features is usually higher than the contribution rate of the posture features to the estimation results of the degree of distress, the degree of sedation, and mandibular respiration.
  • Similarly, a contribution rate representing the contribution of each of the facial expression features and posture features is calculated for the estimation results of shoulder breathing, seesaw breathing, and the amount of body movement, which are the appearance features related to the body.
  • The contribution rate of the posture features to the estimation results of shoulder breathing, seesaw breathing, and body movement is usually higher than that of the facial expression features.
  • Information on the calculated contribution rate is output to the monitor 2 as shown by arrow A2 in FIG. 1, and is displayed on the monitoring screen, which is a screen used for patient monitoring, together with information indicating the appearance feature amount.
  • FIG. 3 is an enlarged view of the display on the monitor 2 in FIG. 1.
  • a waveform W1 representing a change in the degree of distress is displayed at the top of the screen, and below it, the contribution rates of facial features and posture features to the degree of distress at each time are displayed.
  • the horizontal axis of the graph shown in the upper part of FIG. 3 represents time, and the vertical axis represents the value of the degree of distress.
  • Each appearance feature including the degree of distress is expressed numerically.
  • the horizontal axis of the graph shown in the lower part of FIG. 3 represents time, and the vertical axis represents contribution rate.
  • the contribution rate is expressed as a value of 0-100%, for example.
  • the monitor 2 displays the appearance feature of the patient at each time, as well as the respective contribution rates of the facial expression feature and posture feature to the appearance feature at each time. For example, a doctor viewing the display on the monitor 2 can confirm the basis of each appearance feature based on the contribution rate.
  • FIG. 4 is a diagram showing an example of calculating the contribution rate in FIG. 3.
  • the first row in FIG. 4 represents changes in facial features at each time after time t0.
  • the section indicated by the bidirectional arrow A11 and the arrow A12 is a section in which it is difficult to extract reliable facial features because the patient's face is turned sideways or the patient is wearing a mask.
  • the recognition reliability of the facial expression feature amount in the section indicated by arrow A11 and arrow A12 is a value less than or equal to the threshold value.
  • the second row in FIG. 4 represents changes in the posture feature amount at each time after time t0.
  • the section indicated by the bidirectional arrow A13 is a section in which it is difficult to extract reliable posture features because the entire body is covered with a blanket or the like.
  • the recognition reliability of the posture feature amount in the section indicated by arrow A13 is a value that is less than or equal to the threshold value.
  • The degree of distress shown in the third row, ahead of the white arrow, is calculated based on such facial expression and posture features.
  • The contribution rate of each of the facial expression and posture features to the degree of distress at each time is calculated as shown in the fourth row.
  • The degree of distress is an appearance feature estimated mainly from the facial expression features.
  • While reliable facial expression features are available, the degree of distress is estimated with the contribution rate of the facial expression features set to 100%.
  • In the section from time t1 to time t2, which corresponds to the section indicated by arrow A11, the degree of distress is estimated with the contribution rate of the posture features set to 100%, as indicated by the diagonal lines.
  • After time t2, the degree of distress is again estimated with the contribution rate of the facial expression features set to 100%, and in the section from time t3 to time t4, which corresponds to the section indicated by arrow A12, it is estimated with the contribution rate of the posture features set to 100%, as in the section from time t1 to time t2.
  • the contribution rates of each of the facial expression feature amount and posture feature amount are calculated in the same manner.
  • In other words, the contribution rates shown in FIG. 4 are the contribution rates obtained when the degree of distress at each time is estimated without weighting the facial expression features and posture features, that is, based only on the facial expression features (their contribution rate set to 100%) or only on the posture features (their contribution rate set to 100%).
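A minimal sketch of this unweighted scheme, reproducing the kind of per-time contribution pattern in FIG. 4 for a face-dominant appearance feature such as the degree of distress (the threshold value and function name are assumptions, not taken from the publication):

```python
from typing import List, Optional, Sequence, Tuple

def distress_contribution_timeline(
        expr_reliabilities: Sequence[float],
        post_reliabilities: Sequence[float],
        threshold: float = 0.5) -> List[Tuple[Optional[float], Optional[float]]]:
    """Per-time (expression %, posture %) without weighting: 100% expression
    while its reliability is above the threshold, 100% posture in sections
    (such as those indicated by arrows A11 and A12) where it is not."""
    timeline = []
    for expr_rel, post_rel in zip(expr_reliabilities, post_reliabilities):
        if expr_rel > threshold:
            timeline.append((100.0, 0.0))    # expression features only
        elif post_rel > threshold:
            timeline.append((0.0, 100.0))    # posture features only
        else:
            timeline.append((None, None))    # neither is reliable: no estimate
    return timeline
```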
  • the appearance feature amount is estimated based on at least one of the facial expression feature amount and the posture feature amount.
  • the appearance feature amount can be stably extracted.
  • The monitor 2 that constitutes the patient monitoring system displays information indicating the appearance features together with information indicating the contribution rates. By visualizing the contribution rates, the doctor viewing the display on the monitor 2 can confirm the basis of each appearance feature and make a final judgment regarding the patient's condition.
  • FIG. 5 is a block diagram showing an example of the functional configuration of the information processing device 1. At least some of the functional units shown in FIG. 5 are realized by the CPU of the computer constituting the information processing device 1 executing a predetermined program.
  • the information processing device 1 includes a video acquisition section 11, an information processing section 12, and a display control section 13.
  • the image acquisition unit 11 acquires the patient image captured by camera #1. Acquisition of patient images may be started in response to designation of a patient to be monitored by a user, such as a doctor or medical staff. The patient image acquired by the image acquisition section 11 is output to the information processing section 12.
  • the information processing unit 12 estimates the appearance feature amount of the patient based on the patient image supplied from the image acquisition unit 11.
  • the information processing unit 12 also calculates a contribution rate used for estimating the appearance feature amount.
  • Information indicating the appearance feature amount and information indicating the contribution rate are output to the display control unit 13.
  • the display control unit 13 displays the appearance feature amount together with the contribution rate on the monitor 2 based on the information supplied from the appearance feature calculation unit 36.
  • the display control unit 13 functions as an output unit that outputs the estimation result of the patient's condition.
  • FIG. 6 is a block diagram showing a configuration example of the information processing section 12 in FIG. 5.
  • the information processing unit 12 includes a facial expression recognition unit 31, a posture recognition unit 32, a dynamic contribution rate calculation unit 33, a static contribution rate calculation unit 34, a contribution rate determination unit 35, and an appearance feature amount calculation unit 36.
  • a face image P1 supplied from the image acquisition section 11 as a patient image is input to the facial expression recognition section 31, and a whole body image P2 is input to the posture recognition section 32.
  • the facial expression recognition unit 31 extracts facial expression features from the facial image P1 supplied from the image acquisition unit 11. In addition, the facial expression recognition unit 31 calculates recognition reliability indicating the degree of confidence in recognition of the extracted facial features.
  • In the facial expression recognition unit 31, a facial expression feature extraction model M1 configured by a neural network or the like, as shown in A of FIG. 7, is prepared in advance.
  • the facial expression feature extraction model M1 generated by machine learning is an inference model that receives the facial image P1 as an input and outputs the facial expression feature and the recognition reliability of the facial expression feature.
  • the facial expression recognition unit 31 inputs each frame constituting the facial image P1 into the facial expression feature extraction model M1, thereby extracting the facial expression feature and calculating the recognition reliability. For example, facial expression feature amounts and recognition reliability are acquired for each frame forming the facial image P1.
  • the facial expression feature amount and recognition reliability for each frame may be obtained by analyzing each frame that constitutes the facial image P1.
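As a sketch of how such a model could be applied frame by frame (the `predict` interface is an assumption; the publication only states that M1 takes the facial image as input and outputs the feature amount and its recognition reliability):

```python
from typing import Iterable, List, Tuple
import numpy as np

def extract_expression_features(frames: Iterable[np.ndarray],
                                model) -> List[Tuple[np.ndarray, float]]:
    """Run a feature extraction model such as M1 on each frame of the facial
    image P1 and collect (feature vector, recognition reliability) per frame."""
    results = []
    for frame in frames:
        features, reliability = model.predict(frame)  # hypothetical interface
        results.append((features, reliability))
    return results
```

The posture feature extraction model M2 would be applied to each frame of the whole-body image P2 in the same way.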
  • Information on facial expression features extracted by the facial expression recognition unit 31 is output to the appearance feature calculation unit 36, and information on recognition reliability is output to the dynamic contribution rate calculation unit 33.
  • the posture recognition unit 32 extracts posture features from the whole body image P2 supplied from the image acquisition unit 11. Further, the posture recognition unit 32 calculates recognition reliability indicating the degree of reliability in recognition of the extracted posture feature amount.
  • the posture recognition unit 32 is prepared in advance with a posture feature extraction model M2 configured by a neural network or the like, as shown in FIG. 7B.
  • the posture feature extraction model M2 generated by machine learning is an inference model that receives the whole body image P2 as an input and outputs posture features and recognition reliability of the posture features.
  • the posture recognition unit 32 extracts posture features and calculates recognition reliability by inputting each frame forming the whole-body video P2 to the posture feature extraction model M2. For example, posture feature amounts and recognition reliability are acquired for each frame configuring the whole-body video P2.
  • the posture feature amount and recognition reliability may be obtained for each frame.
  • Information on the posture feature extracted by the posture recognition unit 32 is output to the appearance feature calculation unit 36, and information on recognition reliability is output to the dynamic contribution rate calculation unit 33.
  • the facial expression recognition unit 31 and the posture recognition unit 32 function as an acquisition unit that acquires facial expression feature quantities that are feature quantities related to the patient's face, and posture feature quantities that are feature quantities related to the patient's posture.
  • the dynamic contribution rate calculation unit 33 calculates the dynamic contribution rate based on the recognition reliability calculated by the facial expression recognition unit 31 and the recognition reliability calculated by the posture recognition unit 32.
  • the dynamic contribution rate is a contribution rate that changes depending on the recognition reliability value of each of the facial expression feature amount and the posture feature amount.
  • the dynamic contribution rate is calculated by setting the contribution rate of a feature quantity with high recognition reliability to 100%, and setting the contribution rate of a feature quantity with low recognition reliability to 0%.
  • a contribution rate of any value other than 0% and 100% may be calculated depending on the level of recognition reliability.
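A simple sketch of such a dynamic contribution rate (the threshold and the tie-breaking rule are assumptions):

```python
from typing import Tuple

def dynamic_contribution(expr_reliability: float,
                         post_reliability: float,
                         threshold: float = 0.5) -> Tuple[float, float]:
    """Dynamic contribution rate (expression %, posture %): the feature amount
    whose recognition reliability is high gets 100% and the other gets 0%.
    Intermediate values depending on the reliabilities are also possible."""
    expr_ok = expr_reliability > threshold
    post_ok = post_reliability > threshold
    if expr_ok and not post_ok:
        return (100.0, 0.0)
    if post_ok and not expr_ok:
        return (0.0, 100.0)
    # Both (or neither) above the threshold: fall back to the higher one.
    return (100.0, 0.0) if expr_reliability >= post_reliability else (0.0, 100.0)
```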
  • Information on the dynamic contribution rate calculated by the dynamic contribution rate calculating section 33 is output to the contribution rate determining section 35.
  • The static contribution rate calculation unit 34 calculates a static contribution rate, which is a contribution rate set for each appearance feature. For example, the respective contribution rates of the facial expression features and posture features are set in advance for each appearance feature such as the degree of distress. The static contribution rate may also be calculated according to the appearance feature (patient's condition) set by the user as the monitoring target; for example, when a measurement mode for a distressed state is set, the contribution rate of the facial expression features is increased.
  • For example, the static contribution rate calculation unit 34 calculates the static contribution rate so that, for the degree of distress as the appearance feature, the contribution rate of the facial expression features is higher than the contribution rate of the posture features. In addition, the static contribution rate calculation unit 34 calculates the static contribution rate so that, for shoulder breathing as the appearance feature, the contribution rate of the posture features is higher than the contribution rate of the facial expression features. Information on the static contribution rate calculated by the static contribution rate calculation unit 34 is output to the contribution rate determination unit 35.
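As an illustration of such presets (the numeric splits and the adjustment step are assumptions; the publication only states which feature amount should dominate for each appearance feature and that the monitoring target set by the user may raise a contribution):

```python
from typing import Dict, Optional, Tuple

# Preset static contribution rates (expression %, posture %) per appearance feature.
STATIC_CONTRIBUTION: Dict[str, Tuple[float, float]] = {
    "degree_of_distress":   (80.0, 20.0),  # face-related: expression dominates
    "degree_of_sedation":   (80.0, 20.0),
    "mandibular_breathing": (80.0, 20.0),
    "shoulder_breathing":   (20.0, 80.0),  # body-related: posture dominates
    "seesaw_breathing":     (20.0, 80.0),
    "body_movement_amount": (20.0, 80.0),
}

def static_contribution(appearance_feature: str,
                        monitoring_target: Optional[str] = None) -> Tuple[float, float]:
    """Return the preset split, raising the expression contribution when the
    user has set a distressed state as the monitoring target."""
    expr, post = STATIC_CONTRIBUTION[appearance_feature]
    if monitoring_target == appearance_feature == "degree_of_distress":
        expr = min(expr + 10.0, 100.0)
        post = 100.0 - expr
    return expr, post
```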
  • Based on the dynamic contribution rate calculated by the dynamic contribution rate calculation unit 33 and the static contribution rate calculated by the static contribution rate calculation unit 34, the contribution rate determination unit 35 determines the contribution rate used for estimating the appearance features.
  • the final contribution rate is determined for each appearance feature. For example, the final contribution rate is determined by performing a predetermined calculation based on the dynamic contribution rate and the static contribution rate. Information on the contribution rate determined by the contribution rate determination unit 35 is output to the appearance feature amount calculation unit 36.
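The publication only says that a predetermined calculation combines the two rates; a normalized element-wise product, shown below purely as an assumption, is one plausible choice:

```python
from typing import Tuple

def final_contribution(dynamic: Tuple[float, float],
                       static: Tuple[float, float]) -> Tuple[float, float]:
    """Combine the dynamic and static contribution rates into the final
    (expression %, posture %) used for estimating an appearance feature."""
    expr = dynamic[0] * static[0]
    post = dynamic[1] * static[1]
    total = expr + post
    if total == 0.0:
        return (50.0, 50.0)  # assumed fallback for the degenerate case
    return (100.0 * expr / total, 100.0 * post / total)
```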
  • The appearance feature calculation unit 36 performs a predetermined calculation based on the facial expression features extracted by the facial expression recognition unit 31, the posture features extracted by the posture recognition unit 32, and the contribution rate determined by the contribution rate determination unit 35, and thereby estimates the appearance features.
  • the appearance feature amount estimated by the appearance feature amount calculation section 36 is output to the display control section 13 in FIG. 5 together with information on the contribution rate.
  • Appearance feature values may be estimated using an inference model generated by machine learning.
  • the appearance feature calculation unit 36 is provided with an inference model that receives, for example, facial expression features, posture features, and contribution rates as inputs, and outputs the respective appearance features.
  • the appearance feature calculation unit 36 functions as an estimation unit that estimates the patient's condition based on at least one of the facial expression feature and the posture feature.
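A minimal arithmetic version of such an estimation, assuming that scalar scores have already been derived from the two feature amounts (the publication leaves the calculation unspecified and also allows an inference model here):

```python
from typing import Tuple

def estimate_appearance_feature(expr_score: float, post_score: float,
                                contribution: Tuple[float, float]) -> float:
    """Estimate one appearance feature (e.g. the degree of distress) as a
    combination of the expression-derived and posture-derived scores,
    weighted by the final contribution rates."""
    expr_pct, post_pct = contribution
    return (expr_pct * expr_score + post_pct * post_score) / 100.0
```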
  • In step S1, the image acquisition unit 11 (FIG. 5) receives and acquires the patient image transmitted from camera #1.
  • In step S2, the facial expression recognition unit 31 of the information processing unit 12 extracts facial expression features from the facial image P1 supplied from the image acquisition unit 11.
  • In step S3, the posture recognition unit 32 extracts posture features from the whole-body image P2 supplied from the image acquisition unit 11.
  • In step S4, the dynamic contribution rate calculation unit 33 calculates the dynamic contribution rate based on the recognition reliabilities calculated by the facial expression recognition unit 31 and the posture recognition unit 32.
  • In step S5, the static contribution rate calculation unit 34 calculates the static contribution rate of each appearance feature.
  • In step S6, the contribution rate determination unit 35 determines the contribution rate used for estimating the appearance features, based on the dynamic contribution rate calculated by the dynamic contribution rate calculation unit 33 and the static contribution rate calculated by the static contribution rate calculation unit 34.
  • In step S7, the appearance feature calculation unit 36 estimates the appearance features based on the facial expression features extracted by the facial expression recognition unit 31, the posture features extracted by the posture recognition unit 32, and the contribution rate determined by the contribution rate determination unit 35.
  • In step S8, the display control unit 13 causes the monitor 2 to display the appearance features estimated by the appearance feature calculation unit 36 together with the contribution rates.
  • The monitor 2 displays the six types of appearance features, namely the degree of distress, degree of sedation, mandibular breathing, shoulder breathing, seesaw breathing, and amount of body movement, together with their respective contribution rates.
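Chaining the hypothetical helpers sketched above, one pass of steps S1 to S8 for a single appearance feature might look as follows (the model and monitor interfaces and the score functions are placeholders, not part of the publication):

```python
def score_from_expression(expression_features) -> float:
    """Placeholder: derive a scalar score from facial landmarks."""
    return 0.0

def score_from_posture(posture_features) -> float:
    """Placeholder: derive a scalar score from skeletal landmarks."""
    return 0.0

def monitoring_step(face_frame, body_frame, expr_model, post_model,
                    appearance_feature, monitor) -> None:
    # S2/S3: extract feature amounts and recognition reliabilities.
    expr_feat, expr_rel = expr_model.predict(face_frame)   # hypothetical interface
    post_feat, post_rel = post_model.predict(body_frame)
    # S4: dynamic contribution rate from the reliabilities.
    dyn = dynamic_contribution(expr_rel, post_rel)
    # S5: static contribution rate preset for this appearance feature.
    sta = static_contribution(appearance_feature)
    # S6: final contribution rate from the predetermined combination.
    contrib = final_contribution(dyn, sta)
    # S7: estimate the appearance feature from the weighted scores.
    value = estimate_appearance_feature(score_from_expression(expr_feat),
                                        score_from_posture(post_feat),
                                        contrib)
    # S8: display the estimate together with its contribution rates.
    monitor.show(appearance_feature, value, contrib)
```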
  • the information processing device 1 can stably extract appearance feature amounts.
  • the doctor who sees the display on the monitor 2 can check the basis of each appearance feature based on the contribution rate and judge the patient's condition.
  • the appearance feature amount and contribution rate may be output using other communication means such as sound from a speaker or light emission from an LED, instead of using a screen display.
  • In the first embodiment, the appearance features are estimated without weighting the facial expression features and posture features.
  • Alternatively, the appearance features may be estimated by combining the facial expression features and the posture features; in this case, the facial expression features and the posture features are each given a predetermined weight before being used for estimating the appearance features.
  • FIG. 9 is a diagram showing an example of calculation of contribution rate. Descriptions that overlap with the description of FIG. 4 will be omitted as appropriate.
  • the first row in FIG. 9 represents changes in facial features at each time after time t0.
  • the section indicated by the bidirectional arrow A21 and the arrow A22 is a section in which it is difficult to extract reliable facial features.
  • the second row in FIG. 9 represents changes in the posture feature amount at each time after time t0.
  • the section indicated by the bidirectional arrow A23 is a section in which it is difficult to extract reliable posture features.
  • The degree of distress shown in the third row, ahead of the white arrow, is calculated based on such facial expression and posture features.
  • The contribution rate of each of the facial expression and posture features to the degree of distress at each time is calculated as shown in the fourth row.
  • the degree of distress is estimated with the contribution rate of the facial expression feature as 100%.
  • In a section where the recognition reliability of the facial expression features is slightly below the threshold and the recognition reliability of the posture features is slightly above the threshold, the facial expression and posture features are each given a predetermined contribution rate and used in combination to estimate the degree of distress. In the example of FIG. 9, the degree of distress is estimated with the contribution rate of the facial expression features at 85% and the contribution rate of the posture features at 15%, for example.
  • the degree of distress is estimated by setting the contribution rate of the posture feature to 100%.
  • the section from time t12 to time t13 corresponds to the section indicated by arrow A21.
  • the recognition reliability of the facial expression feature is slightly lower than the threshold, and the recognition reliability of the posture feature is slightly higher than the threshold, so for example, the contribution rate of the facial expression feature is set to 85%. , the degree of distress is estimated with the contribution rate of posture features as 15%.
  • the degree of distress is estimated by setting the contribution rate of the facial expression feature to 100%.
  • the section from time t14 to time t15 corresponds to the section indicated by arrow A23.
  • the contribution rates of each of the facial expression feature amount and posture feature amount are calculated in the same manner.
  • the contribution rate shown in FIG. 9 is the contribution rate when estimating the degree of distress at each time by weighting the facial expression feature amount and the posture feature amount, and using a combination of the facial expression feature amount and the posture feature amount.
  • the respective contribution rates of the facial expression feature amount and the posture feature amount change depending on the recognition reliability values of the facial expression feature amount and the posture feature amount, respectively.
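A sketch of this weighted combination (the scaling rule is an illustrative assumption, not the publication's formula): scaling the preset static split by the recognition reliabilities lets both feature amounts contribute while the normally dominant one keeps most of the weight, which is how splits such as the 85% / 15% example above can arise.

```python
from typing import Tuple

def soft_contribution(static: Tuple[float, float],
                      expr_reliability: float,
                      post_reliability: float) -> Tuple[float, float]:
    """(expression %, posture %) for the second embodiment: the preset split
    is scaled by the recognition reliabilities instead of switching hard
    between 100% and 0%."""
    expr = static[0] * expr_reliability
    post = static[1] * post_reliability
    total = expr + post
    if total == 0.0:
        return (50.0, 50.0)  # assumed fallback for the degenerate case
    return (100.0 * expr / total, 100.0 * post / total)
```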
  • the information processing device 1 can stably extract appearance features by using a combination of facial expression features and posture features.
  • the monitor 2 displays information indicating the appearance feature amount as well as information indicating the contribution rate indicating the degree of combination. By visualizing the contribution rate, the doctor viewing the display on the monitor 2 can confirm how each appearance feature was estimated by combining the facial expression feature and the posture feature.
  • FIG. 10 is a block diagram showing another configuration example of the information processing section 12. Among the components shown in FIG. 10, those that are the same as the components described with reference to FIG. 6 are denoted by the same reference numerals, and duplicate explanations will be omitted as appropriate.
  • Patient attribute information is information representing patient attributes such as facial paralysis, bone fracture, and respiratory disease.
  • the static contribution rate calculation unit 34 calculates the static contribution rate based on the patient's attribute information in addition to the information on the contribution rate set in advance for each appearance feature amount.
  • Patient attribute information is obtained by referring to electronic medical records and the like.
  • the static contribution rate may be calculated based only on patient attribute information.
  • For a patient whose attribute information indicates facial paralysis, for example, the static contribution rate calculation unit 34 calculates the static contribution rate so that the contribution rate of the facial expression features becomes low and the contribution rate of the posture features becomes high.
  • Conversely, for a patient whose posture features are expected to be difficult to rely on, for example because of a bone fracture, the static contribution rate is calculated so that the contribution rate of the posture features becomes low and the contribution rate of the facial expression features becomes high.
  • the information on the static contribution rate calculated based also on the attribute information in this way is output to the contribution rate determining section 35 and used for determining the final contribution rate.
  • the information processing device 1 can estimate the appearance feature amount in a manner that the feature amount suitable for the patient's symptoms contributes more.
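A sketch of such an attribute-based adjustment (the attribute names and the 30-point shift are assumptions; the publication only states which contribution should become lower or higher):

```python
from typing import Set, Tuple

def attribute_adjusted_static(base: Tuple[float, float],
                              attributes: Set[str]) -> Tuple[float, float]:
    """Adjust the preset static split (expression %, posture %) using patient
    attribute information taken from, e.g., the electronic medical record."""
    expr, post = base
    if "facial_paralysis" in attributes:
        # Facial expressions are unreliable: shift weight toward posture.
        expr, post = max(expr - 30.0, 0.0), min(post + 30.0, 100.0)
    if "bone_fracture" in attributes:
        # Posture features are hard to rely on: shift weight toward expression.
        expr, post = min(expr + 30.0, 100.0), max(post - 30.0, 0.0)
    total = expr + post
    if total == 0.0:
        return (50.0, 50.0)  # assumed fallback
    return (100.0 * expr / total, 100.0 * post / total)
```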
  • FIG. 11 is a block diagram showing another configuration example of the information processing section 12. Among the components shown in FIG. 11, those that are the same as the components described with reference to FIG. 6 are denoted by the same reference numerals, and duplicate explanations will be omitted as appropriate.
  • the example shown in FIG. 11 differs from the configuration described with reference to FIG. 6 in that a correction section 51 is provided between the contribution rate determination section 35 and the appearance feature calculation section 36.
  • the correction unit 51 is supplied with information on vital signs such as respiration rate and heart rate, which indicate the patient's biological reactions.
  • the correction unit 51 corrects the contribution rate determined by the contribution rate determination unit 35 based on information on the patient's vital signs. For example, information indicating how to correct the contribution rate of each facial expression feature amount and posture feature amount is set in the correction unit 51 according to the content of changes in each vital sign.
  • For example, depending on the changes in the vital signs, the correction unit 51 corrects the contribution rate of the facial expression features used for estimating the degree of distress to be higher, or corrects the contribution rate of the facial expression features used for estimating mandibular breathing to be higher.
  • Information on the contribution rate corrected by the correction unit 51 is output to the appearance feature calculation unit 36.
  • the appearance feature is estimated using the contribution rate corrected by the correction unit 51.
  • the information processing device 1 can estimate the appearance feature amount according to the patient's real-time situation.
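A sketch of such a correction (which vital-sign change triggers which correction is not specified here, so the rule and threshold below are purely illustrative assumptions):

```python
from typing import Dict, Tuple

def correct_contribution(contribution: Tuple[float, float],
                         vitals: Dict[str, float]) -> Tuple[float, float]:
    """Correct the determined (expression %, posture %) using vital signs
    such as respiration rate and heart rate, as the correction unit 51 does."""
    expr, post = contribution
    if vitals.get("respiration_rate", 0.0) > 30.0:  # assumed trigger and threshold
        expr, post = min(expr + 10.0, 100.0), max(post - 10.0, 0.0)
    total = expr + post
    if total == 0.0:
        return (50.0, 50.0)  # assumed fallback
    return (100.0 * expr / total, 100.0 * post / total)
```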
  • FIG. 12 is a diagram illustrating an example of a method for acquiring patient images.
  • a high-resolution whole-body image P11 is captured by camera #11 and supplied to the image acquisition unit 11.
  • A cropping process is performed on the whole-body image P11 to generate a face image P1, as shown in the upper right of FIG. 12. Furthermore, a resolution reduction process is performed on the whole-body image P11 to generate a whole-body image P2, as shown in the lower right of FIG. 12.
  • the face image P1 and the whole body image P2 generated and acquired by the image acquisition unit 11 in this manner are output to the facial expression recognition unit 31 and posture recognition unit 32 in FIG. 6, respectively.
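As a sketch of this image preparation step (the face bounding box is assumed to come from a separate detector, and the 1/4 downscale factor is arbitrary):

```python
import cv2
import numpy as np

def split_patient_image(whole_body_hi_res: np.ndarray,
                        face_box: tuple) -> tuple:
    """Produce the two inputs of FIG. 12 from one high-resolution frame:
    a cropped face image P1 and a reduced-resolution whole-body image P2."""
    x, y, w, h = face_box  # (x, y, width, height) of the detected face region
    face_image = whole_body_hi_res[y:y + h, x:x + w].copy()          # P1
    whole_body = cv2.resize(whole_body_hi_res, None, fx=0.25, fy=0.25,
                            interpolation=cv2.INTER_AREA)            # P2
    return face_image, whole_body
```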
  • FIG. 13 is a diagram illustrating another example of a method for acquiring patient images.
  • a face image P1 is photographed by camera #11-1, and a whole body image P2 is photographed by camera #11-2.
  • Camera #11-2 may be a relatively inexpensive camera with low resolution.
  • a face image P1 taken by camera #11-1 and a whole body image P2 taken by camera #11-2 are supplied to the image acquisition unit 11.
  • the face image and the whole body image may be captured by different cameras.
  • processing such as facial recognition is sometimes performed on devices around patients.
  • the patient's face may not be recognized because a respirator or the like is attached to the patient's face.
  • FIG. 14 is a diagram showing an example of extraction of facial expression features.
  • the position of the patient's face may be specified based on the posture feature extracted from the whole body image P2, and the face image P1 may be generated based on the face position information.
  • crop processing is performed to cut out the image based on the position information of the face, and a face image P1-1 in which the periphery of the patient's face is enlarged is generated.
  • a message is displayed notifying that the appearance feature amount cannot be displayed because facial expressions cannot be recognized.
  • the message shown in FIG. 15 is displayed, for example, when it is impossible to display an appearance feature with a high contribution rate of the facial expression feature. If an appearance feature with a high contribution rate of the posture feature cannot be displayed, a message is displayed notifying that the appearance feature cannot be displayed because the posture cannot be recognized.
  • A doctor who looks at the display on the monitor 2 can thus tell whether an appearance feature is not displayed because reliable facial expression features could not be extracted, or because reliable posture features could not be extracted.
  • When multiple patients are monitored and the performance or capacity of the monitoring system is insufficient, monitoring may be performed according to priorities set, for example, as follows.
    - Patients designated by the doctor are monitored with priority.
    - Patients with large body movements are automatically extracted and monitored with priority.
    - Patients with distressed facial expressions are automatically extracted and monitored with priority.
    - Patients with a low respiration rate are automatically extracted and monitored with priority.
    - Monitoring is performed according to a severity level calculated from vital signs and various test data.
  • The facial expression features and posture features may instead be extracted on the camera side. In that case, the camera outputs only the feature amount information rather than the patient image itself, and the facial expression features and posture features extracted by the camera are transmitted from the camera.
  • a small hole may be formed in an object in the room where the patient is, and the patient image may be taken through the hole. Further, a feature detection model corresponding to a low-resolution patient image may be prepared.
  • a specific action may be counted based on the patient's posture, and the count result may be displayed as an appearance feature amount. For example, if a patient frequently touches his or her abdomen, information such as the duration of touching and the number of times the patient touches is displayed on the monitor 2.
  • the series of processes described above can be executed by hardware or software.
  • a program constituting the software is installed from a program recording medium into a computer built into dedicated hardware or a general-purpose personal computer.
  • FIG. 16 is a block diagram showing an example of the hardware configuration of a computer that executes the above-described series of processes using a program.
  • the information processing device 1 has a configuration similar to that shown in FIG. 16.
  • A CPU (Central Processing Unit) 1001, a ROM (Read Only Memory), and a RAM (Random Access Memory) 1003 are connected to one another by a bus 1004.
  • An input/output interface 1005 is further connected to the bus 1004.
  • An input section 1006, an output section 1007, a storage section 1008, a communication section 1009, and a drive 1010 are connected to the input/output interface 1005.
  • the drive 1010 drives a removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
  • The CPU 1001 executes the series of processes described above by, for example, loading a program stored in the storage unit 1008 into the RAM 1003 via the input/output interface 1005 and the bus 1004, and executing it.
  • a program executed by the CPU 1001 is installed in the storage unit 1008 by being recorded on a removable medium 1011 or provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital broadcasting.
  • The program executed by the computer may be a program in which processing is performed chronologically in the order described in this specification, or a program in which processing is performed in parallel or at necessary timing, such as when a call is made.
  • In this specification, a system means a set of multiple components (devices, modules (parts), etc.), regardless of whether all the components are in the same housing. Therefore, multiple devices housed in separate housings and connected via a network, and a single device with multiple modules housed in one housing, are both systems.
  • the present technology can take a cloud computing configuration in which one function is shared and jointly processed by multiple devices via a network.
  • each step described in the above flowchart can be executed by one device or can be shared and executed by multiple devices.
  • Furthermore, when one step includes multiple processes, the multiple processes included in that one step can be executed by one device or shared and executed by multiple devices.
  • (1) A patient monitoring system comprising: an acquisition unit that acquires, based on a video showing the appearance of a patient, an expression feature amount that is a feature amount related to the patient's face and a posture feature amount that is a feature amount related to the patient's posture; an estimation unit that estimates the condition of the patient based on at least one of the facial expression feature amount and the posture feature amount; and an output unit that outputs an estimation result of the condition of the patient.
  • the patient monitoring system estimates the condition of the patient based on the posture feature amount when the patient's face direction is not a predetermined direction.
  • the estimating unit estimates at least one of the patient's emotional state, the patient's conscious state, and the patient's breathing state as the patient's state. patient monitoring system as described in .
  • the acquisition unit acquires the facial expression feature extracted from a facial image showing the patient's face, and acquires the posture feature extracted from a whole body image showing the patient's skeleton.
  • the patient monitoring system according to any one of 5).
  • (7) The patient monitoring system according to any one of (1) to (6), wherein the output unit outputs, together with the estimation result of the patient's condition, information on a contribution rate representing the contribution of each of the facial expression feature amount and the posture feature amount to the estimation result of the patient's condition.
  • (8) The patient according to any one of (1) to (7), wherein the estimation unit estimates the condition of the patient based on the facial expression feature amount and the posture feature amount, which are weighted based on their reliability. monitoring system.
  • (9) The patient monitoring system according to (7) or (8), wherein the estimation unit estimates the condition of the patient based on the contribution rate according to the condition set by the user as a monitoring target.
  • (10) The patient monitoring system according to any one of (7) to (9), wherein the estimation unit estimates the patient's condition based on the contribution rate calculated based on the patient's attribute information.
  • (11) The patient monitoring system, wherein the estimating unit estimates the condition of the patient based on the contribution rate corrected based on vital signs that are information indicating biological reactions of the patient.
  • (12) The patient monitoring system according to any one of (6) to (11), wherein the acquisition unit acquires the facial expression feature amount and the posture feature amount extracted using an inference model generated by machine learning.
  • (13) The patient monitoring system according to any one of (1) to (12), wherein the estimation unit estimates the condition of the patient using an inference model generated by machine learning.
  • (14) A patient monitoring method in which a patient monitoring system: acquires, based on a video showing the appearance of a patient, an expression feature amount that is a feature amount related to the patient's face and a posture feature amount that is a feature amount related to the patient's posture; estimates the condition of the patient based on at least one of the facial expression feature amount and the posture feature amount; and outputs an estimation result of the condition of the patient.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physiology (AREA)
  • Pulmonology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The present technology provides a patient monitoring system, a patient monitoring method, and a program enabling stable extraction of a feature amount of the appearance of a patient. A patient monitoring system according to one aspect of the present technology acquires a facial expression feature amount which is a feature amount related to the face of the patient and a posture feature amount related to the posture of the patient on the basis of an image of the appearance of a patient, estimates the state of the patient on the basis of at least one of the facial expression feature amount and the posture feature amount, and outputs an estimation result of the state of the patient. The present technology is applicable to a monitoring system for observing a state of a patient in an ICU.

Description

Patient monitoring system, patient monitoring method, and program
The present technology relates to a patient monitoring system, a patient monitoring method, and a program, and particularly to a patient monitoring system, a patient monitoring method, and a program that make it possible to stably extract features of a patient's appearance.
In medical settings such as ICUs (Intensive Care Units), medical staff closely monitor not only patients' vital signs but also the patients' general condition. There is therefore a need for a system that can efficiently monitor the condition of patients.
For example, Patent Document 1 discloses a technique for estimating, from the feature amounts of a facial image, the emotions of a patient such as an infant who has difficulty expressing emotions.
International Publication No. 2007/043712
For a patient receiving treatment in an ICU or the like, various devices, including a ventilator, are often attached around the face. It is therefore difficult to continuously acquire feature amounts from images of the patient's face.
In addition, the patient's facial expression may become undetectable when the face turns sideways, and joint points may become undetectable when most of the body is hidden by a blanket or the like. In these cases as well, it becomes difficult to stably extract feature amounts from the patient image.
The present technology was developed in view of this situation, and makes it possible to stably extract features of a patient's appearance.
A patient monitoring system according to one aspect of the present technology includes an acquisition unit that acquires, based on an image showing a patient's appearance, an expression feature amount that is a feature amount related to the patient's face and a posture feature amount that is a feature amount related to the patient's posture; an estimation unit that estimates the patient's condition based on at least one of the facial expression feature amount and the posture feature amount; and an output unit that outputs an estimation result of the patient's condition.
In one aspect of the present technology, an expression feature amount, which is a feature amount related to the patient's face, and a posture feature amount, which is a feature amount related to the patient's posture, are acquired based on an image showing the patient's appearance; the patient's condition is estimated based on at least one of the facial expression feature amount and the posture feature amount; and an estimation result of the patient's condition is output.
FIG. 1 is a diagram illustrating a configuration example of a patient monitoring system according to an embodiment of the present technology.
FIG. 2 is a diagram illustrating an example of estimating appearance features.
FIG. 3 is an enlarged diagram showing a display on a monitor.
FIG. 4 is a diagram showing an example of calculation of a contribution rate.
FIG. 5 is a block diagram showing an example of the functional configuration of an information processing device.
FIG. 6 is a block diagram showing a configuration example of an information processing section.
FIG. 7 is a diagram illustrating an example of a method for acquiring feature amounts and recognition reliability.
FIG. 8 is a flowchart illustrating processing of the information processing device.
FIG. 9 is a diagram showing an example of calculation of a contribution rate.
FIG. 10 is a block diagram showing another configuration example of the information processing section.
FIG. 11 is a block diagram showing another configuration example of the information processing section.
FIG. 12 is a diagram illustrating an example of a method for acquiring patient images.
FIG. 13 is a diagram illustrating another example of a method for acquiring patient images.
FIG. 14 is a diagram showing an example of extraction of facial expression features.
FIG. 15 is an enlarged diagram showing the monitor display when an appearance feature cannot be displayed.
FIG. 16 is a block diagram showing an example of the hardware configuration of a computer.
Hereinafter, modes for implementing the present technology will be described. The explanation will be given in the following order.
1. First embodiment (example without weighting)
2. Second embodiment (example with weighting)
3. Third embodiment (example of calculating the static contribution rate based on patient attribute information)
4. Fourth embodiment (example of correcting the contribution rate based on the patient's biological reactions)
5. Modifications
<<First embodiment (example without weighting)>>
<Overview of the present technology>
FIG. 1 is a diagram illustrating a configuration example of a patient monitoring system according to an embodiment of the present technology.
The patient monitoring system in FIG. 1 is configured by connecting camera #1, which photographs a patient in an ICU or the like, to an information processing device 1 via wired or wireless communication. A monitor 2 is connected to the information processing device 1.
The information processing device 1 and the monitor 2 are installed, for example, inside the ICU. They may instead be installed outside the ICU, for example in another room of the medical facility in which the ICU is located. The patient monitoring system in FIG. 1 is used, for example, by doctors and medical staff as users to monitor the condition of patients in the ICU. In the example of FIG. 1, a single camera #1 that captures one patient is connected to the information processing device 1, but a plurality of cameras, each capturing a different patient, may be connected to the information processing device 1.
Changes in a patient's condition appear not only in vital signs but also in the patient's facial expression and the movement of the entire body. Common image recognition techniques include estimating facial expressions through facial landmark detection to extract a degree of distress from the expression, and classifying behaviors through whole-body skeletal landmark detection to quantify body movement from joint point information.
In the example of FIG. 1, patient video captured by camera #1 is transmitted to the information processing device 1 as indicated by arrow A1. For example, a face video showing the patient's face and a whole-body video showing the patient's whole-body skeleton are transmitted to the information processing device 1 as the patient video.
The information processing device 1 extracts facial landmarks as facial expression feature amounts based on the face video transmitted from camera #1. The information processing device 1 also extracts skeletal landmarks as posture feature amounts based on the whole-body video. The facial expression feature amount is information indicating, for example, the positions of feature points of each part of the patient's face. The posture feature amount is information indicating, for example, the positions of feature points on the patient's body.
The information processing device 1 estimates (extracts) appearance feature amounts indicating the patient's condition based on at least one of the facial expression feature amounts and the posture feature amounts extracted from the patient video. Six types of feature amounts are estimated as appearance feature amounts: degree of distress, degree of sedation, mandibular breathing, shoulder breathing, seesaw breathing (breathing in which the chest sinks and the abdomen expands during inhalation), and amount of body movement.
FIG. 2 is a diagram illustrating an example of estimation of appearance feature amounts.
Among the six types of feature amounts described above, the degree of distress, the degree of sedation, and mandibular breathing are features that appear on the face, and are therefore estimated mainly based on the facial expression feature amounts extracted from the face video P1, as shown in A of FIG. 2.
On the other hand, shoulder breathing, seesaw breathing, and the amount of body movement are features that appear on the body, and are therefore estimated mainly based on the posture feature amounts extracted from the whole-body video P2, as shown in B of FIG. 2.
As described above, the degree of distress, the degree of sedation, and mandibular breathing are appearance feature amounts estimated mainly from the facial expression feature amounts. In the information processing device 1, however, when the recognition reliability of the facial expression feature amounts is low, for example because the patient's face is turned sideways or covered with a mask, these appearance feature amounts are estimated based also on the posture feature amounts. When the patient's face is not oriented in a predetermined direction, such as facing the front, the appearance feature amounts are estimated mainly based on the posture feature amounts rather than the facial expression feature amounts.
Similarly, shoulder breathing, seesaw breathing, and the amount of body movement are appearance feature amounts estimated mainly from the posture feature amounts. In the information processing device 1, however, when the recognition reliability of the posture feature amounts is low, for example because the patient's entire body is covered with a blanket, these appearance feature amounts are estimated based also on the facial expression feature amounts.
This makes it possible to stably estimate both the appearance feature amounts for features that appear on the face and the appearance feature amounts for features that appear on the body.
In the information processing device 1, contribution rates are calculated that represent the proportions in which the facial expression feature amounts and the posture feature amounts each contributed to the estimation results of the degree of distress, the degree of sedation, and mandibular breathing, which are appearance feature amounts related to features appearing on the face. When the recognition reliability of the facial expression feature amounts is high, the contribution rate of the facial expression feature amounts to these estimation results is usually higher than that of the posture feature amounts.
Contribution rates are likewise calculated that represent the proportions in which the facial expression feature amounts and the posture feature amounts each contributed to the estimation results of shoulder breathing, seesaw breathing, and the amount of body movement, which are appearance feature amounts related to features appearing on the body. When the recognition reliability of the posture feature amounts is high, the contribution rate of the posture feature amounts to these estimation results is usually higher than that of the facial expression feature amounts.
Information on the calculated contribution rates is output to the monitor 2 as indicated by arrow A2 in FIG. 1 and displayed, together with information indicating the appearance feature amounts, on a monitoring screen used for monitoring the patient.
FIG. 3 is an enlarged view of the display on the monitor 2 in FIG. 1.
As shown in FIG. 3, a waveform W1 representing the change in the degree of distress is displayed in the upper part of the screen, and below it, the contribution rates of the facial expression feature amount and the posture feature amount to the degree of distress at each time are displayed. In the upper graph of FIG. 3, the horizontal axis represents time and the vertical axis represents the value of the degree of distress; each appearance feature amount, including the degree of distress, is expressed as a numerical value. In the lower graph of FIG. 3, the horizontal axis represents time and the vertical axis represents the contribution rate, which is expressed, for example, as a value from 0 to 100%.
In this way, the monitor 2 displays the patient's appearance feature amount at each time together with the contribution rates of the facial expression feature amount and the posture feature amount to the appearance feature amount at that time. A doctor, for example, viewing the display on the monitor 2 can check the basis of each appearance feature amount from the contribution rates.
FIG. 4 is a diagram showing an example of calculating the contribution rates in FIG. 3.
The first row of FIG. 4 represents the change in the facial expression feature amount at each time from time t0 onward. The sections indicated by the two-headed arrows A11 and A12 are sections in which it is difficult to extract reliable facial expression feature amounts because, for example, the patient's face is turned sideways or the patient is wearing a mask. In these sections, the recognition reliability of the facial expression feature amount is, for example, at or below a threshold.
The second row of FIG. 4 represents the change in the posture feature amount at each time from time t0 onward. The section indicated by the two-headed arrow A13 is a section in which it is difficult to extract reliable posture feature amounts because, for example, the entire body is covered with a blanket. In this section, the recognition reliability of the posture feature amount is, for example, at or below the threshold.
When the degree of distress shown in the third row, ahead of the white arrow, is calculated based on such facial expression feature amounts and posture feature amounts, the contribution rates of the facial expression feature amount and the posture feature amount to the degree of distress at each time are calculated as shown in the fourth row.
As described with reference to FIG. 2, the degree of distress is an appearance feature amount estimated mainly from the facial expression feature amount. In the example of FIG. 4, in the section from time t0 to time t1, in which reliable facial expression feature amounts are extracted, the degree of distress is estimated with the contribution rate of the facial expression feature amount set to 100%.
In the section from time t1 to time t2, in which it is difficult to extract reliable facial expression feature amounts, the degree of distress is estimated with the contribution rate of the posture feature amount set to 100%, as indicated by the hatching. The section from time t1 to time t2 corresponds to the section indicated by arrow A11.
In the section from time t2 to time t3, in which reliable facial expression feature amounts are extracted, the degree of distress is estimated with the contribution rate of the facial expression feature amount set to 100%.
In the section from time t3 to time t4, in which it is again difficult to extract reliable facial expression feature amounts, the degree of distress is estimated with the contribution rate of the posture feature amount set to 100%, as in the section from time t1 to time t2. The section from time t3 to time t4 corresponds to the section indicated by arrow A12.
From time t4 onward, the contribution rates of the facial expression feature amount and the posture feature amount are calculated in the same manner. That is, the contribution rates shown in FIG. 4 are the contribution rates obtained when the degree of distress at each time is estimated, without weighting the facial expression feature amount and the posture feature amount, based only on the facial expression feature amount (with its contribution rate set to 100%) or based only on the posture feature amount (with its contribution rate set to 100%).
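As a concrete illustration of this all-or-nothing behavior, the following minimal Python sketch selects the contribution rates for a face-dominant appearance feature amount such as the degree of distress from the two recognition reliabilities. The threshold value and the function names are assumptions made for illustration only and are not taken from the embodiment.

```python
# Minimal sketch (not the actual implementation): hard-switch contribution
# selection for a face-dominant appearance feature amount such as the degree
# of distress. The threshold and names are illustrative assumptions.

RELIABILITY_THRESHOLD = 0.5  # assumed boundary for "reliable" recognition

def distress_contributions(face_reliability: float, posture_reliability: float):
    """Return (face_contribution, posture_contribution) in percent.

    Without weighting, the estimate relies entirely on the facial expression
    feature amount while it is reliable, and falls back entirely to the
    posture feature amount otherwise (cf. the sections t1-t2 and t3-t4).
    """
    if face_reliability > RELIABILITY_THRESHOLD:
        return 100.0, 0.0
    if posture_reliability > RELIABILITY_THRESHOLD:
        return 0.0, 100.0
    # Neither feature amount is reliable; no contribution can be assigned
    # (see the modification where a reason is shown on the monitor instead).
    return 0.0, 0.0

print(distress_contributions(0.9, 0.8))  # (100.0, 0.0)
print(distress_contributions(0.2, 0.8))  # (0.0, 100.0)
```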
As described above, in the information processing device 1 constituting the patient monitoring system, the appearance feature amounts are estimated based on at least one of the facial expression feature amounts and the posture feature amounts. By using multiple types of feature amounts to estimate an appearance feature amount, the appearance feature amount can be extracted stably.
The monitor 2 constituting the patient monitoring system displays information indicating the contribution rates together with information indicating the appearance feature amounts. Because the contribution rates are visualized, a doctor viewing the display on the monitor 2 can check the basis of each appearance feature amount from the contribution rates and make a final judgment about the patient's condition.
The series of operations by which the information processing device 1 displays the appearance feature amounts together with the contribution rates as described above will be explained later with reference to a flowchart.
<Configuration of the information processing device 1>
FIG. 5 is a block diagram showing an example of the functional configuration of the information processing device 1. At least some of the functional units shown in FIG. 5 are realized by a CPU of a computer constituting the information processing device 1 executing a predetermined program.
The information processing device 1 includes a video acquisition unit 11, an information processing unit 12, and a display control unit 13.
The video acquisition unit 11 acquires the patient video captured by camera #1. Acquisition of the patient video may be started in response to a patient to be monitored being designated by a user such as a doctor or a member of the medical staff. The patient video acquired by the video acquisition unit 11 is output to the information processing unit 12.
The information processing unit 12 estimates the patient's appearance feature amounts based on the patient video supplied from the video acquisition unit 11. The information processing unit 12 also calculates the contribution rates used for estimating the appearance feature amounts. Information indicating the appearance feature amounts and information indicating the contribution rates are output to the display control unit 13.
The display control unit 13 causes the monitor 2 to display the appearance feature amounts together with the contribution rates based on the information supplied from the appearance feature amount calculation unit 36, which is described later. The display control unit 13 functions as an output unit that outputs the estimation results of the patient's condition.
FIG. 6 is a block diagram showing a configuration example of the information processing unit 12 in FIG. 5.
The information processing unit 12 includes a facial expression recognition unit 31, a posture recognition unit 32, a dynamic contribution rate calculation unit 33, a static contribution rate calculation unit 34, a contribution rate determination unit 35, and an appearance feature amount calculation unit 36. Of the patient video supplied from the video acquisition unit 11, the face video P1 is input to the facial expression recognition unit 31 and the whole-body video P2 is input to the posture recognition unit 32.
The facial expression recognition unit 31 extracts facial expression feature amounts from the face video P1 supplied from the video acquisition unit 11. The facial expression recognition unit 31 also calculates a recognition reliability representing the degree of confidence in the recognition of the extracted facial expression feature amounts.
For example, a facial expression feature extraction model M1 configured by a neural network or the like, as shown in A of FIG. 7, is prepared in advance in the facial expression recognition unit 31. The facial expression feature extraction model M1, generated by machine learning, is an inference model that takes the face video P1 as input and outputs facial expression feature amounts and the recognition reliability of those feature amounts.
The facial expression recognition unit 31 extracts the facial expression feature amounts and calculates the recognition reliability by inputting each frame constituting the face video P1 into the facial expression feature extraction model M1. For example, a facial expression feature amount and a recognition reliability are obtained for each frame constituting the face video P1.
Alternatively, the facial expression feature amount and recognition reliability for each frame may be obtained by analyzing each frame constituting the face video P1.
The information on the facial expression feature amounts extracted by the facial expression recognition unit 31 is output to the appearance feature amount calculation unit 36, and the information on the recognition reliability is output to the dynamic contribution rate calculation unit 33.
The posture recognition unit 32 extracts posture feature amounts from the whole-body video P2 supplied from the video acquisition unit 11. The posture recognition unit 32 also calculates a recognition reliability representing the degree of confidence in the recognition of the extracted posture feature amounts.
For example, a posture feature extraction model M2 configured by a neural network or the like, as shown in B of FIG. 7, is prepared in advance in the posture recognition unit 32. The posture feature extraction model M2, generated by machine learning, is an inference model that takes the whole-body video P2 as input and outputs posture feature amounts and the recognition reliability of those feature amounts.
The posture recognition unit 32 extracts the posture feature amounts and calculates the recognition reliability by inputting each frame constituting the whole-body video P2 into the posture feature extraction model M2. For example, a posture feature amount and a recognition reliability are obtained for each frame constituting the whole-body video P2.
Alternatively, the posture feature amount and recognition reliability for each frame may be obtained by analyzing each frame constituting the whole-body video P2.
The information on the posture feature amounts extracted by the posture recognition unit 32 is output to the appearance feature amount calculation unit 36, and the information on the recognition reliability is output to the dynamic contribution rate calculation unit 33.
In this way, the facial expression recognition unit 31 and the posture recognition unit 32 function as an acquisition unit that acquires the facial expression feature amounts, which are feature amounts related to the patient's face, and the posture feature amounts, which are feature amounts related to the patient's posture.
The dynamic contribution rate calculation unit 33 calculates dynamic contribution rates based on the recognition reliability calculated by the facial expression recognition unit 31 and the recognition reliability calculated by the posture recognition unit 32. The dynamic contribution rate is a contribution rate that changes according to the recognition reliability values of the facial expression feature amount and the posture feature amount.
For example, the dynamic contribution rates are calculated by setting the contribution rate of the feature amount with the higher recognition reliability to 100% and the contribution rate of the feature amount with the lower recognition reliability to 0%. Alternatively, arbitrary values other than 0% and 100% may be calculated as the dynamic contribution rates of the facial expression feature amount and the posture feature amount according to the magnitude of the recognition reliability. The information on the dynamic contribution rates calculated by the dynamic contribution rate calculation unit 33 is output to the contribution rate determination unit 35.
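A minimal sketch of how such a dynamic contribution rate calculation might look is shown below. Both the all-or-nothing rule and a proportional split of the reliabilities are illustrated; the interface and the proportional variant are assumptions for illustration, not the actual implementation of the dynamic contribution rate calculation unit 33.

```python
# Illustrative sketch of dynamic contribution rate calculation (hypothetical;
# the embodiment leaves the exact mapping open). Two variants are shown.

def dynamic_contribution_binary(face_rel: float, posture_rel: float):
    """100% to the more reliable feature amount, 0% to the other."""
    return (100.0, 0.0) if face_rel >= posture_rel else (0.0, 100.0)

def dynamic_contribution_proportional(face_rel: float, posture_rel: float):
    """Split the contribution in proportion to the recognition reliabilities."""
    total = face_rel + posture_rel
    if total == 0.0:
        return 0.0, 0.0
    return 100.0 * face_rel / total, 100.0 * posture_rel / total

print(dynamic_contribution_binary(0.9, 0.6))        # (100.0, 0.0)
print(dynamic_contribution_proportional(0.9, 0.6))  # (60.0, 40.0)
```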
The static contribution rate calculation unit 34 calculates static contribution rates, which are contribution rates defined for each appearance feature amount. For example, the contribution rates of the facial expression feature amount and the posture feature amount are set in advance for each appearance feature amount such as the degree of distress. The static contribution rates may also be calculated according to the appearance feature amount (patient condition) set by the user as the monitoring target; for example, when a measurement mode for a distressed state is set, the contribution rate of the facial expression feature amount is increased.
For example, the static contribution rate calculation unit 34 calculates the static contribution rates so that, for the degree of distress as an appearance feature amount, the contribution rate of the facial expression feature amount is higher than that of the posture feature amount. Similarly, it calculates the static contribution rates so that, for shoulder breathing as an appearance feature amount, the contribution rate of the posture feature amount is higher than that of the facial expression feature amount. The information on the static contribution rates calculated by the static contribution rate calculation unit 34 is output to the contribution rate determination unit 35.
The contribution rate determination unit 35 determines, for each appearance feature amount, the final contribution rates used for estimating that appearance feature amount, based on the dynamic contribution rates calculated by the dynamic contribution rate calculation unit 33 and the static contribution rates calculated by the static contribution rate calculation unit 34. For example, the final contribution rates are determined by performing a predetermined calculation based on the dynamic contribution rates and the static contribution rates. The information on the contribution rates determined by the contribution rate determination unit 35 is output to the appearance feature amount calculation unit 36.
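The embodiment does not specify this predetermined calculation. Purely for illustration, the sketch below assumes one plausible choice: multiplying the dynamic and static rates per feature amount and renormalizing them to 100%.

```python
# Hypothetical sketch of combining dynamic and static contribution rates in
# the contribution rate determination unit 35. Multiplying and renormalizing
# is an assumption, not the calculation described in the embodiment.

def final_contribution(dynamic: tuple, static: tuple) -> tuple:
    face = dynamic[0] * static[0]
    posture = dynamic[1] * static[1]
    total = face + posture
    if total == 0.0:
        return 0.0, 0.0
    return 100.0 * face / total, 100.0 * posture / total

# Degree of distress: the static rates favor the face (e.g. 70/30), but the
# dynamic rates favor posture because the face is occluded (e.g. 20/80).
print(final_contribution((20.0, 80.0), (70.0, 30.0)))  # about (36.8, 63.2)
```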
The appearance feature amount calculation unit 36 performs a predetermined calculation based on the facial expression feature amounts extracted by the facial expression recognition unit 31, the posture feature amounts extracted by the posture recognition unit 32, and the contribution rates determined by the contribution rate determination unit 35, and thereby estimates the appearance feature amounts. The appearance feature amounts estimated by the appearance feature amount calculation unit 36 are output to the display control unit 13 in FIG. 5 together with the information on the contribution rates.
The estimation of the appearance feature amounts may also be performed using an inference model generated by machine learning. In this case, the appearance feature amount calculation unit 36 is provided with an inference model that takes, for example, the facial expression feature amounts, the posture feature amounts, and the contribution rates as inputs and outputs the respective appearance feature amounts. The appearance feature amount calculation unit 36 functions as an estimation unit that estimates the patient's condition based on at least one of the facial expression feature amounts and the posture feature amounts.
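One simple reading of this predetermined calculation is a contribution-weighted combination of per-source scores, as in the hedged sketch below. The two sub-estimators are stand-ins for illustration and are not the models described in the embodiment.

```python
# Minimal sketch (under stated assumptions) of the appearance feature amount
# calculation: two hypothetical sub-estimators score the degree of distress
# from each feature amount separately, and the final value is their blend
# weighted by the determined contribution rates.

def estimate_distress(face_features, posture_features,
                      face_contrib: float, posture_contrib: float,
                      score_from_face, score_from_posture) -> float:
    """face_contrib + posture_contrib is expected to total 100 (percent)."""
    face_score = score_from_face(face_features) if face_contrib > 0 else 0.0
    posture_score = score_from_posture(posture_features) if posture_contrib > 0 else 0.0
    return (face_contrib * face_score + posture_contrib * posture_score) / 100.0

# Usage with stand-in scoring functions (placeholders for learned models):
value = estimate_distress(
    face_features=None, posture_features=None,
    face_contrib=85.0, posture_contrib=15.0,
    score_from_face=lambda _: 0.7, score_from_posture=lambda _: 0.4)
print(value)  # 0.655
```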
<Operation of the information processing device 1>
The processing of the information processing device 1 having the above configuration will be described with reference to the flowchart of FIG. 8. The processing in FIG. 8 is started, for example, when patient video is transmitted from camera #1.
In step S1, the video acquisition unit 11 (FIG. 5) receives and acquires the patient video transmitted from camera #1.
In step S2, the facial expression recognition unit 31 of the information processing unit 12 extracts facial expression feature amounts from the face video P1 supplied from the video acquisition unit 11.
In step S3, the posture recognition unit 32 extracts posture feature amounts from the whole-body video P2 supplied from the video acquisition unit 11.
In step S4, the dynamic contribution rate calculation unit 33 calculates the dynamic contribution rates based on the recognition reliability calculated by the facial expression recognition unit 31 and the recognition reliability calculated by the posture recognition unit 32.
In step S5, the static contribution rate calculation unit 34 calculates the static contribution rates for each appearance feature amount.
In step S6, the contribution rate determination unit 35 determines the contribution rates used for estimating the appearance feature amounts based on the dynamic contribution rates calculated by the dynamic contribution rate calculation unit 33 and the static contribution rates calculated by the static contribution rate calculation unit 34.
In step S7, the appearance feature amount calculation unit 36 estimates the appearance feature amounts based on the facial expression feature amounts extracted by the facial expression recognition unit 31, the posture feature amounts extracted by the posture recognition unit 32, and the contribution rates determined by the contribution rate determination unit 35.
In step S8, the display control unit 13 causes the monitor 2 to display the appearance feature amounts estimated by the appearance feature amount calculation unit 36 together with the contribution rates. The monitor 2 displays the six types of appearance feature amounts, namely the degree of distress, degree of sedation, mandibular breathing, shoulder breathing, seesaw breathing, and amount of body movement, together with their respective contribution rates.
The series of processes described above is continued, for example, until the transmission of the patient video from camera #1 ends.
Through the above processing, the information processing device 1 can stably extract the appearance feature amounts. A doctor viewing the display on the monitor 2 can check the basis of each appearance feature amount from the contribution rates and judge the patient's condition.
The appearance feature amounts and contribution rates may also be output by means other than screen display, such as audio from a speaker or light emission from an LED.
<<Second embodiment (example with weighting)>>
The case of estimating the appearance feature amounts without weighting the facial expression feature amounts and the posture feature amounts has been described above, but the appearance feature amounts may also be estimated by using the facial expression feature amounts and the posture feature amounts in combination. In that case, the facial expression feature amounts and the posture feature amounts are each given a predetermined weight before being used for estimating the appearance feature amounts.
FIG. 9 is a diagram showing an example of calculating the contribution rates. Descriptions overlapping with those of FIG. 4 are omitted as appropriate.
The first row of FIG. 9 represents the change in the facial expression feature amount at each time from time t0 onward. The sections indicated by the two-headed arrows A21 and A22 are sections in which it is difficult to extract reliable facial expression feature amounts.
The second row of FIG. 9 represents the change in the posture feature amount at each time from time t0 onward. The section indicated by the two-headed arrow A23 is a section in which it is difficult to extract reliable posture feature amounts.
When the degree of distress shown in the third row, ahead of the white arrow, is calculated based on such facial expression feature amounts and posture feature amounts, the contribution rates of the facial expression feature amount and the posture feature amount to the degree of distress at each time are calculated as shown in the fourth row.
For example, in the section from time t0 to time t11, the recognition reliability of the facial expression feature amount is higher than the threshold, so the degree of distress is estimated with the contribution rate of the facial expression feature amount set to 100%.
In the section from time t11 to time t12, the recognition reliability of the facial expression feature amount is slightly below the threshold and the recognition reliability of the posture feature amount is slightly above it, so the degree of distress is estimated by combining the facial expression feature amount and the posture feature amount at predetermined contribution rates. In the example of FIG. 9, the degree of distress is estimated, for example, with the contribution rate of the facial expression feature amount set to 85% and that of the posture feature amount set to 15%.
In the section from time t12 to time t13, the recognition reliability of the facial expression feature amount is lower than the threshold, so the degree of distress is estimated with the contribution rate of the posture feature amount set to 100%. The section from time t12 to time t13 corresponds to the section indicated by arrow A21.
In the section from time t13 to time t14, the recognition reliability of the facial expression feature amount is slightly below the threshold and the recognition reliability of the posture feature amount is slightly above it, so the degree of distress is estimated, for example, with the contribution rate of the facial expression feature amount set to 85% and that of the posture feature amount set to 15%.
In the section from time t14 to time t15, the recognition reliability of the facial expression feature amount is higher than the threshold, so the degree of distress is estimated with the contribution rate of the facial expression feature amount set to 100%. The section from time t14 to time t15 corresponds to the section indicated by arrow A23.
From time t15 onward, the contribution rates of the facial expression feature amount and the posture feature amount are calculated in the same manner. That is, the contribution rates shown in FIG. 9 are the contribution rates obtained when the degree of distress at each time is estimated by weighting the facial expression feature amount and the posture feature amount and using them in combination. The contribution rates of the facial expression feature amount and the posture feature amount change according to their respective recognition reliability values.
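The following sketch illustrates this soft blending near the threshold. The 85/15 split is taken from the example above, while the threshold, the margin defining "slightly below", and the overall rule are assumptions about one possible implementation.

```python
# Illustrative sketch of the weighted (second-embodiment) behavior: a soft
# blend near the reliability threshold instead of a hard switch. The margin
# and threshold values are assumptions; only the 85/15 split comes from the
# example above.

THRESHOLD = 0.5
MARGIN = 0.1  # assumed "slightly below / slightly above" band

def blended_contribution(face_rel: float, posture_rel: float):
    if face_rel > THRESHOLD:
        return 100.0, 0.0
    if face_rel > THRESHOLD - MARGIN and posture_rel > THRESHOLD:
        return 85.0, 15.0       # face slightly unreliable, posture usable
    return 0.0, 100.0

for face_rel, posture_rel in [(0.8, 0.7), (0.45, 0.6), (0.2, 0.7)]:
    print(blended_contribution(face_rel, posture_rel))
# (100.0, 0.0), (85.0, 15.0), (0.0, 100.0)
```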
By using the facial expression feature amounts and the posture feature amounts in combination, the information processing device 1 can stably extract the appearance feature amounts.
The monitor 2 displays information indicating the appearance feature amounts together with information indicating the contribution rates, which represent the degree of combination. Because the contribution rates are visualized, a doctor viewing the display on the monitor 2 can check how each appearance feature amount was estimated from the combination of the facial expression feature amounts and the posture feature amounts.
<<Third embodiment (example of calculating static contribution rates based on patient attribute information)>>
FIG. 10 is a block diagram showing another configuration example of the information processing unit 12. In the configuration shown in FIG. 10, the same components as those described with reference to FIG. 6 are denoted by the same reference numerals, and overlapping descriptions are omitted as appropriate.
The example of FIG. 10 differs from the configuration described with reference to FIG. 6 in that patient attribute information is input to the static contribution rate calculation unit 34. The patient attribute information represents attributes of the patient, such as facial paralysis, bone fractures, and respiratory diseases.
The static contribution rate calculation unit 34 calculates the static contribution rates based on the patient attribute information in addition to the contribution rate information set in advance for each appearance feature amount. The patient attribute information is obtained, for example, by referring to an electronic medical record. The static contribution rates may also be calculated based only on the patient attribute information.
For example, when it is highly likely that reliable facial expression feature amounts are difficult to extract due to facial paralysis, burns, or the like, the static contribution rate calculation unit 34 calculates the static contribution rates so that the contribution rate of the facial expression feature amounts becomes low and that of the posture feature amounts becomes high. Conversely, when it is highly likely that reliable posture feature amounts are difficult to extract due to, for example, a fracture of an arm or a respiratory disease, the static contribution rate calculation unit 34 calculates the static contribution rates so that the contribution rate of the posture feature amounts becomes low and that of the facial expression feature amounts becomes high. The static contribution rate information calculated in this way, based also on the attribute information, is output to the contribution rate determination unit 35 and used to determine the final contribution rates.
By calculating the static contribution rates based on the patient attribute information, the information processing device 1 can estimate the appearance feature amounts in such a way that the feature amounts suited to the patient's symptoms contribute more.
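As an illustration only, the sketch below adjusts assumed default static rates from a set of attribute labels. The attribute names, default values, and scaling factor are hypothetical and not taken from the embodiment.

```python
# Assumed sketch of adjusting static contribution rates from patient
# attribute information (e.g. entries read from an electronic record).
# Rates are stored as (face %, posture %) per appearance feature amount.

DEFAULT_STATIC = {"distress": (70.0, 30.0), "shoulder_breathing": (20.0, 80.0)}

def static_rates_with_attributes(attributes: set):
    rates = dict(DEFAULT_STATIC)
    face_unreliable = attributes & {"facial_paralysis", "facial_burn"}
    posture_unreliable = attributes & {"arm_fracture", "respiratory_disease"}
    for name, (face, posture) in list(rates.items()):
        if face_unreliable:
            face = face * 0.5              # lower the facial contribution
            posture = 100.0 - face
        elif posture_unreliable:
            posture = posture * 0.5        # lower the posture contribution
            face = 100.0 - posture
        rates[name] = (face, posture)
    return rates

print(static_rates_with_attributes({"facial_paralysis"}))
# {'distress': (35.0, 65.0), 'shoulder_breathing': (10.0, 90.0)}
```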
<<Fourth embodiment (example of correcting contribution rates based on the patient's biological reactions)>>
FIG. 11 is a block diagram showing another configuration example of the information processing unit 12. In the configuration shown in FIG. 11, the same components as those described with reference to FIG. 6 are denoted by the same reference numerals, and overlapping descriptions are omitted as appropriate.
The example of FIG. 11 differs from the configuration described with reference to FIG. 6 in that a correction unit 51 is provided between the contribution rate determination unit 35 and the appearance feature amount calculation unit 36. The correction unit 51 is supplied with vital sign information, such as respiration rate and heart rate, indicating the patient's biological reactions.
The correction unit 51 corrects the contribution rates determined by the contribution rate determination unit 35 based on the information on the patient's vital signs. For example, information indicating how to correct the contribution rates of the facial expression feature amount and the posture feature amount according to the type of change in each vital sign is set in the correction unit 51.
In general, when a patient feels strong pain, the heart rate often rises. For example, when the heart rate is higher than a threshold, the correction unit 51 corrects the contribution rates used for estimating the degree of distress so that the contribution rate of the facial expression feature amount becomes higher.
Similarly, when the respiration rate is higher than a threshold, the correction unit 51 corrects the contribution rates used for estimating mandibular breathing so that the contribution rate of the facial expression feature amount becomes higher.
The information on the contribution rates corrected by the correction unit 51 is output to the appearance feature amount calculation unit 36, which estimates the appearance feature amounts using the corrected contribution rates.
By correcting the contribution rates based on information indicating the patient's biological reactions, the information processing device 1 can estimate appearance feature amounts that reflect the patient's situation in real time.
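A hedged sketch of such a correction is shown below. The thresholds and the size of the shift toward the facial expression feature amount are illustrative values, not values given in the embodiment.

```python
# Assumed sketch of the correction unit 51: contribution rates determined per
# appearance feature amount are nudged toward the facial expression feature
# amount when the relevant vital sign exceeds a threshold.

HEART_RATE_THRESHOLD = 100.0   # beats per minute (assumed)
RESPIRATION_THRESHOLD = 25.0   # breaths per minute (assumed)

def correct_rates(rates: dict, heart_rate: float, respiration_rate: float):
    corrected = dict(rates)

    def shift_toward_face(name: str, points: float):
        face, posture = corrected[name]
        face = min(100.0, face + points)
        corrected[name] = (face, 100.0 - face)

    if heart_rate > HEART_RATE_THRESHOLD:
        shift_toward_face("distress", 10.0)            # pain raises heart rate
    if respiration_rate > RESPIRATION_THRESHOLD:
        shift_toward_face("mandibular_breathing", 10.0)
    return corrected

print(correct_rates({"distress": (70.0, 30.0),
                     "mandibular_breathing": (60.0, 40.0)}, 110.0, 20.0))
# {'distress': (80.0, 20.0), 'mandibular_breathing': (60.0, 40.0)}
```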
<<Modifications>>
<Methods of acquiring patient video>
・Patient video acquisition method 1
FIG. 12 is a diagram illustrating an example of a method of acquiring patient video.
In the example of FIG. 12, a high-resolution whole-body video P11 is captured by camera #11 and supplied to the video acquisition unit 11.
The video acquisition unit 11 performs crop processing on the whole-body video P11 to generate the face video P1, as shown in the upper right of FIG. 12. It also performs resolution-reduction processing on the whole-body video P11 to generate the whole-body video P2, as shown in the lower right of FIG. 12.
The face video P1 and the whole-body video P2 thus generated and acquired by the video acquisition unit 11 are output to the facial expression recognition unit 31 and the posture recognition unit 32 in FIG. 6, respectively.
Using the low-resolution whole-body video P2 makes it possible to reduce the processing cost of extracting the posture feature amounts and calculating the recognition reliability.
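Assuming an OpenCV-style pipeline, the cropping and resolution reduction could look like the following sketch. The bounding box, the scale factor, and the use of cv2 and NumPy are assumptions for illustration; the embodiment does not specify the implementation.

```python
# Sketch of acquisition method 1: crop the face region and downscale the
# whole frame from one high-resolution input (assumed cv2/NumPy approach).
import cv2
import numpy as np

def split_patient_frame(frame: np.ndarray, face_box: tuple,
                        scale: float = 0.25):
    """Return (face video frame P1, low-resolution whole-body frame P2)."""
    x, y, w, h = face_box
    face_p1 = frame[y:y + h, x:x + w].copy()                # crop the face
    body_p2 = cv2.resize(frame, None, fx=scale, fy=scale)   # reduce resolution
    return face_p1, body_p2

frame = np.zeros((2160, 3840, 3), dtype=np.uint8)           # e.g. a 4K frame
p1, p2 = split_patient_frame(frame, face_box=(1800, 300, 400, 400))
print(p1.shape, p2.shape)  # (400, 400, 3) (540, 960, 3)
```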
・Patient video acquisition method 2
FIG. 13 is a diagram illustrating another example of a method of acquiring patient video.
In the example of FIG. 13, the face video P1 is captured by camera #11-1 and the whole-body video P2 is captured by camera #11-2. Camera #11-2 may be a relatively inexpensive camera with low resolution. The face video P1 captured by camera #11-1 and the whole-body video P2 captured by camera #11-2 are supplied to the video acquisition unit 11.
In this way, the face video and the whole-body video may be captured by different cameras.
<Improving the accuracy of facial expression feature extraction>
In the ICU, processing such as face recognition may end up being applied to equipment located around the patient. In addition, the patient's face may fail to be recognized, for example, because a respirator or the like is attached to the patient's face.
FIG. 14 is a diagram illustrating an example of extraction of facial expression feature amounts.
As shown in FIG. 14, the position of the patient's face may be identified based on the posture feature amounts extracted from the whole-body video P2, and the face video P1 may be generated based on the face position information.
In the example of FIG. 14, crop processing is performed so as to cut out the area indicated by the face position information, and a face video P1-1 in which the area around the patient's face appears large is generated. By extracting the facial expression feature amounts from the face video P1-1, highly robust and highly accurate extraction of the facial expression features becomes possible.
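A possible sketch of deriving the enlarged face crop P1-1 from skeletal landmarks is shown below. The landmark names and the margin factor are hypothetical; any keypoints that localize the head would serve the same purpose.

```python
# Assumed sketch: generate the enlarged face video frame P1-1 from the face
# position implied by posture (skeletal) landmarks such as nose and ears.
import numpy as np

def crop_face_from_landmarks(frame: np.ndarray, landmarks: dict,
                             margin: float = 1.5) -> np.ndarray:
    """Crop a square around the head using, e.g., nose and ear keypoints."""
    pts = np.array([landmarks[k] for k in ("nose", "left_ear", "right_ear")])
    cx, cy = pts.mean(axis=0)
    half = margin * max(np.ptp(pts[:, 0]), np.ptp(pts[:, 1]), 1.0) / 2.0
    h, w = frame.shape[:2]
    x0, x1 = int(max(0, cx - half)), int(min(w, cx + half))
    y0, y1 = int(max(0, cy - half)), int(min(h, cy + half))
    return frame[y0:y1, x0:x1]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
face = crop_face_from_landmarks(frame, {"nose": (960, 300),
                                        "left_ear": (920, 290),
                                        "right_ear": (1000, 290)})
print(face.shape)  # (120, 120, 3)
```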
<Display of appearance feature amounts>
When it is difficult to extract reliable facial expression feature amounts and posture feature amounts, the appearance feature amounts may be left undisplayed and, as shown in FIG. 15, the reason why they cannot be displayed may be shown instead.
In the example of FIG. 15, a message is displayed notifying the user that the appearance feature amounts cannot be displayed because the facial expression cannot be recognized. The message shown in FIG. 15 is displayed, for example, when an appearance feature amount with a high contribution rate of the facial expression feature amount cannot be displayed. When an appearance feature amount with a high contribution rate of the posture feature amount cannot be displayed, a message is displayed notifying the user that the appearance feature amount cannot be displayed because the posture cannot be recognized.
This allows a doctor viewing the display on the monitor 2 to know whether an appearance feature amount is not displayed because reliable facial expression feature amounts could not be extracted or because reliable posture feature amounts could not be extracted.
<Method of monitoring high-priority patients>
Monitoring may be performed according to priorities set as follows. For example, when multiple patients are monitored and the performance or functionality of the monitoring system is insufficient, monitoring is performed according to the priorities (a selection sketch follows the list below).
・Patients designated by a doctor as monitoring targets are monitored preferentially.
・Patients with large movements are automatically extracted and monitored preferentially.
・Patients with distressed facial expressions are automatically extracted and monitored preferentially.
・Patients with low respiration rates are automatically extracted and monitored preferentially.
・Patients are monitored according to a severity calculated from vital signs and various types of examination data.
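For illustration, a hypothetical priority score combining the criteria above might be computed as follows. The weights, thresholds, and field names are assumptions and not part of the embodiment.

```python
# Hypothetical sketch of priority-based selection when the system cannot
# monitor every patient at once. The scoring mirrors the bullet list above.

def priority_score(p: dict) -> float:
    score = 0.0
    if p.get("designated_by_doctor"):
        score += 100.0
    score += 10.0 * p.get("movement_amount", 0.0)     # large movements
    score += 20.0 * p.get("distress_level", 0.0)      # distressed expression
    if p.get("respiration_rate", 99) < 10:            # low respiration rate
        score += 50.0
    score += p.get("severity", 0.0)                   # from vitals / tests
    return score

def select_patients(patients: list, capacity: int) -> list:
    return sorted(patients, key=priority_score, reverse=True)[:capacity]

patients = [{"id": "A", "movement_amount": 2.0},
            {"id": "B", "designated_by_doctor": True},
            {"id": "C", "respiration_rate": 8}]
print([p["id"] for p in select_patients(patients, capacity=2)])  # ['B', 'C']
```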
<Protection of patient privacy>
From the viewpoint of protecting the patient's privacy, the facial expression feature amounts and the posture feature amounts may be extracted within the camera. In this case, the camera outputs only the feature amount information without outputting the patient video. The information processing device 1 then acquires the facial expression feature amounts and the posture feature amounts that were extracted by and transmitted from the camera.
So that the patient is not conscious of being monitored by the camera, a small hole may be formed in an object placed in the room where the patient is, and the patient video may be captured through that hole. A feature detection model adapted to low-resolution patient video may also be prepared.
This makes it possible for the patient monitoring described above to function even in places where cameras are not normally installed, such as general wards.
<Others>
Although the six types of feature amounts, namely the degree of distress, degree of sedation, mandibular breathing, shoulder breathing, seesaw breathing, and amount of body movement, are estimated in the above description, other types of feature amounts that can be estimated from the patient's appearance may also be estimated. A feature amount representing at least one of the patient's emotional state, the patient's state of consciousness, and the patient's respiratory state can be estimated as an appearance feature amount.
A specific behavior (amount of body movement) may also be counted based on the patient's posture, and the count result may be displayed as an appearance feature amount. For example, if the patient frequently touches his or her abdomen, information such as the duration and the number of times of touching is displayed on the monitor 2.
・About the program
The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed from a program recording medium into a computer incorporated in dedicated hardware, a general-purpose personal computer, or the like.
FIG. 16 is a block diagram showing a hardware configuration example of a computer that executes the series of processes described above by means of a program. The information processing device 1 has a configuration similar to that shown in FIG. 16.
A CPU (Central Processing Unit) 1001, a ROM (Read Only Memory) 1002, and a RAM (Random Access Memory) 1003 are interconnected by a bus 1004.
An input/output interface 1005 is further connected to the bus 1004. An input unit 1006, an output unit 1007, a storage unit 1008, a communication unit 1009, and a drive 1010 are connected to the input/output interface 1005. The drive 1010 drives a removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
In the computer configured as described above, the series of processes described above is performed, for example, by the CPU 1001 loading a program stored in the storage unit 1008 into the RAM 1003 via the input/output interface 1005 and the bus 1004 and executing it.
The program executed by the CPU 1001 is provided, for example, recorded on the removable medium 1011 or via a wired or wireless transmission medium such as a local area network, the Internet, or digital broadcasting, and is installed in the storage unit 1008.
Note that the program executed by the computer may be a program in which the processes are performed chronologically in the order described in this specification, or a program in which the processes are performed in parallel or at the necessary timing, such as when a call is made.
In this specification, a system means a collection of multiple components (devices, modules (parts), etc.), regardless of whether all the components are in the same housing. Therefore, multiple devices housed in separate housings and connected via a network, and a single device in which multiple modules are housed in one housing, are both systems.
The effects described in this specification are merely examples and are not limiting; other effects may also be obtained.
The embodiments of the present technology are not limited to the embodiments described above, and various modifications are possible without departing from the gist of the present technology.
For example, the present technology can adopt a cloud computing configuration in which one function is shared and processed jointly by multiple devices via a network.
Each step described in the above flowcharts can be executed by one device or shared and executed by multiple devices.
Furthermore, when one step includes multiple processes, the multiple processes included in that one step can be executed by one device or shared and executed by multiple devices.
・Examples of combinations of configurations
The present technology can also have the following configurations.
(1)
 A patient monitoring system comprising:
 an acquisition unit that acquires a facial expression feature amount, which is a feature amount related to a patient's face, and a posture feature amount, which is a feature amount related to the patient's posture, based on video showing the patient's appearance;
 an estimation unit that estimates the condition of the patient based on at least one of the facial expression feature amount and the posture feature amount; and
 an output unit that outputs an estimation result of the patient's condition.
(2)
 The patient monitoring system according to (1), wherein the estimation unit estimates the condition of the patient based on the posture feature amount when the reliability of the facial expression feature amount is lower than a threshold.
(3)
 The patient monitoring system according to (1), wherein the estimation unit estimates the condition of the patient based on the facial expression feature amount when the reliability of the posture feature amount is lower than a threshold.
(4)
 The patient monitoring system according to (1), wherein the estimation unit estimates the condition of the patient based on the posture feature amount when the orientation of the patient's face is not a predetermined orientation.
(5)
 The patient monitoring system according to any one of (1) to (4), wherein the estimation unit estimates, as the condition of the patient, at least one of the patient's emotional state, the patient's state of consciousness, and the patient's respiratory state.
(6)
 The patient monitoring system according to any one of (1) to (5), wherein the acquisition unit acquires the facial expression feature amount extracted from a facial image showing the patient's face, and acquires the posture feature amount extracted from a whole-body image showing the patient's skeleton.
(7)
 The patient monitoring system according to any one of (1) to (6), wherein the output unit outputs, together with the estimation result of the patient's condition, information on contribution rates representing the proportion in which each of the facial expression feature amount and the posture feature amount contributed to the estimation result of the patient's condition.
(8)
 The patient monitoring system according to any one of (1) to (7), wherein the estimation unit estimates the condition of the patient based on the facial expression feature amount and the posture feature amount weighted according to their respective reliabilities.
(9)
 The patient monitoring system according to (7) or (8), wherein the estimation unit estimates the condition of the patient based on the contribution rates corresponding to the condition set by a user as the monitoring target.
(10)
 The patient monitoring system according to any one of (7) to (9), wherein the estimation unit estimates the condition of the patient based on the contribution rates calculated based on attribute information of the patient.
(11)
 The patient monitoring system according to any one of (7) to (10), wherein the estimation unit estimates the condition of the patient based on the contribution rates corrected based on vital signs, which are information indicating biological reactions of the patient.
(12)
 The patient monitoring system according to any one of (6) to (11), wherein the acquisition unit acquires the facial expression feature amount and the posture feature amount extracted using an inference model generated by machine learning.
(13)
 The patient monitoring system according to any one of (1) to (12), wherein the estimation unit estimates the condition of the patient using an inference model generated by machine learning.
(14)
 The patient monitoring system according to any one of (1) to (13), further comprising a video acquisition unit that starts acquiring the video showing the appearance of the patient to be monitored in response to the patient being designated by a user as the monitoring target.
(15)
 The patient monitoring system according to any one of (1) to (14), wherein the acquisition unit acquires the facial expression feature amount and the posture feature amount that are extracted by a camera that captures the appearance of the patient and transmitted from the camera.
(16)
 The patient monitoring system according to any one of (1) to (15), wherein the estimation unit estimates a specific behavior of the patient based on the posture feature amount.
(17)
 A patient monitoring method in which a patient monitoring system:
 acquires a facial expression feature amount, which is a feature amount related to a patient's face, and a posture feature amount, which is a feature amount related to the patient's posture, based on video showing the patient's appearance;
 estimates the condition of the patient based on at least one of the facial expression feature amount and the posture feature amount; and
 outputs an estimation result of the patient's condition.
(18)
 A program for causing a computer to execute processing of:
 acquiring a facial expression feature amount, which is a feature amount related to a patient's face, and a posture feature amount, which is a feature amount related to the patient's posture, based on video showing the patient's appearance;
 estimating the condition of the patient based on at least one of the facial expression feature amount and the posture feature amount; and
 outputting an estimation result of the patient's condition.
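As a minimal, non-authoritative sketch of configurations (2), (3), and (8) above, the reliability-based fallback and reliability-weighted fusion could look as follows. Everything here is an assumption introduced for illustration: the function name, the 0.5 threshold, the equal feature dimensionality of the two modalities, and the interpretation of the returned weights as the contribution rates of configuration (7) are not specified by the text.

```python
import numpy as np

RELIABILITY_THRESHOLD = 0.5  # assumed value; no concrete threshold is given in the text


def fuse_appearance_features(expr_feat, expr_conf, pose_feat, pose_conf,
                             threshold=RELIABILITY_THRESHOLD):
    """Combine facial-expression and posture feature vectors.

    Falls back to a single modality when the other modality's reliability is
    below the threshold (configurations (2) and (3)); otherwise both are
    weighted by their normalized reliabilities (configuration (8)).
    For simplicity, both feature vectors are assumed to have the same length.
    """
    expr_feat = np.asarray(expr_feat, dtype=float)
    pose_feat = np.asarray(pose_feat, dtype=float)

    if expr_conf < threshold <= pose_conf:
        # Facial expression unreliable (e.g. the face is turned away): posture only.
        w_expr, w_pose = 0.0, 1.0
    elif pose_conf < threshold <= expr_conf:
        # Posture unreliable (e.g. the body is covered by bedding): expression only.
        w_expr, w_pose = 1.0, 0.0
    else:
        # Weight each modality by its normalized reliability.
        total = expr_conf + pose_conf
        w_expr, w_pose = (expr_conf / total, pose_conf / total) if total > 0 else (0.5, 0.5)

    fused = w_expr * expr_feat + w_pose * pose_feat
    # The weights also play the role of the contribution rates of configuration (7).
    return fused, {"expression": w_expr, "posture": w_pose}


# Example: the face is barely visible, so the posture features dominate the estimate.
fused, contribution = fuse_appearance_features([0.2, 0.1], 0.3, [0.7, 0.4], 0.9)
print(fused, contribution)
```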
 1 Information processing device, 2 Monitor, 11 Video acquisition unit, 12 Information processing unit, 13 Display control unit, 31 Facial expression recognition unit, 32 Posture recognition unit, 33 Dynamic contribution rate calculation unit, 34 Static contribution rate calculation unit, 35 Contribution rate determination unit, 36 Appearance feature calculation unit, 51 Correction unit
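Purely as an orienting sketch, the components in the legend above could be wired together as shown below; the class name, method names, and call order are hypothetical and only mirror the reference numerals, not an implementation disclosed in the text.

```python
class InformationProcessingUnit:  # corresponds to reference numeral 12
    """Hypothetical wiring of the components named in the legend above."""

    def __init__(self, expression_recognizer, posture_recognizer,
                 dynamic_rate_calculator, static_rate_calculator,
                 rate_determiner, appearance_feature_calculator):
        self.expression_recognizer = expression_recognizer                  # 31
        self.posture_recognizer = posture_recognizer                        # 32
        self.dynamic_rate_calculator = dynamic_rate_calculator              # 33
        self.static_rate_calculator = static_rate_calculator                # 34
        self.rate_determiner = rate_determiner                              # 35
        self.appearance_feature_calculator = appearance_feature_calculator  # 36

    def process(self, frame, patient_attributes, vital_signs=None):
        # Extract both appearance modalities from the current video frame.
        expression = self.expression_recognizer.extract(frame)
        posture = self.posture_recognizer.extract(frame)
        # Dynamic rates react to per-frame reliability; static rates reflect
        # patient attributes and the condition selected for monitoring.
        dynamic = self.dynamic_rate_calculator.compute(expression, posture)
        static = self.static_rate_calculator.compute(patient_attributes)
        # The determination step may also apply a vital-sign correction (numeral 51).
        rates = self.rate_determiner.decide(dynamic, static, vital_signs)
        return self.appearance_feature_calculator.estimate(expression, posture, rates)
```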

Claims (18)

  1. A patient monitoring system comprising:
     an acquisition unit that acquires a facial expression feature amount, which is a feature amount related to a patient's face, and a posture feature amount, which is a feature amount related to the patient's posture, based on video showing the patient's appearance;
     an estimation unit that estimates the condition of the patient based on at least one of the facial expression feature amount and the posture feature amount; and
     an output unit that outputs an estimation result of the patient's condition.
  2. The patient monitoring system according to claim 1, wherein the estimation unit estimates the condition of the patient based on the posture feature amount when the reliability of the facial expression feature amount is lower than a threshold.
  3. The patient monitoring system according to claim 1, wherein the estimation unit estimates the condition of the patient based on the facial expression feature amount when the reliability of the posture feature amount is lower than a threshold.
  4. The patient monitoring system according to claim 1, wherein the estimation unit estimates the condition of the patient based on the posture feature amount when the orientation of the patient's face is not a predetermined orientation.
  5. The patient monitoring system according to claim 1, wherein the estimation unit estimates, as the condition of the patient, at least one of the patient's emotional state, the patient's state of consciousness, and the patient's respiratory state.
  6. The patient monitoring system according to claim 1, wherein the acquisition unit acquires the facial expression feature amount extracted from a facial image showing the patient's face, and acquires the posture feature amount extracted from a whole-body image showing the patient's skeleton.
  7. The patient monitoring system according to claim 1, wherein the output unit outputs, together with the estimation result of the patient's condition, information on contribution rates representing the proportion in which each of the facial expression feature amount and the posture feature amount contributed to the estimation result of the patient's condition.
  8. The patient monitoring system according to claim 1, wherein the estimation unit estimates the condition of the patient based on the facial expression feature amount and the posture feature amount weighted according to their respective reliabilities.
  9. The patient monitoring system according to claim 7, wherein the estimation unit estimates the condition of the patient based on the contribution rates corresponding to the condition set by a user as the monitoring target.
  10. The patient monitoring system according to claim 7, wherein the estimation unit estimates the condition of the patient based on the contribution rates calculated based on attribute information of the patient.
  11. The patient monitoring system according to claim 7, wherein the estimation unit estimates the condition of the patient based on the contribution rates corrected based on vital signs, which are information indicating biological reactions of the patient.
  12. The patient monitoring system according to claim 6, wherein the acquisition unit acquires the facial expression feature amount and the posture feature amount extracted using an inference model generated by machine learning.
  13. The patient monitoring system according to claim 1, wherein the estimation unit estimates the condition of the patient using an inference model generated by machine learning.
  14. The patient monitoring system according to claim 1, further comprising a video acquisition unit that starts acquiring the video showing the appearance of the patient to be monitored in response to the patient being designated by a user as the monitoring target.
  15. The patient monitoring system according to claim 1, wherein the acquisition unit acquires the facial expression feature amount and the posture feature amount that are extracted by a camera that captures the appearance of the patient and transmitted from the camera.
  16. The patient monitoring system according to claim 1, wherein the estimation unit estimates a specific behavior of the patient based on the posture feature amount.
  17. A patient monitoring method in which a patient monitoring system:
     acquires a facial expression feature amount, which is a feature amount related to a patient's face, and a posture feature amount, which is a feature amount related to the patient's posture, based on video showing the patient's appearance;
     estimates the condition of the patient based on at least one of the facial expression feature amount and the posture feature amount; and
     outputs an estimation result of the patient's condition.
  18. A program for causing a computer to execute processing of:
     acquiring a facial expression feature amount, which is a feature amount related to a patient's face, and a posture feature amount, which is a feature amount related to the patient's posture, based on video showing the patient's appearance;
     estimating the condition of the patient based on at least one of the facial expression feature amount and the posture feature amount; and
     outputting an estimation result of the patient's condition.
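As a rough, hedged illustration of claims 7 and 11, a vital-sign-based correction of the contribution rates might look like the following sketch; the respiration-rate input, the normal range, the boost constant, and the renormalization rule are all assumptions made here for illustration, not details taken from the claims.

```python
def correct_contribution_rates(expression_rate, posture_rate,
                               respiration_rate=None,
                               normal_range=(12.0, 20.0), boost=0.2):
    """Return contribution rates nudged by a vital sign (illustrative only).

    Assumed rule: when the respiration rate falls outside an assumed normal
    range, posture-derived cues (chest and abdomen movement) are treated as
    more informative, so the posture contribution is boosted and the pair is
    renormalized to sum to one. Inputs are assumed to be positive.
    """
    if respiration_rate is not None:
        low, high = normal_range
        if not (low <= respiration_rate <= high):
            posture_rate += boost
    total = expression_rate + posture_rate
    return {"expression": expression_rate / total, "posture": posture_rate / total}


# Example: 8 breaths per minute is below the assumed normal range,
# so the posture contribution rises from 0.40 to 0.50.
print(correct_contribution_rates(0.6, 0.4, respiration_rate=8.0))
```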
PCT/JP2023/006124 2022-03-07 2023-02-21 Patient monitoring system, patient monitoring method, and program WO2023171356A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-034034 2022-03-07
JP2022034034 2022-03-07

Publications (1)

Publication Number Publication Date
WO2023171356A1 true WO2023171356A1 (en) 2023-09-14

Family

ID=87934967

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/006124 WO2023171356A1 (en) 2022-03-07 2023-02-21 Patient monitoring system, patient monitoring method, and program

Country Status (1)

Country Link
WO (1) WO2023171356A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016513517A (en) * 2013-03-14 2016-05-16 Koninklijke Philips N.V. Device and method for obtaining vital sign information of a subject
JP2017500942A (en) * 2013-12-16 2017-01-12 Medtronic MiniMed, Inc. Method and system for improving the reliability of orthogonal redundant sensors
CN108711452A (en) * 2018-01-25 2018-10-26 Ludong University Vision-based health state analysis method and system
JP2019524187A (en) * 2016-06-22 2019-09-05 Koninklijke Philips N.V. Method and apparatus for determining respiratory information of a subject
JP2020010865A (en) * 2018-07-19 2020-01-23 Honda Motor Co., Ltd. Driver state determining device and driver state determination method
JP2020120908A (en) * 2019-01-30 2020-08-13 Panasonic Intellectual Property Management Co., Ltd. Mental state estimation system, mental state estimation method, and program
CN113257440A (en) * 2021-06-21 2021-08-13 杭州金线连科技有限公司 ICU intelligent nursing system based on patient video identification
JP2021185999A (en) * 2020-05-26 2021-12-13 Shimadzu Corporation Physical ability presentation method, and physical ability presentation device


Similar Documents

Publication Publication Date Title
US20170311864A1 (en) Health care assisting device and health care assisting method
KR20070009478A (en) 3d anatomical visualization of physiological signals for online monitoring
CN114283494A (en) Early warning method, device, equipment and storage medium for user falling
WO2019013257A1 (en) Monitoring assistance system and method for controlling same, and program
TW201837901A (en) Emotion recognition device and emotion recognition program
JP2020120908A (en) Mental state estimation system, mental state estimation method, and program
CN108882853A (en) Measurement physiological parameter is triggered in time using visual context
JP7356849B2 (en) Monitoring system, monitoring method and storage medium
JP2019152914A (en) Nursing facility child watching system and information processing method
US20210225489A1 (en) Determining the likelihood of patient self-extubation
KR20210151742A (en) Method, apparatus and computer program for detecting shock occurrence through biometric image analysis of artificial intelligence model
WO2023171356A1 (en) Patient monitoring system, patient monitoring method, and program
WO2020203015A1 (en) Illness aggravation estimation system
WO2023189309A1 (en) Computer program, information processing method, and information processing device
US20220409120A1 (en) Information Processing Method, Computer Program, Information Processing Device, and Information Processing System
US20220254241A1 (en) Ai-based video tagging for alarm management
CN115349824A (en) Health early warning method and device, computer equipment and storage medium
US20210375462A1 (en) System and method utilizing software-enabled artificial intelligence for health monitoring and medical diagnostics
Siedel et al. Contactless interactive fall detection and sleep quality estimation for supporting elderly with incipient dementia
US20240127469A1 (en) Touchless volume waveform sampling to determine respiration rate
WO2023178957A1 (en) Vital sign monitoring method, related device, and computer-readable storage medium
US20220284739A1 (en) Method for generating learned model, system for generating learned model, prediction device, and prediction system
CN109195505B (en) Physiological measurement processing
WO2022224524A1 (en) Patient monitoring system
US20220386981A1 (en) Information processing system and information processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23766534

Country of ref document: EP

Kind code of ref document: A1