CN114639149B - Sick bed terminal with emotion recognition function - Google Patents

Sick bed terminal with emotion recognition function

Info

Publication number
CN114639149B
CN114639149B CN202210269101.3A
Authority
CN
China
Prior art keywords
image
unit
face
emotion recognition
infusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210269101.3A
Other languages
Chinese (zh)
Other versions
CN114639149A (en)
Inventor
赵明高
包建义
钱克勤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Huitian Technology Co ltd
Original Assignee
Hangzhou Huitian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Huitian Technology Co ltd filed Critical Hangzhou Huitian Technology Co ltd
Priority to CN202210269101.3A priority Critical patent/CN114639149B/en
Publication of CN114639149A publication Critical patent/CN114639149A/en
Application granted granted Critical
Publication of CN114639149B publication Critical patent/CN114639149B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/165Evaluating the state of mind, e.g. depression, anxiety
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61MDEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M5/00Devices for bringing media into the body in a subcutaneous, intra-vascular or intramuscular way; Accessories therefor, e.g. filling or cleaning devices, arm-rests
    • A61M5/14Infusion devices, e.g. infusing by gravity; Blood infusion; Accessories therefor
    • A61M5/168Means for controlling media flow to the body or for metering media to the body, e.g. drip meters, counters ; Monitoring media flow to the body
    • A61M5/16877Adjusting flow; Devices for setting a flow rate
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61MDEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M5/00Devices for bringing media into the body in a subcutaneous, intra-vascular or intramuscular way; Accessories therefor, e.g. filling or cleaning devices, arm-rests
    • A61M5/14Infusion devices, e.g. infusing by gravity; Blood infusion; Accessories therefor
    • A61M5/168Means for controlling media flow to the body or for metering media to the body, e.g. drip meters, counters ; Monitoring media flow to the body
    • A61M5/16886Means for controlling media flow to the body or for metering media to the body, e.g. drip meters, counters ; Monitoring media flow to the body for measuring fluid flow rate, i.e. flowmeters
    • A61M5/1689Drip counters
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61MDEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M5/00Devices for bringing media into the body in a subcutaneous, intra-vascular or intramuscular way; Accessories therefor, e.g. filling or cleaning devices, arm-rests
    • A61M5/14Infusion devices, e.g. infusing by gravity; Blood infusion; Accessories therefor
    • A61M5/168Means for controlling media flow to the body or for metering media to the body, e.g. drip meters, counters ; Monitoring media flow to the body
    • A61M5/172Means for controlling media flow to the body or for metering media to the body, e.g. drip meters, counters ; Monitoring media flow to the body electrical or electronic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/2431Multiple classes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61MDEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M2205/00General characteristics of the apparatus
    • A61M2205/18General characteristics of the apparatus with alarm
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61MDEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M2205/00General characteristics of the apparatus
    • A61M2205/33Controlling, regulating or measuring
    • A61M2205/3306Optical measuring means


Abstract

The invention discloses a hospital bed terminal with an emotion recognition function, comprising: a nursing calling module, an infusion monitoring module and an emotion recognition module. The emotion recognition module includes: an image acquisition unit for acquiring image information of a patient; an image recognition unit for recognizing the face area in the image information; an image cutting unit for cutting a human face ROI out of the face area; an image extraction unit for extracting a plurality of facial action units from the human face ROI; an image graying unit for graying the human face ROI to obtain a grayscale image; an edge extraction unit for extracting the face image edges from the grayscale image; and an emotion recognition unit for receiving and processing the facial action units, the grayscale image and the face image edges, and outputting an emotion category. The hospital bed terminal with the emotion recognition function can accurately recognize the patient's emotion, which assists doctors in treating the patient.

Description

Sick bed terminal with emotion recognition function
Technical Field
The invention relates to a sickbed terminal with an emotion recognition function.
Background
With the development of intelligent technology, more and more traditional devices are gaining intelligent functions. At present, most ward bedside terminals on the market integrate intelligent functions such as voice calling and early warning. However, existing ward bedside terminals have no emotion recognition function: they cannot sense the patient's emotional changes in real time, cannot promptly meet the needs of patients with expression disorders, and make it difficult to evaluate the patient's experience and satisfaction.
Disclosure of Invention
The invention provides a hospital bed terminal with an emotion recognition function, which solves the technical problems mentioned above by adopting the following technical scheme:
a hospital bed terminal having an emotion recognition function, comprising:
the nursing calling module is used for being operated by a patient to be connected to a nurse workstation and carrying out voice interaction;
the infusion monitoring module is used for monitoring the infusion condition of the patient;
an emotion recognition module for recognizing an emotional state of the patient;
the emotion recognition module includes:
the image acquisition unit is used for acquiring image information of a patient;
an image recognition unit for recognizing a face area in the image information;
an image cutting unit for cutting out a human face ROI from the face region;
the image extraction unit is used for extracting a plurality of face activity units from the human face ROI;
the image graying unit is used for performing graying processing on the human face ROI to obtain a grayscale image;
the edge extraction unit is used for extracting the edges of the face image from the gray level image;
and the emotion recognition unit is used for receiving the face activity unit, the gray level image and the edges of the face image, processing the face activity unit, the gray level image and the edges of the face image and outputting emotion categories.
Further, the emotion recognition unit includes:
the first splicing subunit is used for splicing the gray image and the edge of the face image;
the VGG19 convolution neural network unit is used for receiving the splicing result of the first splicing subunit and extracting features to obtain emotion features;
the characteristic processing unit is used for flattening the emotional characteristics extracted by the VGG19 convolutional neural network unit into a 1-dimensional array;
the second splicing subunit is used for splicing the emotional characteristics processed by the facial activity unit and the characteristic processing unit;
and the classifier unit is used for receiving the splicing result of the second splicing subunit and processing the splicing result to obtain the emotion category.
Further, the classifier unit is composed of two fully connected layers and one ReLU activation function layer.
Further, the image extraction unit extracts 17 face motion units from the human face ROI by using an OpenFace tool, where the 17 face motion units are AU01, AU02, AU04, AU05, AU06, AU07, AU09, AU10, AU12, AU14, AU15, AU17, AU20, AU23, AU25, AU26, and AU45, respectively.
Further, the image graying sub-module grays the human face ROI by the following formula to obtain a grayscale image:
Gray=0.299*R+0.587*G+0.114*B
where R, G and B are respectively the red, green and blue channels of the RGB image, and Gray is the grayed image.
Further, the edge extraction sub-module extracts the face image edges from the grayscale image through the Canny algorithm, with the upper and lower thresholds set to 100 and 50, respectively.
Further, the emotion recognition module further includes:
and the visual display unit is used for displaying the face image of the patient and the emotion type obtained by the analysis of the emotion recognition unit.
Further, the infusion monitoring module comprises:
the dripping speed monitoring unit is used for monitoring the infusion equipment to judge the current infusion dripping speed;
the intelligent early warning unit is used for judging whether the current infusion dripping speed is proper or not according to the infusion dripping speed detected by the dripping speed monitoring module and the type of the current infusion liquid medicine;
and the whole-process detection unit is used for calculating in real time according to the infusion dropping speed detected by the dropping speed monitoring module and the type of the current infusion liquid medicine to obtain the capacity and the remaining time of the remaining liquid medicine.
Further, the infusion monitoring module further comprises:
and an automatic cutting unit for automatically cutting off the supply of the chemical liquid when the remaining time reaches a threshold value.
Further, the infusion monitoring module sends the infusion dripping speed, the remaining liquid medicine volume and the remaining time to the nurse station server in real time;
the nurse station server sends the received information to a nurse workstation large screen for display;
the nurse station server sends an early warning signal to the nurse station early warning equipment when the remaining time reaches a threshold value;
and the early warning equipment of the nurse station sends out an early warning signal.
The hospital bed terminal with the emotion recognition function of the present invention has the beneficial effect that it can accurately recognize the patient's emotion, which assists the doctor in treating the patient.
It further has the advantages that the face edges and the grayscale face image are fused by data-level fusion, guiding the network to extract image contour features, and that the facial action units are fused with the high-level features automatically extracted by the neural network by feature-level fusion; combining prior knowledge with high-level features improves the reliability of the emotion recognition algorithm.
Drawings
Fig. 1 is a schematic diagram of emotion recognition performed by an emotion recognition module of a hospital bed terminal having an emotion recognition function according to the present invention;
fig. 2 is a schematic diagram of a network structure of an emotion recognition unit of the present invention;
fig. 3 is a display schematic of the visual display unit of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and the embodiments.
This application discloses a hospital bed terminal with an emotion recognition function, mainly comprising: a nursing calling module, an infusion monitoring module and an emotion recognition module. The nursing calling module is operated by the patient to connect to the nurse workstation and carry out voice interaction. The infusion monitoring module monitors the patient's infusion condition. The emotion recognition module recognizes the patient's emotional state.
The emotion recognition module comprises: an image acquisition unit, an image recognition unit, an image cutting unit, an image extraction unit, an image graying unit, an edge extraction unit and an emotion recognition unit. The emotion recognition module is responsible for real-time monitoring of the emotional state: by processing and analyzing the video images captured by the visible-light camera, it monitors the patient's emotional state over seven emotion categories in real time, and displays the result to the patient through the visual interface module. The emotion recognition procedure of the emotion recognition module is shown in Fig. 1.
Specifically, the image acquisition unit is used for acquiring image information of the patient. In this application, the image acquisition unit is a visible-light camera capable of real-time video capture. The video data is cut by time frame into single-frame images. The image recognition unit recognizes the face area in the image information. The image cutting unit cuts a human face ROI (region of interest) out of the face area. The image extraction unit extracts a plurality of facial action units from the human face ROI. Specifically, the image extraction unit extracts 17 facial action units from the face ROI using the OpenFace tool: AU01, AU02, AU04, AU05, AU06, AU07, AU09, AU10, AU12, AU14, AU15, AU17, AU20, AU23, AU25, AU26 and AU45.
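OpenFace's FeatureExtraction tool writes per-frame AU intensity columns (AU01_r … AU45_r) to a CSV file. The patent does not describe how the intensities are read, so the following is only an illustrative sketch that collects the 17 intensities listed above from such a CSV row; the sample string is hypothetical stand-in data, not real OpenFace output.

```python
import csv
import io

# The 17 action units named above, as OpenFace "_r" (intensity) column IDs.
AU_IDS = ["01", "02", "04", "05", "06", "07", "09", "10", "12",
          "14", "15", "17", "20", "23", "25", "26", "45"]

def read_au_intensities(csv_text):
    """Return the 17 AU intensities from the first data row of an
    OpenFace-style CSV (order matches AU_IDS)."""
    reader = csv.DictReader(io.StringIO(csv_text), skipinitialspace=True)
    row = next(reader)
    return [float(row["AU%s_r" % au]) for au in AU_IDS]

# Hypothetical stand-in for one row of OpenFace FeatureExtraction output.
sample = ("frame, " + ", ".join("AU%s_r" % au for au in AU_IDS) + "\n"
          "1, " + ", ".join(["0.5"] * 17))
print(read_au_intensities(sample))  # 17 intensity values
```

The resulting 17-element vector is what the second splicing subunit later concatenates with the flattened VGG19 features.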
The image graying unit performs graying processing on the human face ROI through the following formula to obtain a grayscale image:
Gray=0.299*R+0.587*G+0.114*B
where R, G and B are respectively the red, green and blue channels of the RGB image, and Gray is the grayed image.
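The weighting above is the standard ITU-R BT.601 luma formula. A minimal, dependency-free sketch of applying it per pixel:

```python
def to_gray(r, g, b):
    """Gray = 0.299*R + 0.587*G + 0.114*B (ITU-R BT.601 luma weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def gray_image(rgb_pixels):
    """Convert an iterable of (R, G, B) pixels to a list of gray values."""
    return [to_gray(r, g, b) for r, g, b in rgb_pixels]

# Pure red, green and blue map to roughly 76.2, 149.7 and 29.1:
print(gray_image([(255, 0, 0), (0, 255, 0), (0, 0, 255)]))
```

Since the three weights sum to 1, a neutral pixel (equal R, G, B) keeps its value after conversion.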
The edge extraction unit extracts the face image edges from the grayscale image using the Canny algorithm, with the upper and lower thresholds set to 100 and 50, respectively.
The emotion recognition unit receives the facial action units, the grayscale image and the face image edges, processes them, and outputs an emotion category. Fig. 2 shows the network structure of the emotion recognition unit of the present application. Specifically, the emotion recognition unit includes: a first splicing subunit, a VGG19 convolutional neural network unit, a feature processing unit, a second splicing subunit and a classifier unit.
The first splicing subunit splices the grayscale image with the face image edges. The VGG19 convolutional neural network unit receives the splicing result of the first splicing subunit and extracts features to obtain emotion features. The feature processing unit flattens the emotion features extracted by the VGG19 convolutional neural network unit into a 1-dimensional array. The second splicing subunit splices the facial action units with the emotion features processed by the feature processing unit. The classifier unit receives the splicing result of the second splicing subunit and processes it to obtain the emotion category. The classifier unit consists of two fully connected layers and one ReLU activation function layer.
Specifically, as shown in the following formulas:
F_vgg = VGG([Gray : Edge])
F_union = [flatten(F_vgg) : AU]
F_c = Linear(ReLU(Linear(F_union; θ_2, b_2)); θ_1, b_1)
result = SoftMax(F_c)
where Gray ∈ R^(1×W×H) is the grayed face image, Edge ∈ R^(1×W×H) is the face image edge extracted by Canny, and [Gray : Edge] ∈ R^(2×W×H) is the channel-level concatenation of Gray and Edge. The face edge detection result is a binary image whose pixels correspond one-to-one with the original grayscale image: edge pixels take the value 1, all others 0, and channel-level concatenation stacks the edge map onto the original image pixel by pixel. This step fuses the prior knowledge of the face contour into the input, and the concatenated data is fed into the VGG19 network. VGG19 is a classic convolutional neural network structure in which stacks of 3×3 convolution kernels replace larger kernels (three 3×3 layers in place of a 7×7, two 3×3 layers in place of a 5×5); this deepens the network while keeping the same receptive field as the large kernels, improving the network's feature extraction, and through training the VGG19 network automatically extracts features relevant to emotion recognition. The features extracted by the VGG19 network are denoted F_vgg ∈ R^(c×w×h). flatten() is a flattening function that turns the VGG19 features into a 1-dimensional array flatten(F_vgg) ∈ R^(c·w·h). The flattened features are concatenated with the facial action unit intensity array AU ∈ R^17 extracted with OpenFace (this step splices two one-dimensional feature vectors into one longer one-dimensional vector), giving F_union ∈ R^(c·w·h+17). This integrates the high-level features extracted by the VGG19 network with the prior knowledge of the facial action units (AUs).
θ_1, θ_2 are the fully connected layer weights and b_1, b_2 the fully connected layer bias terms; the classifier input is F_union and its output is F_c, which after the SoftMax() function yields a seven-class emotion result. The network is trained with a cross-entropy loss, and the model pipeline is shown in Fig. 2.
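The fusion-and-classification arithmetic just described can be sketched in NumPy: flatten the VGG19 feature map, concatenate the 17 AU intensities, then apply two fully connected layers with a ReLU between them and a softmax on top. All dimensions and weights here are toy random values for illustration; the real model would use trained VGG19 features and learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def classify(f_vgg, au, theta2, b2, theta1, b1):
    """F_union = [flatten(F_vgg) : AU];
    F_c = Linear(ReLU(Linear(F_union; theta2, b2)); theta1, b1)."""
    f_union = np.concatenate([f_vgg.ravel(), au])   # length c*w*h + 17
    hidden = np.maximum(0.0, theta2 @ f_union + b2) # inner FC layer + ReLU
    f_c = theta1 @ hidden + b1                      # outer FC layer -> 7 logits
    return softmax(f_c)                             # seven-class result

# Toy dimensions: a 4x3x3 feature map, hidden width 16, 7 emotion classes.
c, w, h, hid, n_cls = 4, 3, 3, 16, 7
f_vgg = rng.standard_normal((c, w, h))
au = rng.random(17)                                 # 17 AU intensities
theta2 = 0.1 * rng.standard_normal((hid, c * w * h + 17))
b2 = np.zeros(hid)
theta1 = 0.1 * rng.standard_normal((n_cls, hid))
b1 = np.zeros(n_cls)
probs = classify(f_vgg, au, theta2, b2, theta1, b1)
print(probs.shape)  # (7,) probabilities summing to 1
```

The concatenated input has length c·w·h + 17, matching the F_union dimension above; the argmax of `probs` would be the predicted emotion category.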
As a preferred embodiment, the emotion recognition module further includes: and a visual display unit.
The visual display unit displays the patient's face image and the emotion category obtained from the emotion recognition unit's analysis. As shown in Fig. 3, the whole screen displays the camera feed, guiding the patient to keep the face within the camera's field of view. The real-time expression classification result is displayed on the left side of the interface, with the probability of each category represented as a histogram. The right side of the interface displays stage statistics: the data is stored in the cloud, statistics are computed every 10 seconds, and the result is represented as a radar chart.
As a preferred embodiment, the infusion monitoring module comprises: the system comprises a dripping speed monitoring unit, an intelligent early warning unit and a whole-course detection unit.
The dripping speed monitoring unit monitors the infusion equipment to determine the current infusion dripping speed. Specifically, it uses infrared detection: the dripping speed is accurately measured from the change in infrared light intensity as drops of infusion liquid fall. The dripping speed monitoring unit preferably uses wireless charging, which effectively avoids poor charging contact caused by liquid medicine contamination during infusion.
The intelligent early warning unit judges whether the current infusion dripping speed is appropriate according to the dripping speed detected by the dripping speed monitoring unit and the type of liquid medicine currently being infused. It intelligently determines a reasonable dripping speed range for the infusion medicine and automatically raises a warning when the dripping speed goes out of that range.
The whole-process detection unit calculates in real time, from the dripping speed detected by the dripping speed monitoring unit and the type of liquid medicine currently being infused, the remaining liquid medicine volume and the remaining time.
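The patent does not give the remaining-time calculation itself; the following is a minimal sketch under the assumption of a macro-drip infusion set calibrated at 20 drops/mL (a common calibration, not stated in the text).

```python
def remaining_time_minutes(remaining_ml, drops_per_min, drops_per_ml=20):
    """Estimate remaining infusion time from the remaining volume and the
    measured dripping speed. drops_per_ml=20 is an assumed macro-drip
    set calibration, not a value from the patent."""
    if drops_per_min <= 0:
        raise ValueError("dripping speed must be positive")
    return remaining_ml * drops_per_ml / drops_per_min

# 100 mL left at 40 drops/min with a 20 drops/mL set:
print(remaining_time_minutes(100, 40))  # 50.0 minutes
```

In practice the drop factor would come from the liquid-medicine type mentioned above, since different infusion sets and fluids drip at different calibrations.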
As a preferred embodiment, the infusion monitoring module further comprises: an automatic cutting unit.
The automatic cutting unit automatically cuts off the supply of liquid medicine when the remaining time reaches a threshold. It automatically detects and judges the empty-bottle state of the infusion, and the active protection device automatically clamps off the infusion tube, effectively preventing situations such as blood backflow.
As a preferred embodiment, the infusion monitoring module sends the infusion dripping speed, remaining liquid medicine volume and remaining time to the nurse station server in real time. The nurse station server sends the received information to the nurse workstation's large screen for display. When the remaining time reaches a threshold, the nurse station server sends an early warning signal to the nurse station early warning equipment, which then raises the alarm.
Specifically, real-time infusion information for the whole ward can be displayed in summary on the nurse station's large screen, including the infusion dripping speed, remaining volume and remaining time. Ward infusion reminders can be broadcast on the large screen in real time, for example: bed 09 dripping too fast, bed 07 infusion complete. Meanwhile, ward infusion data can be stored securely, including patient information, drug details, time, dripping speed and alarms. Intelligent retrieval of infusion data by patient, bed, time and so on is supported, as are infusion data storage, statistics and analysis; infusion big-data analysis can effectively improve working efficiency and management level.
The foregoing shows and describes the general principles, principal features and advantages of the invention. It should be understood by those skilled in the art that the above embodiments do not limit the present invention in any way, and all technical solutions obtained by using equivalents or equivalent changes fall within the protection scope of the present invention.

Claims (3)

1. A hospital bed terminal with emotion recognition function, comprising:
the nursing calling module is used for being operated by a patient to be connected to a nurse workstation and carrying out voice interaction;
the infusion monitoring module is used for monitoring the infusion condition of the patient;
an emotion recognition module for recognizing an emotional state of the patient;
the emotion recognition module includes:
the image acquisition unit is used for acquiring image information of a patient;
an image recognition unit configured to recognize a face area in the image information;
an image cutting unit for cutting out a human face ROI from the face region;
the image extraction unit is used for extracting a plurality of face activity units from the human face ROI;
the image graying unit is used for performing graying processing on the human face ROI to obtain a grayscale image;
the edge extraction unit is used for extracting the edges of the face images from the gray level images;
the emotion recognition unit is used for receiving the facial activity unit, the gray level image and the human face image edge and processing the facial activity unit, the gray level image and the human face image edge so as to output emotion categories;
the emotion recognition unit includes:
the first splicing subunit is used for splicing the gray image and the edge of the face image;
the VGG19 convolutional neural network unit is used for receiving the splicing result of the first splicing subunit and extracting features to obtain emotional features;
the characteristic processing unit is used for flattening the emotional characteristics extracted by the VGG19 convolutional neural network unit into a 1-dimensional array;
the second splicing subunit is used for splicing the facial activity unit and the emotion characteristics processed by the characteristic processing unit;
the classifier unit is used for receiving the splicing result of the second splicing subunit and processing the splicing result to obtain the emotion category;
the classifier unit consists of two full connection layers and a ReLU activation function layer;
the image extraction unit extracts 17 face motion units from the human face ROI by using an OpenFace tool, wherein the 17 face motion units are AU01, AU02, AU04, AU05, AU06, AU07, AU09, AU10, AU12, AU14, AU15, AU17, AU20, AU23, AU25, AU26 and AU45 respectively;
the image graying submodule carries out graying processing on the human face ROI through the following formula to obtain a grayscale image:
Gray=0.299*R+0.587*G+0.114*B
wherein, R, G and B are respectively red, green and blue channels of an RGB image, and Gray is a grayed image;
the emotion recognition module further includes:
the visual display unit is used for displaying the face image of the patient and the emotion category obtained by the analysis of the emotion recognition unit;
the infusion monitoring module comprises:
the dripping speed monitoring unit is used for monitoring the infusion equipment to judge the current infusion dripping speed;
the intelligent early warning unit is used for judging whether the current infusion dripping speed is proper or not according to the infusion dripping speed detected by the dripping speed monitoring module and the type of the current infusion liquid medicine;
the whole-process detection unit is used for calculating in real time according to the infusion dropping speed detected by the dropping speed monitoring module and the type of the current infusion liquid medicine to obtain the capacity and the remaining time of the remaining liquid medicine;
the infusion monitoring module further comprises:
and an automatic cutting unit for automatically cutting off the supply of the chemical liquid when the remaining time reaches a threshold value.
2. The patient bed terminal with emotion recognition function according to claim 1,
the edge extraction submodule extracts the edges of the face image from the gray-scale image through a canny algorithm, and in the process that the edge extraction submodule extracts the edges of the face image from the gray-scale image through the canny algorithm, the upper threshold value and the lower threshold value are respectively set to be 100 and 50.
3. The sick bed terminal with emotion recognition function according to claim 1, wherein
the infusion monitoring module sends the infusion drip rate, the remaining liquid medicine volume and the remaining time in real time to a back-end nurse station server;
the nurse station server forwards the received information to the nurse workstation's large screen for display;
when the remaining time reaches a threshold value, the nurse station server sends an early-warning signal to the nurse station's warning device; and
the nurse station's warning device emits an alert.
CN202210269101.3A 2022-03-18 2022-03-18 Sick bed terminal with emotion recognition function Active CN114639149B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210269101.3A CN114639149B (en) 2022-03-18 2022-03-18 Sick bed terminal with emotion recognition function

Publications (2)

Publication Number Publication Date
CN114639149A CN114639149A (en) 2022-06-17
CN114639149B true CN114639149B (en) 2023-04-07

Family

ID=81950063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210269101.3A Active CN114639149B (en) 2022-03-18 2022-03-18 Sick bed terminal with emotion recognition function

Country Status (1)

Country Link
CN (1) CN114639149B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592063A (en) * 2012-03-25 2012-07-18 Hebei Pukang Medical Equipment Co., Ltd. Digitalized information management system for nursing stations in hospitals and method for realizing same
CN110516593A (en) * 2019-08-27 2019-11-29 BOE Technology Group Co., Ltd. Emotion prediction device, emotion prediction method and display device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202600707U (en) * 2012-03-25 2012-12-12 Hebei Pukang Medical Equipment Co., Ltd. Digital information management system used in hospital nurse station and nursing bed
CN110020582B (en) * 2018-12-10 2023-11-24 Ping An Technology (Shenzhen) Co., Ltd. Face emotion recognition method, device, equipment and medium based on deep learning
CN109460749A (en) * 2018-12-18 2019-03-12 Shenzhen OneConnect Smart Technology Co., Ltd. Patient monitoring method, device, computer equipment and storage medium
CN110188615B (en) * 2019-04-30 2021-08-06 Institute of Computing Technology, Chinese Academy of Sciences Facial expression recognition method, device, medium and system
CN112329683B (en) * 2020-11-16 2024-01-26 Changzhou University Multi-channel convolutional neural network facial expression recognition method
CN113989890A (en) * 2021-10-29 2022-01-28 Henan University of Science and Technology Facial expression recognition method based on multi-channel fusion and lightweight neural network

Also Published As

Publication number Publication date
CN114639149A (en) 2022-06-17

Similar Documents

Publication Publication Date Title
CN108172291B (en) Diabetic retinopathy recognition system based on fundus images
US10799168B2 (en) Individual data sharing across a social network
US7418116B2 (en) Imaging method and system
CN111292845B (en) Intelligent nursing interaction system for intelligent ward
US7319780B2 (en) Imaging method and system for health monitoring and personal security
CN110477925A Fall detection and early-warning method and system for elderly people in nursing homes
EP3721320B1 (en) Communication methods and systems
CN102760200A (en) Intelligent cloud health system
US20170105668A1 (en) Image analysis for data collected from a remote computing device
CN110264443A (en) Eye fundus image lesion mask method, device and medium based on feature visualization
CN107247945A Ward patient monitoring system and monitoring method based on Kinect device
JP2019526416A (en) Retina imaging apparatus and system having edge processing
CN106027663A (en) ICU nursing monitor system based on data sharing system of medical system
JP2018163644A (en) Bed exit monitoring system
CN114639149B (en) Sick bed terminal with emotion recognition function
CN114067236A (en) Target person information detection device, detection method and storage medium
CN209347003U Intelligent health condition detection system
CN109480775A Artificial-intelligence-based neonatal jaundice identification device, equipment and system
CN117038027A (en) Nurse station information management system
CN111150369A (en) Medical assistance apparatus, medical assistance detection apparatus and method
CN115329128A (en) Data processing method and device suitable for nutrition management system
CN114092974A (en) Identity recognition method, device, terminal and storage medium
CN114283948A (en) Child liver disease continuous nursing method, system and storage medium
Karacs et al. Bionic eyeglass: The first prototype A personal navigation device for visually impaired-A review
CN117315787B (en) Infant milk-spitting real-time identification method, device and equipment based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant