CN111227789A - Human health monitoring method and device - Google Patents
- Publication number
- CN111227789A (application number CN201811445238.XA)
- Authority
- CN
- China
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
Abstract
Embodiments of the present application disclose a human health monitoring method and device. One embodiment of the method comprises: acquiring a facial image of a monitored user; detecting the facial image with a trained health state detection model to obtain a health state detection result for the monitored user; and executing a preset monitoring operation corresponding to that detection result. The embodiment achieves contact-free detection of human health states, enables rapid monitoring action based on the detection result, and broadens the application range of health monitoring.
Description
Technical Field
Embodiments of the present application relate to the field of computer technology, in particular to artificial intelligence, and specifically to a human health monitoring method and device.
Background
Face recognition is a contact-free biometric identification technology that has been applied in various identity authentication scenarios, such as access control systems, security monitoring, and social security management. In medical care, face recognition can also be applied to rights management, such as controlling personnel access to an intensive care unit. Beyond identity information, however, a human face also carries expressive features formed by the body's facial response to external stimuli or to changes in physiological state.
In current medical monitoring, medical sensors are usually employed to collect physiological parameters of the human body and transmit them to a monitoring center, which can raise an alarm when a parameter falls outside its normal range. Most such medical sensors are contact-based and must be worn on the user's body. In scenarios where the user is not wearing a sensor, such as during a sudden illness, the user's health state parameters cannot be acquired in real time, and the user cannot be rescued promptly.
Disclosure of Invention
Embodiments of the present application provide a human health monitoring method and device.
In a first aspect, an embodiment of the present application provides a human health monitoring method, including: acquiring a facial image of a monitored user; detecting the facial image of the monitored user with a trained health state detection model to obtain a health state detection result for the monitored user; and executing a preset monitoring operation corresponding to the health state detection result of the monitored user.
In some embodiments, the detecting the facial image of the monitored user by using the trained health status detection model to obtain the health status detection result of the monitored user includes: inputting the facial image of the monitored user into a feature extraction network in a trained health state detection model to extract facial expression features of the monitored user; and classifying the facial expression features by using the recognition network in the trained health state detection model to obtain a health state detection result.
In some embodiments, executing the preset monitoring operation corresponding to the health status detection result of the monitored user includes: determining a rescue demand index of the monitored user according to the health state detection result; and determining, according to a preconfigured correspondence between preset monitoring operations and preset rescue demand indexes, a target monitoring operation corresponding to the monitored user's rescue demand index, and executing the target monitoring operation.
In some embodiments, the above method further comprises: training on a sample facial image set to obtain the trained health state detection model, wherein the sample facial image set comprises sample facial images and the corresponding users' health state labeling information. The training comprises: constructing a health state detection model to be trained based on a neural network, and inputting the sample facial image set into it to obtain health state prediction results for the set; and iteratively adjusting the parameters of the model to be trained based on a preset loss function until the value of the loss function meets a preset convergence condition, wherein the value of the loss function characterizes the degree to which the health state prediction results for the sample facial image set deviate from the corresponding health state labeling information.
In some embodiments, the above method further comprises: and generating a monitoring record of the monitored user based on the health state detection result of the monitored user and the executed preset monitoring operation.
In a second aspect, an embodiment of the present application provides a human health monitoring device, including: an acquisition unit configured to acquire a face image of a monitored user; the detection unit is configured to detect the face image of the monitored user by using the trained health state detection model to obtain a health state detection result of the monitored user; and the monitoring unit is configured to execute preset monitoring operation corresponding to the health state detection result of the monitored user.
In some embodiments, the detecting unit is further configured to detect, by using the trained health status detection model, a facial image of the monitored user as follows, and obtain a health status detection result of the monitored user: inputting the facial image of the monitored user into a feature extraction network in a trained health state detection model to extract facial expression features of the monitored user; and classifying the facial expression features by using the recognition network in the trained health state detection model to obtain a health state detection result.
In some embodiments, the monitoring unit is further configured to execute the preset monitoring operation corresponding to the health status detection result of the monitored user as follows: determining a rescue demand index of the monitored user according to the health state detection result; and determining, according to a preconfigured correspondence between preset monitoring operations and preset rescue demand indexes, a target monitoring operation corresponding to the monitored user's rescue demand index, and executing the target monitoring operation.
In some embodiments, the above apparatus further comprises a training unit configured to train on a sample facial image set to obtain the trained health state detection model, wherein the sample facial image set comprises sample facial images and the corresponding users' health state labeling information. The training unit trains the model as follows: constructing a health state detection model to be trained based on a neural network, and inputting the sample facial image set into it to obtain health state prediction results for the set; and iteratively adjusting the parameters of the model to be trained based on a preset loss function until the value of the loss function meets a preset convergence condition, wherein the value of the loss function characterizes the degree to which the health state prediction results for the sample facial image set deviate from the corresponding health state labeling information.
In some embodiments, the above apparatus further comprises: and the recording unit is configured to generate a monitoring record of the monitored user based on the health state detection result of the monitored user and the executed preset monitoring operation.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of human health monitoring as provided in the first aspect.
In a fourth aspect, the present application provides a computer-readable medium, on which a computer program is stored, where the program, when executed by a processor, implements the human health monitoring method provided in the first aspect.
According to the human health monitoring method and device of the embodiments of the present application, a facial image of a monitored user is acquired, the trained health state detection model detects that image to obtain a health state detection result, and a preset monitoring operation corresponding to that result is executed. This achieves contact-free detection of human health states, allows monitoring action to be taken quickly based on the detection result, and broadens the application range of health monitoring.
Drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings, in which:
FIG. 1 is an exemplary system architecture diagram to which embodiments of the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a method of human health monitoring according to the present application;
FIG. 3 is a flow chart of another embodiment of a method of human health monitoring according to the present application;
FIG. 4 is a flow chart of yet another embodiment of a method of human health monitoring according to the present application;
FIG. 5 is a schematic diagram of an embodiment of the human health monitoring device of the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing an electronic device according to embodiments of the present application.
Detailed Description
The present application will be described in further detail below with reference to the drawings and embodiments. It is to be understood that the specific embodiments described here merely illustrate the relevant invention and do not restrict it. It should also be noted that, for convenience of description, only the portions related to the invention are shown in the drawings.
It should be noted that the embodiments in the present application, and the features within them, may be combined with each other in the absence of conflict. The present application will be described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the method or apparatus of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include an image capturing device 101, terminal devices 102 and 103, a network 104, and a server 105. The network 104 provides the communication link between the image capturing device 101 and the server 105, and between the server 105 and the terminal devices 102 and 103. The network may include various connection types, such as wired links, wireless links, or fiber-optic cables.
The image capturing device 101 may capture a facial image of a user within its imaging range and transmit it to the server 105 through the network 104. The image capturing device 101 may be, for example, any of various surveillance cameras, or a mobile electronic device that includes a camera, such as a mobile phone, tablet computer, or smart watch.
The server 105 may be a server that provides monitoring services. The server 105 may receive the face image acquired by the image acquisition device 101, analyze the health condition of the user based on the face image, and send a rescue operation instruction to the corresponding terminal device 102, 103 according to the analysis result.
The terminal devices 102 and 103 may exchange data with the server 105 through the network 104, receive instructions sent by the server 105, and perform the corresponding operations. The terminal devices 102 and 103 may be medical care devices, such as alarm lights or ward pagers; they may also be electronic devices such as a mobile phone, tablet computer, desktop computer, or smart watch.
The human health monitoring method provided by the embodiments of the present application may be executed by the server 105; accordingly, the human health monitoring device may be disposed in the server 105. In some scenarios, another electronic device connected to the image capturing device 101 via the network may have a processor suited to complex computation (e.g., a GPU), in which case the human health monitoring apparatus provided by the embodiments of the present application may be implemented by that device instead.
It should be understood that the numbers of image capturing devices, terminal devices, networks, and servers in fig. 1 are merely illustrative. Any number of each may be present, as required by the implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method of human health monitoring according to the present application is shown. The human health monitoring method comprises the following steps:
Step 201, acquiring a facial image of a monitored user.

In this embodiment, the execution subject of the human health monitoring method (for example, the server shown in fig. 1) may be connected to the image capturing device and acquire the facial image of the monitored user captured by that device. Here, the monitored user may be a preset user in the scene, such as a patient in a hospital ward, an elderly resident of a nursing home, a family member in a house, or a staff member in an office building. Alternatively, instead of presetting monitored users, all people within the imaging range of the image capturing device may be treated as monitored users.
In practice, facial images captured by cameras in scenes such as homes, hospitals, and nursing homes may be acquired; these images may contain the faces of one or more monitored users.
In some optional implementations of this embodiment, the execution subject may determine face regions in the image captured by the image capturing device and extract the monitored user's facial image from them. For example, when the captured image contains multiple faces, a face detection method may be used to locate the face regions of different users, and each region may be segmented from the original image to form that user's facial image. Further, when monitored users are preset, face recognition may be performed on the extracted facial images to determine whether the user to whom each face belongs is a preset monitored user; if so, that user's facial image is taken as the acquired facial image of the monitored user.
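The detect-crop-match flow just described can be sketched in a few lines of Python. The detector and matcher below (`detect_faces`, `is_monitored_user`) are hypothetical stand-ins for a real face detector and face-recognition matcher, and the "image" is a toy 2D list rather than real pixel data:

```python
def detect_faces(image):
    """Hypothetical detector: returns face bounding boxes (top, left, bottom, right).

    A real system would run a cascade or CNN-based detector here; fixed
    boxes are returned only so the sketch is self-contained.
    """
    return [(0, 0, 2, 2), (1, 2, 3, 4)]


def crop(image, box):
    # Segment a face region out of the original image.
    top, left, bottom, right = box
    return [row[left:right] for row in image[top:bottom]]


def is_monitored_user(face_image, enrolled_faces):
    """Hypothetical matcher: checks the face against preset monitored users."""
    return face_image in enrolled_faces


def extract_monitored_faces(image, enrolled_faces):
    # Detect all faces, crop each region, keep only preset monitored users.
    faces = [crop(image, box) for box in detect_faces(image)]
    return [face for face in faces if is_monitored_user(face, enrolled_faces)]
```

In a real deployment the enrolled-faces check would compare face embeddings rather than raw crops; the filtering structure stays the same.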
Step 202, detecting the facial image of the monitored user with the trained health state detection model to obtain the health state detection result of the monitored user.

The facial image of the monitored user acquired in step 201 may be input into the trained health state detection model, which predicts the physical health state of the corresponding user from the input facial image. Specifically, the model extracts features characterizing physical health from the facial image and performs classification and recognition on those features to obtain the monitored user's health state detection result.
The health state detection result may be a preset health state grade or category, such as good, normal, or bad. It may also be expressed through facial expression categories; for example, the expression categories "painful" and "relaxed" may correspond to the detection results "needs help" and "good physical condition", respectively. During training of the health state detection model, either the health state grade/category or the expression category may serve as the output. An initial health state detection model is built on a machine learning model, and its parameters are continuously adjusted during training so that the detection results it produces approach the user's real health state.
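The expression-category correspondence in the example above can be sketched as a simple lookup. The two category names come from the text; the "unknown" fallback is an added assumption:

```python
# Mapping from facial-expression category to health-state detection result,
# following the example given in the text.
EXPRESSION_TO_STATUS = {
    "painful": "needs help",
    "relaxed": "good physical condition",
}


def status_from_expression(expression_category):
    # Fall back to "unknown" for categories outside the preset mapping
    # (an assumption; the patent does not specify this case).
    return EXPRESSION_TO_STATUS.get(expression_category, "unknown")
```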
In some optional implementation manners of this embodiment, the step 202 of detecting the facial image of the monitored user by using the trained health status detection model to obtain the health status detection result of the monitored user may include: inputting the facial image of the monitored user into a feature extraction network in a trained health state detection model to extract facial expression features of the monitored user; and classifying the facial expression features by using the recognition network in the trained health state detection model to obtain a health state detection result.
Specifically, the health state detection model may be built on a neural network and comprise a feature extraction network and a recognition network. The feature extraction network extracts expression features from the facial image; these may include distances between facial feature points, the colors of facial feature points, the sizes of facial parts, and so on. The recognition network classifies the input expression features, computes the probability that they belong to each preset health state category, and selects the category with the highest probability as the health state detection result.
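The recognition stage described above (per-category probabilities, then the most probable category) can be sketched as a softmax over per-category scores. The linear scoring below is only a stand-in for the real recognition network, and the weights, features, and category names are illustrative:

```python
import math


def softmax(scores):
    # Standard numerically stable softmax over raw category scores.
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def classify(expression_features, weights, categories):
    """Return (category, probability) for the most probable health state.

    One linear score per category stands in for the recognition network;
    a real model would produce these scores from learned layers.
    """
    scores = [sum(w * x for w, x in zip(ws, expression_features))
              for ws in weights]
    probs = softmax(scores)
    best = max(range(len(categories)), key=lambda i: probs[i])
    return categories[best], probs[best]
```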
Step 203, executing the preset monitoring operation corresponding to the health state detection result of the monitored user.

After the health state detection result of the monitored user is obtained, the monitoring operation corresponding to the monitored user's health state may be looked up in a preset monitoring operation list according to the detection result, and then executed.
In this embodiment, the execution subject may store a preset monitoring operation list containing the correspondence between preset monitoring operations and preset health states.
As an example, the monitoring operation corresponding to the "bad" health status in the list may include sending a medical prompt message to an electronic device (such as a mobile phone) of the monitored user or of the user's guardian; the operation corresponding to the "needs help" status may include sending a rescue request to an emergency center, or sending an instruction to a calling device to place a call.
In some optional implementations of this embodiment, the execution subject may also apply a trained monitoring operation determination model to the health state detection result obtained in step 202. This model may be a machine learning model, for example one based on a neural network, trained on manually labeled correspondences between sample health state labels and preset monitoring operations. During training, the model's parameters are adjusted iteratively so that its decision for each sample health state label converges to the preset monitoring operation annotated for that label. The monitoring operation corresponding to a health state detection result can then be determined by the trained model, improving the accuracy and pertinence of monitoring decisions.
Referring back to fig. 1, in an exemplary application scenario of the above embodiment, the image acquisition device 101 captures a facial image of a patient in a ward and sends it to the server 105 through the network 104. The server 105 performs health state detection on the patient's facial image, specifically by processing it with the trained health state detection model to obtain the patient's health state detection result. When the patient's state is detected as "needs help", the server 105 may send a call instruction to the caller 102 in the ward, and the caller 102 places a rescue call upon receiving it.
According to the human health monitoring method of this embodiment, a facial image of a monitored user is acquired, the trained health state detection model detects that image to obtain a health state detection result, and a preset monitoring operation corresponding to that result is executed. This achieves contact-free detection of human health states, allows monitoring action to be taken quickly based on the detection result, and broadens the application range of health monitoring.
With continued reference to fig. 3, a flow chart of another embodiment of a method of human health monitoring according to the present application is shown. As shown in fig. 3, the process 300 of the human health monitoring method of the present embodiment may include the following steps:
Step 301, acquiring a facial image of a monitored user.

In this embodiment, the execution subject of the human health monitoring method (for example, the server shown in fig. 1) may be connected to the image capturing device and acquire the facial image of the monitored user captured by that device. The monitored user may be a preset user in the scene, such as a patient in a hospital ward, an elderly resident of a nursing home, a family member in a house, or a staff member in an office building. Alternatively, instead of presetting monitored users, all people within the imaging range of the image capturing device may be treated as monitored users.
In practice, facial images captured by cameras in scenes such as homes, hospitals, and nursing homes may be acquired; these images may contain the faces of one or more monitored users.
Step 302, detecting the facial image of the monitored user with the trained health state detection model to obtain the health state detection result of the monitored user.
The facial image of the monitored user acquired in step 301 may be input into the trained health state detection model, which extracts features characterizing physical health from the facial image and performs classification and recognition on those features to obtain the monitored user's health state detection result.
In some optional implementation manners of this embodiment, the step 302 of detecting a facial image of a monitored user by using a trained health status detection model to obtain a health status detection result of the monitored user may include: inputting the facial image of the monitored user into a feature extraction network in a trained health state detection model to extract facial expression features of the monitored user; and classifying the facial expression features by using the recognition network in the trained health state detection model to obtain a health state detection result.
Step 301 and step 302 of this embodiment are respectively the same as step 201 and step 202 of the foregoing embodiment, and specific implementation manners of step 301 and step 302 may refer to descriptions of step 201 and step 202 in the foregoing embodiment, which are not described herein again.
Step 303, determining a rescue demand index of the monitored user according to the health state detection result.

The rescue demand index may reflect the urgency with which rescue is required. For example, a sudden heart attack, head trauma, asthma, or convulsions require immediate rescue and carry a high rescue demand index, whereas symptoms such as coughing or sneezing can tolerate delayed attention and carry a low index.
Here, rescue demand indexes corresponding to different health states may be preset, so that once the monitored user's health state has been determined in step 302, the corresponding rescue demand index can be determined. In some optional implementations, the health state detection result is expressed as a health state grade, and that grade is fed into a preset rescue demand index calculation formula to obtain the monitored user's rescue demand index.
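The patent does not give a concrete rescue demand index formula, so the sketch below assumes a simple linear mapping from a health state grade (0 = worst, N-1 = best) to an index in [0, 1], purely for illustration:

```python
def rescue_demand_index(health_grade, num_grades=5):
    """Hedged stand-in for the preset calculation formula.

    Assumption: grade 0 (worst state) maps to index 1.0 (most urgent),
    and the best grade maps to 0.0; intermediate grades fall linearly
    in between. A real deployment would use a clinically chosen formula.
    """
    worst = num_grades - 1
    return (worst - health_grade) / worst
```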
In other optional implementations, a trained rescue demand model may be used to determine the monitored user's rescue demand index. Such a model predicts the rescue demand index for each health state. It may be trained on rescue demand indexes annotated for health state labels by professional medical personnel, with its parameters adjusted iteratively during training to continuously optimize the model.
Step 304, determining and executing a target monitoring operation corresponding to the rescue demand index of the monitored user.

In this embodiment, the correspondence between preset monitoring operations and preset rescue demand indexes may be configured in advance, for example stored as a list or as key-value pairs. The preset monitoring operation matching the monitored user's rescue demand index can then be looked up in the stored correspondence as the target monitoring operation and executed.
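One way to realize the stored correspondence is as threshold bands over the index, kept key-value style. The thresholds and operation names below are illustrative assumptions, not values from the patent:

```python
import bisect

# Band edges over the rescue demand index, and the operation for each band:
#   index < 0.3            -> push health info to the user's phone
#   0.3 <= index < 0.7     -> notify a caregiver's device
#   index >= 0.7           -> dial the emergency center
THRESHOLDS = [0.3, 0.7]
OPERATIONS = ["push_health_info",
              "notify_caregiver_device",
              "dial_emergency_center"]


def target_operation(index):
    # bisect_right finds which band the index falls into in O(log n).
    return OPERATIONS[bisect.bisect_right(THRESHOLDS, index)]
```

A plain dict keyed on discrete index values would work equally well when the index takes only a few preset values; the banded form handles a continuous index.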
Illustratively, the preset monitoring operations may include, but are not limited to: pushing health status information to an associated electronic device (e.g., a cell phone) of a monitored user, sending a rescue request (e.g., dialing a rescue call) to a rescue center, sending an operation instruction to a monitoring device (e.g., a pager connected to a central control platform of a corresponding department of a hospital), sending a help call and playing the help call through the connected electronic device, and so on.
In some optional implementations of this embodiment, the corresponding relation between the preset monitoring operations and the preset rescue demand indexes may also take the form of a monitoring decision model. The monitoring decision model may be trained based on a machine learning approach. After the rescue demand index is determined, it can be input into the monitoring decision model to obtain a corresponding monitoring operation as the decision result, which serves as the target monitoring operation.
As can be seen from fig. 3, the human health monitoring method of this embodiment can determine not only the physical health state of the user but also the rescue demand index of the user, and perform a corresponding monitoring operation according to that index. It can thus actively and promptly detect users in need of rescue and trigger a corresponding rescue operation, without requiring any contact-type physiological parameter acquisition device.
With continued reference to fig. 4, a flow chart of yet another embodiment of a method of human health monitoring according to the present application is shown. As shown in fig. 4, the process 400 of the human health monitoring method of the present embodiment includes the following steps:
In this embodiment, a sample face image set may be obtained first, where the sample face image set may include a sample face image and health status labeling information of a user corresponding to the sample face image. Here, the labeling information of the health state of the user corresponding to the sample face image may be obtained through manual labeling, and the labeling information may be a label for representing the health state.
Specifically, the step 401 of training a health state detection model based on the sample face image set may include steps 4011 and 4012.
In step 4011, a health state detection model to be trained is constructed based on the neural network, and the sample face image set is input into the health state detection model to be trained, so as to obtain a health state prediction result of the sample face image set.
A health state detection model to be trained, comprising a plurality of neurons, may be constructed, and the sample face images in the sample face image set are then input into the health state detection model to be trained for health state detection. The health state detection model to be trained may be, for example, a convolutional neural network or a recurrent neural network, and the parameters of each layer in the neural network may be initialized as the initial parameters of the model. The health state of the user corresponding to each sample face image is then predicted using the health state detection model to be trained.
In step 4012, based on a preset loss function, iteratively adjusting parameters of the health status detection model to be trained so that a value of the loss function satisfies a preset convergence condition.
The value of the loss function is used for representing the degree to which the health state prediction result of the sample face image set deviates from the health state labeling information corresponding to the sample face image set.
The loss function can be constructed based on the difference between the health state prediction result produced by the health state detection model to be trained for the user corresponding to a sample face image and the health state labeling information of that user. The greater the degree to which the prediction result deviates from the labeling information, the larger the value of the loss function.
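A concrete loss with this behavior can be sketched with cross-entropy over predicted class probabilities; this particular choice is an assumption for illustration, since the text does not fix a specific loss function.

```python
import math

def cross_entropy(predicted_probs, labels):
    """Mean negative log-likelihood of the labeled class: the further
    the predicted probabilities deviate from the labels, the larger
    the value, matching the behavior described above."""
    eps = 1e-12  # guard against log(0)
    return -sum(
        math.log(max(probs[y], eps))
        for probs, y in zip(predicted_probs, labels)
    ) / len(labels)
```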
After the health state prediction result of the current health state detection model to be trained on the sample face image set is obtained in step 4011, the current value of the loss function can be calculated, and it is then judged whether that value satisfies the preset convergence condition. If not, a back propagation algorithm can be used to propagate the prediction error back through the health state detection model to be trained, so that its parameters are adjusted based on the loss function. After the parameters are adjusted, the health state of the sample face image set is detected again with the updated model and the value of the loss function is recalculated; if it still does not satisfy the preset convergence condition, back propagation and parameter adjustment continue. In this way, the parameters of the health state detection model to be trained are adjusted over multiple iterations, so that its detection result for the health state corresponding to each sample face image approaches the corresponding labeling information. When the value of the loss function satisfies the preset convergence condition, parameter adjustment can be stopped, yielding the trained health state detection model. The preset convergence condition may be that the value of the loss function is smaller than a preset threshold, or that the loss function has been updated a preset number of times, i.e., the number of iterative parameter adjustments reaches a preset count.
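The iterative procedure of steps 4011 and 4012 can be sketched on a toy model; the one-parameter linear model, squared-error loss, and learning rate below are illustrative stand-ins for the face-image network and loss function described above.

```python
def train(samples, labels, lr=0.5, threshold=0.1, max_iters=5000):
    """Toy gradient-descent loop mirroring steps 4011-4012: forward
    pass, loss evaluation, convergence check, and back-propagated
    parameter adjustment, repeated until the preset condition holds."""
    w, b = 0.0, 0.0                      # initial parameters (step 4011)
    loss = float("inf")
    for _ in range(max_iters):           # cap on iteration count
        loss, grad_w, grad_b = 0.0, 0.0, 0.0
        for x, y in zip(samples, labels):
            err = (w * x + b) - y        # prediction error
            loss += err * err
            grad_w += 2 * err * x        # gradients propagated back
            grad_b += 2 * err
        loss /= len(samples)
        if loss < threshold:             # preset convergence condition
            break
        w -= lr * grad_w / len(samples)  # iterative parameter adjustment
        b -= lr * grad_b / len(samples)
    return w, b, loss
```

The second convergence condition mentioned above corresponds to exhausting `max_iters` without the loss dropping below `threshold`.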
In this embodiment, an execution body of the human health monitoring method (for example, the server shown in fig. 1) may be connected to an image acquisition device and acquire a face image of the monitored user captured by that device. The monitored user may be a user preset for the scene, such as a patient in a hospital ward, an elderly user in a nursing home, a family member in a house, or a member of staff in an office building. Alternatively, instead of being preset, all persons within the imaging range of the image acquisition device may be taken as monitored users.
In practice, facial images acquired by cameras in scenes such as homes, hospitals, and nursing homes can be obtained, and these images may contain the faces of one or more monitored users.
And 403, detecting the facial image of the monitored user by using the trained health state detection model to obtain a health state detection result of the monitored user.
The facial image of the monitored user obtained in step 402 may be input into the health status detection model trained in step 401 for health status detection.
In some optional implementations of this embodiment, the step 403 of detecting the facial image of the monitored user by using the trained health state detection model to obtain the health state detection result of the monitored user may include: inputting the facial image of the monitored user into a feature extraction network in the trained health state detection model to extract facial expression features of the monitored user; and classifying the facial expression features by using the recognition network in the trained health state detection model to obtain the health state detection result.
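The two-stage detection described above (a feature extraction network followed by a recognition network) can be sketched with stand-in stages; the pixel-statistic features, threshold classifier, and function names below are illustrative assumptions, where a real model would use the trained networks.

```python
def extract_expression_features(face_image):
    """Stand-in for the feature extraction network: summarize the
    image (a list of pixel rows) as simple statistics."""
    flat = [p for row in face_image for p in row]
    mean = sum(flat) / len(flat)
    var = sum((p - mean) ** 2 for p in flat) / len(flat)
    return (mean, var)

def classify_health_state(features, threshold=0.5):
    """Stand-in for the recognition network: threshold one feature
    into a health state label."""
    mean, _ = features
    return "abnormal" if mean > threshold else "normal"

def detect(face_image):
    """Chain the two stages, as in step 403."""
    return classify_health_state(extract_expression_features(face_image))
```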
In step 404, a preset monitoring operation corresponding to the health status detection result of the monitored user is performed.
In this embodiment, by adding step 401 of training a health state detection model based on a sample face image set, a health state detection model can be obtained quickly in a supervised learning manner. Since the health states of the users corresponding to the sample face images are labeled, the health state categories predicted by the trained health state detection model are those contained in the labeling information, so the health state of the user can be classified more accurately.
In some optional implementations of the embodiments described above with reference to fig. 2, fig. 3, and fig. 4, the flow of the human health monitoring method may further include: generating a monitoring record of the monitored user based on the health state detection result of the monitored user and the executed preset monitoring operation. The health state detection result and the monitoring operation performed by the execution body in response to it can both be recorded. The physical condition of the monitored user can thus be recorded continuously and completely, and the monitoring record can serve as a reference for subsequent treatment or analysis, supporting accurate diagnosis.
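A minimal sketch of such a monitoring record, combining the detection result with the executed operation; the field names are assumptions for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MonitoringRecord:
    """One entry in the continuous record of a monitored user."""
    user_id: str
    health_state: str   # health state detection result
    operation: str      # preset monitoring operation that was executed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
```

Appending such records over time yields the continuous, complete account of the user's condition described above.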
With further reference to fig. 5, as an implementation of the methods shown in the above-mentioned figures, the present application provides an embodiment of a human health monitoring apparatus, which corresponds to the embodiments of the methods shown in fig. 2, fig. 3, and fig. 4, and which can be applied to various electronic devices.
As shown in fig. 5, the human health monitoring apparatus 500 of this embodiment includes: an acquisition unit 501, a detection unit 502, and a monitoring unit 503. The acquisition unit 501 is configured to acquire a face image of a monitored user; the detection unit 502 is configured to detect the facial image of the monitored user by using the trained health state detection model to obtain a health state detection result of the monitored user; and the monitoring unit 503 is configured to perform a preset monitoring operation corresponding to the health state detection result of the monitored user.
In some embodiments, the detecting unit 502 may be further configured to detect the facial image of the monitored user by using the trained health status detection model as follows, and obtain the health status detection result of the monitored user: inputting the facial image of the monitored user into a feature extraction network in a trained health state detection model to extract facial expression features of the monitored user; and classifying the facial expression features by using the recognition network in the trained health state detection model to obtain a health state detection result.
In some embodiments, the monitoring unit 503 may be further configured to perform the preset monitoring operation corresponding to the health state detection result of the monitored user as follows: determining a rescue demand index of the monitored user according to the health state detection result of the monitored user; and determining a target monitoring operation corresponding to the rescue demand index of the monitored user according to a pre-configured corresponding relation between preset monitoring operations and preset rescue demand indexes, and executing the target monitoring operation.
In some embodiments, the apparatus 500 may further include: a training unit configured to obtain the trained health state detection model by training based on a sample face image set, where the sample face image set includes sample face images and health state labeling information of the corresponding users. The training unit is configured to obtain the trained health state detection model as follows: constructing a health state detection model to be trained based on a neural network, and inputting the sample face image set into the health state detection model to be trained to obtain a health state prediction result of the sample face image set; and iteratively adjusting parameters of the health state detection model to be trained based on a preset loss function so that the value of the loss function satisfies a preset convergence condition, where the value of the loss function is used for representing the degree to which the health state prediction result of the sample face image set deviates from the health state labeling information corresponding to the sample face image set.
In some embodiments, the apparatus 500 may further include: and the recording unit is configured to generate a monitoring record of the monitored user based on the health state detection result of the monitored user and the executed preset monitoring operation.
It should be understood that the elements recited in apparatus 500 correspond to various steps in the methods described with reference to fig. 2, 3, and 4. Thus, the operations and features described above for the method are equally applicable to the apparatus 500 and the units included therein, and are not described in detail here.
According to the human health monitoring apparatus 500 of the embodiment of the application, the face image of the monitored user is input into the trained health state detection model to obtain the health state detection result of the monitored user, and the corresponding preset monitoring operation is executed based on that result. Non-contact human health state detection is thereby realized, a monitoring operation can be performed quickly according to the detection result, and the application range of health monitoring is expanded.
Referring now to FIG. 6, shown is a block diagram of a computer system 600 suitable for use in implementing the electronic device of an embodiment of the present application. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. The driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 610 as necessary, so that a computer program read out therefrom is mounted in the storage section 608 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable medium of the present application can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor includes an acquisition unit, a detection unit, and a monitoring unit. In some cases, the names of these units do not limit the units themselves; for example, the acquisition unit may also be described as a "unit for acquiring a face image of a monitored user".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquiring a face image of a monitored user; detecting the facial image of the monitored user by using the trained health state detection model to obtain a health state detection result of the monitored user; and executing preset monitoring operation corresponding to the health state detection result of the monitored user.
The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.
Claims (12)
1. A human health monitoring method comprises the following steps:
acquiring a face image of a monitored user;
detecting the facial image of the monitored user by using the trained health state detection model to obtain a health state detection result of the monitored user;
and executing preset monitoring operation corresponding to the health state detection result of the monitored user.
2. The method of claim 1, wherein the detecting the facial image of the monitored user by using the trained health status detection model to obtain the health status detection result of the monitored user comprises:
inputting the facial image of the monitored user into a feature extraction network in a trained health state detection model to extract facial expression features of the monitored user;
and classifying the facial expression features by using the recognition network in the trained health state detection model to obtain a health state detection result.
3. The method according to claim 1, wherein the performing of the preset monitoring operation corresponding to the health state detection result of the monitored user comprises:
determining a rescue demand index of the monitored user according to the health state detection result of the monitored user;
and determining a target monitoring operation corresponding to the rescue demand index of the monitored user according to a preset corresponding relation between the preset monitoring operation and a preset rescue demand index, and executing the target monitoring operation.
4. The method according to any one of claims 1-3, wherein the method further comprises:
training based on a sample face image set to obtain the trained health state detection model, wherein the sample face image set comprises sample face images and corresponding health state labeling information of a user; and
the training based on the sample face image set to obtain the trained health state detection model comprises:
building a health state detection model to be trained based on a neural network, inputting the sample face image set into the health state detection model to be trained, and obtaining a health state prediction result of the sample face image set;
iteratively adjusting parameters of the health state detection model to be trained based on a preset loss function so that a value of the loss function meets a preset convergence condition, wherein the value of the loss function is used for representing a degree to which a health state prediction result of the sample face image set deviates from health state labeling information corresponding to the sample face image set.
5. The method according to any one of claims 1-3, wherein the method further comprises:
and generating a monitoring record of the monitored user based on the health state detection result of the monitored user and the executed preset monitoring operation.
6. A human health monitoring device, comprising:
an acquisition unit configured to acquire a face image of a monitored user;
the detection unit is configured to detect the facial image of the monitored user by using the trained health state detection model to obtain a health state detection result of the monitored user;
a monitoring unit configured to perform a preset monitoring operation corresponding to a health status detection result of the monitored user.
7. The apparatus according to claim 6, wherein the detecting unit is further configured to detect the facial image of the monitored user by using the trained health status detection model, and obtain the health status detection result of the monitored user as follows:
inputting the facial image of the monitored user into a feature extraction network in a trained health state detection model to extract facial expression features of the monitored user;
and classifying the facial expression features by using the recognition network in the trained health state detection model to obtain a health state detection result.
8. The apparatus of claim 6, wherein the monitoring unit is further configured to perform the preset monitoring operation corresponding to the health state detection result of the monitored user as follows:
determining a rescue demand index of the monitored user according to the health state detection result of the monitored user;
and determining a target monitoring operation corresponding to the rescue demand index of the monitored user according to a preset corresponding relation between the preset monitoring operation and a preset rescue demand index, and executing the target monitoring operation.
9. The apparatus of any of claims 6-8, wherein the apparatus further comprises:
a training unit configured to train to derive the trained health state detection model based on a sample face image set, where the sample face image set includes sample face images and corresponding health state labeling information of a user; and
the training unit is configured to train out the trained health state detection model based on a sample face image set as follows:
building a health state detection model to be trained based on a neural network, inputting the sample face image set into the health state detection model to be trained, and obtaining a health state prediction result of the sample face image set;
iteratively adjusting parameters of the health state detection model to be trained based on a preset loss function so that a value of the loss function meets a preset convergence condition, wherein the value of the loss function is used for representing a degree to which a health state prediction result of the sample face image set deviates from health state labeling information corresponding to the sample face image set.
10. The apparatus of any of claims 6-8, wherein the apparatus further comprises:
a recording unit configured to generate a monitoring record of the monitored user based on the health status detection result of the monitored user and the performed preset monitoring operation.
11. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811445238.XA CN111227789A (en) | 2018-11-29 | 2018-11-29 | Human health monitoring method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111227789A true CN111227789A (en) | 2020-06-05 |
Family
ID=70866478
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811445238.XA Pending CN111227789A (en) | 2018-11-29 | 2018-11-29 | Human health monitoring method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111227789A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111951954A (en) * | 2020-08-10 | 2020-11-17 | 中国平安人寿保险股份有限公司 | Body health state detection method and device, readable storage medium and terminal equipment |
CN112395979A (en) * | 2020-11-17 | 2021-02-23 | 平安科技(深圳)有限公司 | Image-based health state identification method, device, equipment and storage medium |
CN112418022A (en) * | 2020-11-10 | 2021-02-26 | 广州富港万嘉智能科技有限公司 | Human body data detection method and device |
CN112861788A (en) * | 2021-03-10 | 2021-05-28 | 中电健康云科技有限公司 | Method for judging health condition based on face color recognition technology |
CN116487050A (en) * | 2023-06-21 | 2023-07-25 | 深圳市万佳安智能科技有限公司 | Human health monitoring method, device and computer equipment |
CN112395979B (en) * | 2020-11-17 | 2024-05-10 | 平安科技(深圳)有限公司 | Image-based health state identification method, device, equipment and storage medium |
Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102166118A (en) * | 2011-04-08 | 2011-08-31 | 常州康新电子科技有限公司 | Remote health surveillance system |
CN102831412A (en) * | 2012-09-11 | 2012-12-19 | 魏骁勇 | Teaching attendance checking method and device based on face recognition |
CN103106327A (en) * | 2011-11-15 | 2013-05-15 | 马欣 | Remote real-time family health monitoring system |
CN103984919A (en) * | 2014-04-24 | 2014-08-13 | 上海优思通信科技有限公司 | Facial expression recognition method based on rough sets and mixed features |
CN104636580A (en) * | 2013-11-13 | 2015-05-20 | 广州华久信息科技有限公司 | Face-based health monitoring mobile phone |
CN105868561A (en) * | 2016-04-01 | 2016-08-17 | 乐视控股(北京)有限公司 | Health monitoring method and device |
CN106096598A (en) * | 2016-08-22 | 2016-11-09 | 深圳市联合视觉创新科技有限公司 | Method and device for recognizing facial expressions using a deep correlation neural network model |
CN106652341A (en) * | 2016-11-10 | 2017-05-10 | 深圳市元征软件开发有限公司 | Elderly monitoring and assistance method and device based on a body area network |
CN106709468A (en) * | 2016-12-31 | 2017-05-24 | 北京中科天云科技有限公司 | City region surveillance system and device |
CN106778506A (en) * | 2016-11-24 | 2017-05-31 | 重庆邮电大学 | Facial expression recognition method fusing depth images and multi-channel features |
CN106778657A (en) * | 2016-12-28 | 2017-05-31 | 南京邮电大学 | Neonatal pain expression classification method based on convolutional neural networks |
CN107180225A (en) * | 2017-04-19 | 2017-09-19 | 华南理工大学 | Recognition method for cartoon characters' facial expressions |
CN107491740A (en) * | 2017-07-28 | 2017-12-19 | 北京科技大学 | Neonatal pain recognition method based on facial expression analysis |
CN108062971A (en) * | 2017-12-08 | 2018-05-22 | 青岛海尔智能技术研发有限公司 | Refrigerator menu recommendation method, apparatus and computer-readable storage medium |
CN108268850A (en) * | 2018-01-24 | 2018-07-10 | 成都鼎智汇科技有限公司 | Image-based big data processing method |
CN108304826A (en) * | 2018-03-01 | 2018-07-20 | 河海大学 | Facial expression recognition method based on convolutional neural networks |
CN108491835A (en) * | 2018-06-12 | 2018-09-04 | 常州大学 | Dual-channel convolutional neural network for facial expression recognition |
CN108509905A (en) * | 2018-03-30 | 2018-09-07 | 百度在线网络技术(北京)有限公司 | Health state evaluation method, apparatus, electronic equipment and storage medium |
CN108511066A (en) * | 2018-03-29 | 2018-09-07 | 百度在线网络技术(北京)有限公司 | Information generating method and device |
CN108629945A (en) * | 2018-05-29 | 2018-10-09 | 深圳来邦科技有限公司 | Home-based elderly care monitoring system combining medical care and nursing |
- 2018-11-29 CN CN201811445238.XA patent/CN111227789A/en active Pending
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111951954A (en) * | 2020-08-10 | 2020-11-17 | 中国平安人寿保险股份有限公司 | Body health state detection method and device, readable storage medium and terminal equipment |
CN112418022A (en) * | 2020-11-10 | 2021-02-26 | 广州富港万嘉智能科技有限公司 | Human body data detection method and device |
CN112418022B (en) * | 2020-11-10 | 2024-04-09 | 广州富港生活智能科技有限公司 | Human body data detection method and device |
CN112395979A (en) * | 2020-11-17 | 2021-02-23 | 平安科技(深圳)有限公司 | Image-based health state identification method, device, equipment and storage medium |
CN112395979B (en) * | 2020-11-17 | 2024-05-10 | 平安科技(深圳)有限公司 | Image-based health state identification method, device, equipment and storage medium |
CN112861788A (en) * | 2021-03-10 | 2021-05-28 | 中电健康云科技有限公司 | Method for judging health condition based on face color recognition technology |
CN116487050A (en) * | 2023-06-21 | 2023-07-25 | 深圳市万佳安智能科技有限公司 | Human health monitoring method, device and computer equipment |
CN116487050B (en) * | 2023-06-21 | 2023-12-22 | 深圳市万佳安智能科技有限公司 | Human health monitoring method, device and computer equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Kim et al. | Emergency situation monitoring service using context motion tracking of chronic disease patients | |
KR102133943B1 (en) | Devices and methods for providing home health care for senior health | |
US20190216333A1 (en) | Thermal face image use for health estimation | |
CN111227789A (en) | Human health monitoring method and device | |
EP3693966B1 (en) | System and method for continuous privacy-preserved audio collection | |
CN109492595B (en) | Behavior prediction method and system suitable for fixed group | |
US20210327562A1 (en) | Artificial intelligence driven rapid testing system for infectious diseases | |
US20160217260A1 (en) | System, method and computer program product for patient triage | |
US11631306B2 (en) | Methods and system for monitoring an environment | |
Pazienza et al. | Adaptive critical care intervention in the internet of medical things | |
CN111241883A (en) | Method and device for preventing remote detected personnel from cheating | |
Pourhomayoun et al. | Multiple model analytics for adverse event prediction in remote health monitoring systems | |
US10417484B2 (en) | Method and system for determining an intent of a subject using behavioural pattern | |
Alvarez et al. | Multimodal monitoring of Parkinson's and Alzheimer's patients using the ICT4LIFE platform | |
Mocanu et al. | AmIHomCare: A complex ambient intelligent system for home medical assistance | |
CN113990500A (en) | Vital sign parameter monitoring method and device and storage medium | |
CN113569671A (en) | Abnormal behavior alarm method and device | |
US11688264B2 (en) | System and method for patient movement detection and fall monitoring | |
Damre et al. | Smart Healthcare Wearable Device for Early Disease Detection Using Machine Learning | |
Elgendy et al. | Fog-based remote in-home health monitoring framework | |
US20220391760A1 (en) | Combining model outputs into a combined model output | |
Sundharamurthy et al. | Cloud‐based onboard prediction and diagnosis of diabetic retinopathy | |
KR102645192B1 (en) | Electronic device for managing bedsores based on artificial intelligence model and operating method thereof | |
US11922696B2 (en) | Machine learning based dignity preserving transformation of videos for remote monitoring | |
Mishra et al. | CURA: Real Time Artificial Intelligence and IoT based Fall Detection Systems for patients suffering from Dementia |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200605 |