CN115245318A - Automatic identification method of effective IPPG signal based on deep learning - Google Patents


Info

Publication number: CN115245318A
Application number: CN202210070616.0A
Authority: CN (China)
Legal status: Pending
Original language: Chinese (zh)
Prior art keywords: ippg, signal, effective, ippg signal, signals
Inventors: 熊继平, 陈泽辉, 陈经纬, 程汉权, 李金红
Original and current assignee: Zhejiang Normal University (CJNU)

Classifications

    • A61B5/02108: Measuring pressure in heart or blood vessels from analysis of pulse wave characteristics
    • A61B5/0062: Arrangements for scanning (diagnosis using light)
    • A61B5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B5/02416: Detecting, measuring or recording pulse rate or heart rate using photoplethysmograph signals
    • A61B5/02427: Details of sensor
    • A61B5/14551: Optical sensors, e.g. spectral photometric oximeters, for measuring blood gases
    • A61B5/14552: Details of sensors specially adapted therefor
    • A61B5/7203: Signal processing for noise prevention, reduction or removal
    • A61B5/7225: Details of analog processing, e.g. filtering, baseline or drift compensation
    • A61B5/725: Waveform analysis using specific filters, e.g. Kalman or adaptive filters
    • A61B5/7264: Classification of physiological signals or data, e.g. using neural networks
    • A61B5/7267: Classification involving training the classification device


Abstract

The invention discloses an automatic identification method for effective IPPG signals based on deep learning, comprising the following steps: S1, collecting face video sample information; S2, performing face positioning, selecting a region of interest, and extracting an IPPG signal from the region of interest; S3, preprocessing the extracted IPPG signal; S4, selecting effective and typical erroneous IPPG signals from the preprocessed IPPG signals to build a data set, and training a convolutional neural network on the data set to obtain a CNN-based effective IPPG signal recognition model; S5, acquiring video of the face to be detected; S6, processing the acquired face video to obtain a preprocessed IPPG signal, and inputting it into the CNN-based effective IPPG signal recognition model to complete the validity judgment of the IPPG signal. Compared with existing identification methods, the method uses a convolutional neural network to judge IPPG signals, can automatically identify and extract the effective IPPG signals, and achieves higher accuracy and universality.

Description

Automatic identification method of effective IPPG signal based on deep learning
Technical Field
The invention relates to the technical field of computer vision, in particular to an automatic identification method of an effective IPPG signal based on deep learning.
Background
In recent years, the advent of photoplethysmography (PPG) has opened a new direction for blood pressure measurement: pulse waves reflect much information about a subject's cardiovascular function, and pulse formation is theoretically closely related to blood pressure. Because of the demand for low-cost, simple, and portable technology and the wide availability of small semiconductor elements, PPG is widely used in vital-signs monitoring. Photoplethysmography may be defined as a technique for measuring the change in blood volume in various parts of the body. Each time the myocardium contracts, blood is expelled from the ventricle and pressure pulses propagate through the circulatory system; photoplethysmography provides a descriptive analysis of blood flow through superficial arteries and can therefore be used to infer and estimate related vital signs. Transmissive PPG measures changes in transmitted light caused by blood-volume changes; in reflective PPG, the probe is placed on the same side as the light source and the pulse is measured from the collected reflected light. PPG signals are typically monitored using laser Doppler devices and radar vital-sign monitors (RVSM). However, these all require the volunteer to remain still during operation, and the PPG signal is acquired and recorded in contact with the human body by means of a light emitter and a light receiver; the accuracy depends strongly on the distance between the emitter and the receiver, and the position of the device on the body is also critical. For these reasons, using PPG signals in long-term medical monitoring is significantly inconvenient.
In view of this, researchers have in recent years proposed a camera-based non-contact blood pressure (NCBP) measurement method called imaging photoplethysmography (IPPG). An IPPG signal is a signal extracted from camera images by processing algorithms. Because the IPPG signal is based on changes in facial skin color, it can reveal regional blood-volume changes. However, owing to interference factors such as illumination changes and head movement, IPPG signals acquired from a camera contain a large amount of noise and many invalid segments, so selecting the effective IPPG signals is an essential step for subsequent applications.
Most existing methods for selecting effective IPPG signals rely on manually engineered IPPG features. Sunke proposed a non-contact blood pressure measurement method based on IPPG (Non-contact blood pressure measurement method based on IPPG, Information Technology, No. 10, 2020, pp. 31-38). This method locates the extreme values of the waveform by comparing the values before and after the i-th sample, but it is not suited to IPPG signals with multiple extrema and cannot guarantee that the correct extremum is located when selecting a waveform. In addition, the traditional PPG signal identification method mainly slides a window over the PPG signal, first detecting the dominant peak, then detecting the distance between two adjacent troughs and between two consecutive trough-to-trough intervals, then computing the kurtosis and skewness of each PPG segment, and finally making a rough judgment on the validity of each single waveform.
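The sliding-window screening described above can be sketched as follows; this is a minimal illustration of the kurtosis/skewness check only, and the thresholds are hypothetical rather than taken from the cited method:

```python
import numpy as np

def window_stats(signal_window):
    """Return (skewness, excess kurtosis) of one PPG window."""
    x = np.asarray(signal_window, dtype=float)
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma
    skew = np.mean(z ** 3)
    kurt = np.mean(z ** 4) - 3.0          # excess kurtosis
    return skew, kurt

def looks_valid(signal_window, skew_max=1.0, kurt_max=2.0):
    """Crude validity test: a clean pulse wave is roughly periodic,
    so its amplitude distribution is not strongly skewed or heavy-tailed.
    The thresholds are illustrative, not from the patent."""
    skew, kurt = window_stats(signal_window)
    return abs(skew) < skew_max and abs(kurt) < kurt_max

# A noise-free sinusoid (stand-in for a clean pulse wave) passes,
# while a spiky, outlier-laden window fails.
t = np.linspace(0, 4 * np.pi, 250)
clean = np.sin(t)
spiky = np.r_[np.zeros(240), 10 * np.ones(10)]
```

Such moment-based heuristics are cheap but depend on hand-tuned thresholds, which is precisely the limitation the CNN-based approach below is meant to avoid.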
Disclosure of Invention
The present invention aims to provide a method for automatically identifying effective IPPG signals based on deep learning, so as to solve the problems mentioned in the background.
In order to achieve the purpose, the invention provides the following technical scheme:
a method for automatically identifying effective IPPG signals based on deep learning comprises the following steps:
s1, collecting face video sample information;
s2, positioning the collected face video sample information, acquiring an interested area of each frame of picture of the face video, and extracting an IPPG signal from the interested area;
s3, preprocessing the extracted IPPG signal;
s4, selecting effective and typical error IPPG signals from the preprocessed IPPG signals to prepare a data set, and training the data set by adopting a convolutional neural network to obtain an effective IPPG signal recognition model based on the CNN;
s5, acquiring video information of the face to be detected;
and S6, after the acquired face video information is processed in the steps S2 and S3 in sequence, inputting the face video information into the CNN-based effective IPPG signal identification model obtained in the step S4 to finish the validity judgment of the IPPG signal.
Preferably, in step S2, the positioning of the collected face video sample information and the selection of the region of interest of each frame of image of the face video sample specifically include the following steps:
s21, acquiring image frames from the collected face video samples through a face detection module, verifying that each image frame contains exactly one face, locating the key points of the face to obtain the coordinates of the facial features, and completing face positioning;
and S22, selecting an interested area with strong light intensity distribution from the human face positioned by each frame of image by adopting an imaging type photoplethysmography.
Preferably, in step S22, the region of interest is a cheek region.
Preferably, in step S3, the extracted IPPG signal is preprocessed using a wavelet transform algorithm and a band-pass filter to remove baseline drift and noise.
Preferably, the preprocessing the extracted IPPG signal by using a wavelet transform algorithm and a band-pass filter specifically includes the following steps:
s31, performing a six-level wavelet decomposition of the extracted IPPG signal;
s32, reconstructing the IPPG signal after wavelet transformation to remove baseline drift;
and S33, filtering the reconstructed IPPG signal by adopting a band-pass filter to remove noise.
Preferably, in step S4, effective IPPG signals and typical erroneous IPPG signals are selected from the preprocessed IPPG signals to make a data set, which is trained with a deep-learning convolutional neural network (CNN); the specific steps are as follows: selecting effective IPPG signals and typical erroneous IPPG signals by manual inspection, making the selected signals and their corresponding labels into a data set, and inputting the data set into a convolutional neural network (CNN) model for training to obtain a CNN-based effective IPPG signal recognition model.
Preferably, in step S6, the obtained face video information is sequentially processed in steps S2 and S3, and then input to the CNN-based valid IPPG signal recognition model obtained in step S4 to complete validity determination of the IPPG signal, and the specific steps are as follows: and processing the acquired face video information in the steps S2 and S3 in sequence to obtain a preprocessed IPPG signal, inputting the preprocessed IPPG signal into a CNN-based effective IPPG signal identification model, and automatically identifying and storing the effective IPPG signal by the effective IPPG signal identification model.
Compared with the prior art, the invention has the following beneficial effects:
the invention relates to an effective IPPG signal automatic identification method based on deep learning, which comprises the steps of firstly collecting face video sample information and establishing an effective IPPG signal identification model based on CNN; when pulse wave signals need to be detected, face recognition is carried out on the obtained face video information to be detected, key points are located, an interested region is selected through the key points, an imaging type photoplethysmography method is adopted to extract IPPG signals from the interested region, then wavelet transformation and band-pass filtering pretreatment are carried out on the extracted IPPG signals, and then the pretreated IPPG signals are input into a CNN-based effective IPPG signal recognition model to automatically recognize effective IPPG signals and store the effective IPPG signals for subsequent application, for example.
(1) Compared with the traditional PPG signal identification method, the method uses a convolutional neural network (CNN) to extract features from the pulse-wave signals, ensures that the rich information in the pulse-wave signals is not lost, and achieves a higher accuracy rate in identifying effective IPPG signals.
(2) Manual pulse-wave feature extraction depends on locating waveform feature points and is easily affected by factors such as vascular disease and aging, so it does not generalize across different pulse-wave shapes. In contrast, the present method extracts features from the pulse-wave signal with a convolutional neural network (CNN): the one-dimensional convolution operation in a convolutional layer slides a convolution kernel over the pulse waveform sequence and abstracts the information contained in the waveform into a feature map, so the method no longer depends on locating waveform feature points and generalizes to pulse waveforms of different shapes.
In conclusion, the invention judges IPPG signals with a convolutional neural network, can automatically extract and identify the effective IPPG signals, and has higher accuracy and universality.
Drawings
Fig. 1 is a flowchart of an automatic identification method of an effective IPPG signal based on deep learning according to the present invention;
FIG. 2 is a flowchart of establishing a CNN-based effective IPPG signal recognition model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a region of interest of a human face selected according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a standard PPG signal, where A is called the dominant wave, B is called the tidal wave, C is called the dicrotic peak, and D is called the dicrotic trough;
FIG. 5 is a schematic diagram of an effective IPPG signal selected according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating an exemplary selected erroneous IPPG signal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the embodiments and the accompanying drawings; obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them.
The following describes a method for automatically identifying an effective IPPG signal based on deep learning according to an embodiment of the present invention with reference to fig. 1 to 6. As shown in fig. 1 and 2, a method for automatically identifying a valid IPPG signal based on deep learning includes the following steps:
s1, collecting face video sample information;
specifically, in this embodiment, a network camera with a resolution of 1920X1080 and a frame number of 60 frames is used to record facial videos of multiple volunteers;
s2, positioning the collected face video samples, selecting an interested area of each frame of picture of the face video, and extracting an IPPG signal from the interested area;
specifically, each frame of picture in a face video clip is extracted from an acquired face video sample through a face detection module, then the acquired each frame of picture is input into a face recognition model, the face recognition module is adopted to detect the number of faces in each frame of picture, if the number of the faces is not 1, the detection is stopped, the video recording is restarted, if the number of the faces is 1, a plurality of key points of the faces, including parts such as chin, eyes, nose, mouth and the like, are positioned, five sense organ coordinates are obtained, and the key points are connected to draw the whole outline of the faces; the region of interest with strong light intensity distribution is selected from the contour of the positioned face, mainly the forehead and the cheek, but because the hair of a female often covers the forehead, the region of interest in the embodiment of the invention selects the cheek region, and then the IPPG signal is extracted from the cheek region by adopting an imaging type photoplethysmography and is stored.
Wherein the IPPG signal is obtained by averaging the pixel values of the region of interest in each frame according to the following formula:

IPPG(t) = (1 / (W × H)) · Σ_{x=1}^{W} Σ_{y=1}^{H} P(x, y, t)

where t is the frame index, W and H are the width and height of the region of interest, and P(x, y, t) is the pixel value at position (x, y) in frame t.
As shown in fig. 3, signals of the three channels red (R), green (G), and blue (B) can be extracted from the cheek region. Since the optical absorption of hemoglobin peaks at 500-600 nm, which corresponds to the green channel, the embodiment of the invention selects the extracted green-channel signal as the IPPG signal.
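The per-frame spatial averaging and green-channel selection can be sketched as follows (numpy only; the synthetic video is a stand-in for real ROI crops):

```python
import numpy as np

def ippg_from_frames(roi_frames):
    """roi_frames: array (T, H, W, 3) of RGB ROI crops.
    Returns the green-channel IPPG trace of length T: for each frame t,
    the mean of all W*H green pixel values (the spatial average)."""
    roi_frames = np.asarray(roi_frames, dtype=float)
    return roi_frames[..., 1].mean(axis=(1, 2))   # channel 1 = green

# Synthetic example: the green channel pulsates at 1.2 Hz over 250
# frames sampled at 60 fps.
T, H, W = 250, 64, 64
t = np.arange(T)
video = np.zeros((T, H, W, 3))
video[..., 1] = 100 + 5 * np.sin(2 * np.pi * 1.2 * t / 60)[:, None, None]
trace = ippg_from_frames(video)
```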
S3, preprocessing the extracted IPPG signal;
specifically, the extracted IPPG signal is preprocessed by wavelet transform and band-pass filtering: firstly, sym6 is selected as a wavelet base to carry out six-layer decomposition on the extracted IPPG signal, the sixth layer is taken as a baseline drift signal to be removed, the low-frequency component of the sixth layer of the wavelet signal is subtracted from the original IPPG signal to remove the baseline drift, and as the frequency range of the heart rate is 0.7-4Hz, a Butterworth band-pass filter (0.7-4 Hz) is adopted to eliminate noise, so that the smooth waveform is ensured.
S4, selecting an effective IPPG signal and a typical error IPPG signal from the preprocessed IPPG signals to prepare a data set, and then training the data set by adopting a deep learning Convolutional Neural Network (CNN) to obtain an effective IPPG signal identification model based on the CNN;
specifically, in this embodiment, a large number of valid IPPG signals (as shown in fig. 5) and typical erroneous IPPG signals (as shown in fig. 6) are selected from the preprocessed IPPG signals in a manual checking manner, and then the selected valid IPPG signals, the selected typical erroneous IPPG signals, and corresponding label-made data sets are input into a Convolutional Neural Network (CNN) model for training, so as to obtain a CNN-based valid IPPG signal identification model.
This embodiment adopts a convolutional neural network (CNN) model, which is inspired by the connectivity patterns between visual-cortex neurons and replaces general matrix multiplication with the convolution operation. Compared with a standard fully connected neural network with the same number of layers, a CNN has fewer connections and parameters, which aids training and further improves accuracy.
Specifically, in this embodiment, before the preprocessed IPPG signal is input into the CNN model, a sliding window is moved over the IPPG signal and each group of signals is taken as a 250-frame sequence. The CNN model mainly consists of a combination of several hidden convolutional layers and an average pooling layer, with binary cross-entropy as the loss function. The one-dimensional convolution operation in each convolutional layer slides a convolution kernel over the pulse waveform sequence and abstracts the information contained in the waveform into a feature map; the last convolutional layer outputs the extracted feature vector, the information of all features is combined into the final feature, and the output of the model is the basis for judging an effective IPPG signal. The binary cross-entropy function used for training the convolutional neural network (CNN) model is as follows:
L = -(1/N) Σ_{i=1}^{N} [ y_i · log p(y_i) + (1 - y_i) · log(1 - p(y_i)) ]

where y_i denotes the label of sample i and p(y_i) denotes the predicted probability that the sample carries that label.
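The binary cross-entropy loss above, computed directly in numpy as a worked check (the labels and predictions below are hypothetical):

```python
import numpy as np

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """Mean binary cross-entropy over N samples: y is the 0/1 validity
    label, p the model's predicted probability of 'valid'. Probabilities
    are clipped away from 0 and 1 to keep the logs finite."""
    y = np.asarray(y_true, dtype=float)
    p = np.clip(np.asarray(p_pred, dtype=float), eps, 1 - eps)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

# Confident correct predictions give a small loss; confident wrong
# ones a large loss.
low = binary_cross_entropy([1, 0, 1], [0.9, 0.1, 0.8])
high = binary_cross_entropy([1, 0, 1], [0.1, 0.9, 0.2])
```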
S5, acquiring video information of the face to be detected;
specifically, in the embodiment, the same network camera is used for recording the face video of the person to be detected, and the person to be detected is required not to wear an object which can shield the face; when the video is recorded, natural environment light is used as a light source, so that the face can receive illumination uniformly without obvious brightness difference; during video recording, the tested person should keep the body stable as much as possible, the head is prevented from shaking or shaking, the face of the tested person is kept over against the camera until the set acquisition time is reached, and the camera is acquired again if large shaking occurs.
And S6, sequentially processing the acquired face video information in the steps S2 and S3, and then inputting the processed face video information into the effective IPPG signal identification model based on the CNN obtained in the step S4 to finish the effectiveness judgment of the IPPG signal.
Specifically, in this embodiment, after the face video to be detected is obtained, the face is identified and key points are located by the face recognition module, a region of interest is selected, an IPPG signal is extracted from the region of interest by imaging photoplethysmography, and the extracted signal is preprocessed by wavelet transform and band-pass filtering. The preprocessed IPPG signal is then fed, via the sliding-window method, into the CNN-based effective IPPG signal recognition model, which judges whether each input IPPG segment is an effective waveform; if so, the waveform is retained, laying the groundwork for later applications such as non-contact blood pressure, heart rate, and blood oxygen measurement.
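The sliding-window segmentation that feeds the classifier can be sketched with numpy; the 250-frame window length follows the description, while the 50% overlap is an assumption not stated in the patent:

```python
import numpy as np

def segment_windows(sig, win=250, stride=125):
    """Split a preprocessed IPPG trace into fixed-length windows for
    the CNN classifier. win=250 follows the description; the 50%
    overlap (stride=125) is an assumption for illustration."""
    sig = np.asarray(sig, dtype=float)
    starts = range(0, len(sig) - win + 1, stride)
    return np.stack([sig[s:s + win] for s in starts])

# A 1000-sample trace yields 7 half-overlapping 250-frame windows,
# each of which would be classified as valid or invalid by the model.
windows = segment_windows(np.arange(1000.0))
```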
The above description covers only preferred embodiments of the present invention, but the scope of the invention is not limited thereto; any equivalent replacement or modification made by a person skilled in the art according to the technical solution and inventive concept of the present invention shall fall within the scope of the invention.

Claims (6)

1. A method for automatically identifying effective IPPG signals based on deep learning is characterized by comprising the following steps:
s1, collecting face video sample information;
s2, positioning the collected face video sample information, acquiring an interested area of each frame of picture of the face video, and extracting an IPPG signal from the interested area;
s3, preprocessing the extracted IPPG signal;
s4, selecting effective and typical error IPPG signals from the preprocessed IPPG signals to prepare a data set, and training the data set by adopting a convolutional neural network to obtain an effective IPPG signal recognition model based on the CNN;
s5, acquiring video information of the face to be detected;
and S6, after the acquired face video information is processed in the steps S2 and S3 in sequence, inputting the face video information into the CNN-based effective IPPG signal identification model obtained in the step S4 to finish the validity judgment of the IPPG signal.
2. The method of claim 1 for automatic identification of valid IPPG signal based on deep learning, wherein: in the step S2, the positioning of the collected face video sample information and the selection of the region of interest of each frame of image of the face video sample specifically include the following steps:
s21, acquiring image frames from the collected face video samples through a face detection module, verifying that each image frame contains exactly one face, then locating the key points of the face to obtain the coordinates of the facial features, and completing face positioning;
and S22, selecting an interested area with strong light intensity distribution from the human face positioned by each frame of image by adopting an imaging type photoplethysmography.
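A minimal sketch of step S22: cropping a forehead region from a face bounding box and taking the mean green-channel intensity as one IPPG sample per frame. The ROI fractions used here are illustrative assumptions; the claim only requires a region with a strong light-intensity distribution, and the face box itself would come from the face detection module of step S21:

```python
import numpy as np

def roi_from_face_box(x, y, w, h):
    """Return a forehead ROI (x, y, w, h) inside a face bounding box.

    The fractions (middle 60% of the box width, upper 25% of its height)
    are illustrative, not taken from the patent."""
    rx = x + int(0.2 * w)
    ry = y + int(0.05 * h)
    return rx, ry, int(0.6 * w), int(0.25 * h)

def ippg_sample(frame, roi):
    """Mean green-channel intensity over the ROI for one RGB frame."""
    rx, ry, rw, rh = roi
    patch = frame[ry:ry + rh, rx:rx + rw, 1]
    return float(patch.mean())
```

Collecting one such sample per frame yields the raw IPPG trace that step S3 then preprocesses.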
3. The method for automatically identifying effective IPPG signals based on deep learning according to claim 1 or 2, wherein in step S3, the extracted IPPG signal is preprocessed using a wavelet transform algorithm and a band-pass filter.
4. The method for automatically identifying effective IPPG signals based on deep learning according to claim 3, wherein preprocessing the extracted IPPG signal using the wavelet transform algorithm and the band-pass filter specifically comprises the following steps:
S31, performing a six-level wavelet decomposition of the extracted IPPG signal;
S32, reconstructing the wavelet-transformed IPPG signal to remove baseline drift;
S33, filtering the reconstructed IPPG signal with a band-pass filter to remove noise.
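A sketch of the S31–S33 preprocessing, under stated assumptions: the claim removes baseline drift via a six-level wavelet decomposition and reconstruction (e.g. PyWavelets' `wavedec`/`waverec`), for which a moving-average detrend stands in below; the 0.7–4.0 Hz pass band (roughly 42–240 bpm) and the 30 fps sampling rate are assumed heart-rate parameters, not taken from the patent:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_ippg(signal, fs=30.0, low=0.7, high=4.0):
    """Baseline removal followed by band-pass filtering (S31-S33 sketch)."""
    # S31/S32 stand-in: subtract a slow moving-average baseline (~2 s window)
    win = int(fs * 2)
    kernel = np.ones(win) / win
    baseline = np.convolve(signal, kernel, mode="same")
    detrended = signal - baseline
    # S33: zero-phase Butterworth band-pass around the heart-rate band
    b, a = butter(3, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, detrended)
```

Zero-phase filtering (`filtfilt`) is chosen so the pulse peaks are not shifted in time, which matters if the cleaned signal is later used for heart-rate estimation.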
5. The method for automatically identifying effective IPPG signals based on deep learning according to claim 4, wherein in step S4, selecting effective IPPG signals and typical erroneous IPPG signals from the preprocessed IPPG signals to build a data set and training a convolutional neural network on the data set specifically comprises:
selecting effective IPPG signals and typical erroneous IPPG signals by manual inspection, and assembling the selected effective IPPG signals, typical erroneous IPPG signals, and their corresponding labels into a data set that is input into a convolutional neural network model for training.
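A minimal sketch of the CNN classifier of step S4, written as a plain-NumPy forward pass so the shapes are explicit. The layer sizes and random weights are illustrative assumptions; the patent does not specify an architecture, and a real model would be trained with a deep-learning framework on the labeled effective/erroneous signal data set:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv1d_valid(x, kernels, bias):
    """Valid 1-D cross-correlation: x (L,), kernels (F, K) -> (F, L-K+1).

    np.convolve flips its second argument, so the kernel is reversed to
    get conv-layer (cross-correlation) semantics."""
    out = np.stack([np.convolve(x, k[::-1], mode="valid") for k in kernels])
    return out + bias[:, None]

def cnn_forward(signal, params):
    """conv -> ReLU -> global average pool -> dense -> sigmoid."""
    h = relu(conv1d_valid(signal, params["conv_w"], params["conv_b"]))
    pooled = h.mean(axis=1)                      # (F,)
    logit = pooled @ params["fc_w"] + params["fc_b"]
    return 1.0 / (1.0 + np.exp(-logit))          # score in (0, 1)

def init_params(n_filters=8, kernel_size=5, seed=0):
    rng = np.random.default_rng(seed)
    return {
        "conv_w": rng.standard_normal((n_filters, kernel_size)) * 0.1,
        "conv_b": np.zeros(n_filters),
        "fc_w": rng.standard_normal(n_filters) * 0.1,
        "fc_b": 0.0,
    }
```

Thresholding the sigmoid output (e.g. at 0.5) would give the effective/erroneous decision used in step S6.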
6. The method for automatically identifying effective IPPG signals based on deep learning according to claim 1, wherein in step S6, the acquired face video information is processed through steps S2 and S3 in sequence and then input into the CNN-based effective IPPG signal recognition model obtained in step S4 to complete the validity judgment of the IPPG signal, specifically: the acquired face video information is processed through steps S2 and S3 in sequence to obtain a preprocessed IPPG signal, the preprocessed IPPG signal is input into the CNN-based effective IPPG signal recognition model, and the model automatically identifies and stores the effective IPPG signals.
CN202210070616.0A 2022-01-21 2022-01-21 Automatic identification method of effective IPPG signal based on deep learning Pending CN115245318A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210070616.0A CN115245318A (en) 2022-01-21 2022-01-21 Automatic identification method of effective IPPG signal based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210070616.0A CN115245318A (en) 2022-01-21 2022-01-21 Automatic identification method of effective IPPG signal based on deep learning

Publications (1)

Publication Number Publication Date
CN115245318A true CN115245318A (en) 2022-10-28

Family

ID=83697815

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210070616.0A Pending CN115245318A (en) 2022-01-21 2022-01-21 Automatic identification method of effective IPPG signal based on deep learning

Country Status (1)

Country Link
CN (1) CN115245318A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118051809A (en) * 2024-04-15 2024-05-17 长春理工大学 Non-contact state identification method based on multi-feature fusion


Similar Documents

Publication Publication Date Title
Wang et al. A comparative survey of methods for remote heart rate detection from frontal face videos
CN110269600B (en) Non-contact video heart rate detection method based on multivariate empirical mode decomposition and combined blind source separation
CN105147274B (en) A kind of method that heart rate is extracted in the face video signal from visible spectrum
JP6521845B2 (en) Device and method for measuring periodic fluctuation linked to heart beat
CN109350030B (en) System and method for processing human face video heart rate signal based on phase amplification
CN106073742A (en) A kind of blood pressure measuring system and method
Feng et al. Motion artifacts suppression for remote imaging photoplethysmography
Przybyło A deep learning approach for remote heart rate estimation
US11701015B2 (en) Computer-implemented method and system for direct photoplethysmography (PPG) with multiple sensors
Kossack et al. Automatic region-based heart rate measurement using remote photoplethysmography
CN112200099A (en) Video-based dynamic heart rate detection method
Cho et al. Reduction of motion artifacts from remote photoplethysmography using adaptive noise cancellation and modified HSI model
CN113591769B (en) Non-contact heart rate detection method based on photoplethysmography
CN115245318A (en) Automatic identification method of effective IPPG signal based on deep learning
CN111050638B (en) Computer-implemented method and system for contact photoplethysmography (PPG)
CN114387479A (en) Non-contact heart rate measurement method and system based on face video
Hu et al. Illumination robust heart-rate extraction from single-wavelength infrared camera using spatial-channel expansion
Comas et al. Turnip: Time-series U-Net with recurrence for NIR imaging PPG
Hu et al. Study on Real-Time Heart Rate Detection Based on Multi-People.
Sahin et al. Non-contact heart rate monitoring from face video utilizing color intensity
CN113576475B (en) Deep learning-based contactless blood glucose measurement method
Ben Salah et al. Contactless heart rate estimation from facial video using skin detection and multi-resolution analysis
CN104688199A (en) Non-contact type pulse measurement method based on skin pigment concentration difference
CN112597949B (en) Psychological stress measuring method and system based on video
CN114092855A (en) Non-contact intelligent human body heart rate prediction method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination