CN112507930B - Method for improving human face video heart rate detection by utilizing illumination equalization method

Method for improving human face video heart rate detection by utilizing illumination equalization method

Info

Publication number
CN112507930B
CN112507930B (application CN202011489672.5A)
Authority
CN
China
Prior art keywords
image
heart rate
face
illumination
face video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011489672.5A
Other languages
Chinese (zh)
Other versions
CN112507930A (en)
Inventor
谢巍
吴少文
魏金湖
周延
陈定权
许练濠
卢永辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202011489672.5A
Publication of CN112507930A
Application granted
Publication of CN112507930B
Active legal status
Anticipated expiration legal status

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/15: Biometric patterns based on physiological signals, e.g. heartbeat, blood flow
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/165: Detection; Localisation; Normalisation using facial parts and geometric relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for improving face video heart rate detection using an illumination equalization method, comprising the following steps: S1, acquiring a face video image with a visible-light camera; S2, detecting and locating the face with a multi-task convolutional neural network; S3, selecting a region of interest of the face video; S4, extracting the scene illumination component with a fast guided filtering algorithm, constructing an improved two-dimensional gamma function, and equalizing the illumination component of the face video image; S5, separating independent source signals from the mixed signals with the FastICA algorithm; S6, applying a fast Fourier transform to the independent source signals and calculating the heart rate value. The invention extracts the illumination component with a fast guided filtering algorithm and uses an improved two-dimensional gamma function to adaptively correct uneven illumination, improving the brightness of over-bright and over-dark areas of the face image; this reduces the average error and standard deviation of the measured heart rate values and improves measurement accuracy.

Description

Method for improving human face video heart rate detection by utilizing illumination equalization method
Technical Field
The invention relates to the technical field of image processing and non-contact heart rate detection, and in particular to a method for improving face video heart rate detection using an illumination equalization method.
Background
As modern living standards improve, people pay increasing attention to their physical health. Heart rate is one of the most important vital signs of the human body, so its detection matters more and more. Heart rate detection devices on the market have developed rapidly in recent years and offer small size and convenient measurement, but they require direct physical contact with the subject: the measurement procedure of a contact-sensor recording is cumbersome, may cause discomfort to patients, and is unsuitable for groups such as newborn infants. Non-contact heart rate detection based on the principle of photoplethysmography therefore has broad application prospects in the medical field and in household health care. At present, however, non-contact heart rate detection must be performed in a stable environment; factors such as lighting changes negatively affect the detection result.
Therefore, a method is needed that adaptively corrects uneven illumination in face images, reduces the noise caused by light fluctuation, and ensures the accuracy of the measurement results.
Disclosure of Invention
In order to solve the above technical problems in the prior art, the present invention provides a method for improving face video heart rate detection using an illumination equalization method: the illumination component is extracted with a fast guided filtering algorithm, and the brightness of over-bright and over-dark areas of the face image is improved with an improved two-dimensional gamma function that adaptively corrects uneven illumination, thereby reducing the average error and standard deviation of heart rate measurements and improving measurement precision.
The method is realized by the following technical scheme: a method for improving face video heart rate detection using an illumination equalization method comprises the following steps:
S1, acquiring a face video image with a visible-light camera;
S2, detecting and locating the face, eyes, nose, and mouth corners with a multi-task convolutional neural network;
S3, selecting a region of interest (ROI) of the face video image according to the location information of the face, eyes, nose, and mouth corners;
S4, decomposing each frame's ROI of the face video into the hue (H), saturation (S), and brightness (V) color space; for the brightness V channel, extracting the illumination component of the scene with a fast guided filtering algorithm, constructing an improved two-dimensional gamma function, and equalizing the illumination component of the face video image;
S5, performing blind source separation: separating independent source signals from the observed mixed signals of the R, G, and B color channels of each frame's ROI with the independent component analysis (FastICA) algorithm;
S6, applying a fast Fourier transform to the separated independent source signals; deducing the periodic variation of blood volume from the periodic variation of the skin-reflected light intensity, thereby obtaining heart rate information; selecting the independent source signal with the largest power spectrum amplitude as the pulse source signal; and calculating the heart rate value from the pulse source signal amplitude.
Compared with the prior art, the invention has the following advantages and beneficial effects:
according to the invention, the illumination component is extracted through the rapid guide filtering algorithm, the brightness of the over-bright and over-dark areas of the face image is improved by utilizing the improved light equalization method for adaptively correcting the illumination unevenness of the two-dimensional gamma function, the adjustment of the illumination unevenness is realized, the average error and standard deviation of heart rate measured values are reduced, and the measurement accuracy is improved.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a luminance histogram before illumination-equalization correction;
FIG. 3 is a luminance histogram after illumination-equalization correction;
FIG. 4 is the spectrum of the independent source signal with the strongest pulse-wave component.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but embodiments of the present invention are not limited thereto.
Examples
As shown in FIG. 1, the method for improving face video heart rate detection using an illumination equalization method comprises the following steps:
S1, acquiring a face video image with a visible-light camera; the subject stays still in front of the camera to reduce shaking, and the brightness of the acquisition environment is kept steady; the camera is a 12-megapixel high-definition camera with a maximum resolution of 1920 x 1080;
S2, detecting and locating the face, eyes, nose, and mouth corners in the face video image acquired in step S1 with a multi-task convolutional neural network (MTCNN);
S3, selecting, from the location information of the face, eyes, nose, and mouth corners obtained in step S2, a region of interest (ROI) of the face video image in which the raw pulse signal is strongly periodic and has little noise;
S4, decomposing each frame's ROI obtained in step S3 into the hue (H), saturation (S), and brightness (V) color space; for the brightness V channel, extracting the illumination component of the scene quickly and accurately with a fast guided filtering algorithm, and constructing an improved two-dimensional gamma function that lowers the brightness of over-bright areas and raises the brightness of over-dark areas, so as to equalize the illumination component of the face image and eliminate the influence of uneven illumination and light fluctuation;
S5, performing blind source separation: separating independent source signals from the observed mixed signals of the R, G, and B color channels of each frame's ROI with the independent component analysis (FastICA) algorithm;
S6, applying a fast Fourier transform to the independent source signals separated in step S5; according to the principle of photoplethysmography, the change in light intensity is proportional to the change in blood volume, so the periodic variation of blood volume can be deduced from the periodic variation of the skin-reflected light intensity, and heart rate information is obtained indirectly; the independent source signal with the largest power spectrum amplitude is selected as the pulse source signal, and the current heart rate value is calculated from its amplitude.
In this embodiment, the multi-task convolutional neural network of step S2 consists of three cascaded networks: P-Net, R-Net, and O-Net. P-Net is a region proposal network for face detection composed of three convolutional layers, which quickly generates candidate face windows. R-Net has one more fully connected layer than P-Net and further selects and adjusts the candidate face windows generated by P-Net. O-Net has a more complex structure, with one more convolutional layer than R-Net; it extracts more features to identify the face region, performs regression on the facial landmark points, and finally outputs them. The multi-task convolutional neural network balances face detection performance and accuracy, and avoids much of the performance cost of approaches such as a sliding window plus a classifier.
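The patent does not name a particular MTCNN implementation, so the sketch below uses the third-party facenet-pytorch package, whose detector happens to return exactly the five landmarks needed in step S3 (eyes, nose, mouth corners); the frame file name is hypothetical.

```python
from facenet_pytorch import MTCNN
from PIL import Image

# keep_all=False returns only the highest-confidence face per frame.
detector = MTCNN(keep_all=False)

frame = Image.open("frame_0001.png")  # hypothetical frame from the face video
# boxes: face bounding boxes; probs: detection confidences; points: five
# landmarks per face (left eye, right eye, nose, left/right mouth corners).
boxes, probs, points = detector.detect(frame, landmarks=True)
left_eye, right_eye, nose, mouth_left, mouth_right = points[0]
```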
In this embodiment, selecting the region of interest ROI of the face video in step S3 comprises the following steps (a code sketch follows these steps):
S31, when the video image is acquired the face may be tilted, so the deflection angle of the face must be corrected to obtain a standardized face; let the pixel coordinates of the left and right eyes be (x₁, y₁) and (x₂, y₂); if the face deflects to the left, i.e. y₁ > y₂, the deflection angle is:
α = −[arctan((y₁ − y₂)/(x₂ − x₁))/π]
if the face deflects to the right, i.e. y₂ > y₁, the deflection angle is:
α = [arctan((y₁ − y₂)/(x₂ − x₁))/π];
S32, after the deflection angle is obtained, the image is rotated by it so that the two eyes lie at the same horizontal position; with the distance between the eyes defined as 4d, a rectangular region 8d long and 3d wide, located 0.5d below the eyes in the face image, is selected as the region of interest.
In this embodiment, the specific step of extracting the illumination component of the scene by using the fast guided filtering algorithm in step S4 includes:
s401, using a fast guided filtering algorithm, and re-establishing the filtering window according to the local linear relation between the guided image and the input imageCalculating each pixel value, wherein the filtered output image is a local linear transformation of the guide image, and then extracting an illumination component; let p be the input image, I be the guide image, q be the filtered output image, and for any pixel point k in the image, at the filter window ω with r radius around it k The linear memory transformation relationship is as follows:
Figure BDA0002840375070000031
wherein ,ak and bk For linear transformation coefficients, in a filtering window omega k The middle is a constant; q i Outputting an image for the ith filter; i i For the ith guide image;
s402, utilizing a filter window omega k Calculating a linear transformation factor (a) k ,b k ) Obtaining a minimum difference value between the filtered output image q and the input image p; wherein, in the filtering window omega k The expression of the cost function used in (a) is:
Figure BDA0002840375070000041
wherein ,E(ak ,b k ) As a cost function; epsilon is the control linear transformation factor a k Solving a by linear regression method according to regularization parameters of the value range k and bk The optimal values of (2) are:
Figure BDA0002840375070000042
Figure BDA0002840375070000043
where ω is the filter window ω k The number of inner pixels; mu (mu) k For filtering window omega k The mean value of the middle guide image I; p is p i Is the ith input image; sigma (sigma) k For filtering window ωk k The variance of the middle guide image I; p is p k For filtering window omega k Is a picture of the input image;
Figure BDA0002840375070000047
for filtering window omega k Middle p k Is the average value of (2); due to a k and bk At different filter windows omega k The values may be different and the different filter windows omega k Will contain the same pixel point and take different filter windows omega centered on the pixel point k Inner a k and bk Mean value is taken as parameter to solve q i Is represented by the expression:
Figure BDA0002840375070000044
wherein ,ai For different filtering windows omega k Inner a k Is the average value of (2);
Figure BDA0002840375070000045
for different filtering windows omega k Inner b k Is a mean value of (c).
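The formulas above are those of the standard guided filter, in which every window operation reduces to a box (mean) filter; the sketch below implements that plain form with OpenCV and NumPy, omitting the subsampling step that makes the patent's variant "fast". The radius r and regularizer eps in the usage comment are illustrative values, not from the patent.

```python
import cv2
import numpy as np

def guided_filter(I, p, r, eps):
    """Guided filter: q_i = a_bar_i * I_i + b_bar_i, with box-filter means."""
    mean = lambda x: cv2.boxFilter(x, cv2.CV_64F, (2 * r + 1, 2 * r + 1))
    mu = mean(I)                        # mu_k: window means of the guide image
    p_bar = mean(p)                     # p_bar_k: window means of the input image
    var_I = mean(I * I) - mu * mu       # sigma_k^2: window variance of I
    cov_Ip = mean(I * p) - mu * p_bar   # per-window covariance of I and p
    a = cov_Ip / (var_I + eps)          # optimal a_k from the regression above
    b = p_bar - a * mu                  # optimal b_k
    return mean(a) * I + mean(b)        # average a_k, b_k over overlapping windows

# Illumination extraction: the V channel guides itself (I = p = V), so the
# smoothed output is taken as the illumination component L of the scene.
# L = guided_filter(V, V, r=16, eps=0.01)
```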
In this embodiment, the specific steps of constructing the improved two-dimensional gamma function in step S4 include:
S411, using the illumination component of the face video image, adaptively correcting the unevenly illuminated image with an improved two-dimensional gamma function; the improved two-dimensional gamma function is expressed as:
O(x, y) = 255·[I(x, y)/255]^γ, γ = η^[(m − L(x, y))/m]
wherein I(x, y) is the brightness of the input image; O(x, y) is the output image; L(x, y) is the extracted illumination component value at the current pixel (x, y); γ is the gamma correction parameter, which determines the strength of the image enhancement; η is the illumination coefficient, set to η = m/255; and m is the mean value of the illumination map of the whole face image;
S412, the corrected brightness V channel, together with the unchanged hue H and saturation S channels, is converted back to the RGB color space through color space conversion.
As shown in FIGS. 2-3, the illumination component is obtained with the fast guided filtering algorithm and then corrected with the improved two-dimensional gamma function. The corrected histogram changes markedly in the brightest and darkest regions: the low-brightness areas of the original image are enhanced, the over-bright areas are reduced, and the whole histogram is concentrated around the middle brightness levels, effectively compensating for the uneven illumination.
In this embodiment, the specific step of separating the independent source signals in step S5 includes:
s51, respectively forming three groups of time sequences by using R, G, B three primary color channels of each frame region of interest (ROI) as observation mixed signals x 0 (t),x 1 (t),x 2 (t) providing three independent source signals s 0 (t),s 1 (t),s 2 (t) the observed mixed signal X (t) is a linear combination of the independent source signals S (t), i.e
X(t)=A·S(t)
Wherein A is a mixing matrix;
s52, the FastICA algorithm takes the negative entropy of the mixed signal as an objective function through observation, and the expression of the objective function is as follows:
J(W) = [E{G(WᵀZ)} − E{G(V)}]²
wherein J(W) is the objective function; G(·) is the nonlinear function G(u) = u³, so that G(V) = V³ and G(WᵀZ) = (WᵀZ)³; Z is the whitened observed mixed signal, Z = V·X(t), where V is the whitening matrix; W is the separation matrix and Wᵀ its transpose; and E{·} denotes the mathematical expectation;
s53, maximizing an objective function, solving a separation matrix W to enable the separation matrix W to be approximately equal to A -1 So that Y (t) =w×x (t) approximates the independent source signal S (t); where Y (t) is the independent source signal approximation signal.
In this embodiment, a fast Fourier transform (FFT) is applied to the separated independent source signals, with the formula:
F(w) = ∫_{−∞}^{+∞} f(t)·e^{−jwt} dt
wherein F(w) is the independent source signal in the frequency domain, f(t) is the independent source signal in the time domain, j is the imaginary unit, t is time, and w is the angular frequency.
As shown in FIG. 4, the current heart rate value is calculated from the peak of the signal spectrum; the heart rate formula is:
HR = F_max × 60
wherein HR is the heart rate value and F_max is the frequency corresponding to the maximum spectral peak.
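A sketch of the spectral step, with Y as produced by the FastICA sketch above; the 0.75-4 Hz (45-240 bpm) search band and the 30 fps frame rate are assumptions, not values from the patent.

```python
import numpy as np

def heart_rate_bpm(source, fps):
    """Power spectrum of one separated source; HR = F_max * 60."""
    power = np.abs(np.fft.rfft(source)) ** 2
    freqs = np.fft.rfftfreq(len(source), d=1.0 / fps)
    band = (freqs >= 0.75) & (freqs <= 4.0)      # plausible heart-rate band
    f_max = freqs[band][np.argmax(power[band])]  # frequency of the maximum peak
    return f_max * 60.0

# The pulse source is the component with the largest power-spectrum peak:
pulse = max(range(Y.shape[1]),
            key=lambda k: (np.abs(np.fft.rfft(Y[:, k])) ** 2).max())
hr = heart_rate_bpm(Y[:, pulse], fps=30.0)
```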
In this embodiment, the illumination component is extracted with the fast guided filtering algorithm and the improved two-dimensional gamma function is used to correct the uneven illumination of the face image. Five testers were selected, and each had their heart rate measured both without and with light-equalization correction; a reference heart rate value was obtained with a fingertip clip-on blood-oxygen heart rate monitor. The heart rate detection results without light-equalization correction are shown in Table 1, and the results with light-equalization correction are shown in Table 2:
table 1 heart rate detection results corrected with no light equalization scheme
Figure BDA0002840375070000061
Table 2 heart rate detection results corrected by the light equalization scheme
Figure BDA0002840375070000062
As can be seen from the measurement results in Tables 1 and 2, the average error and standard deviation of the heart rate detection results with the light-equalization scheme are markedly reduced, which shows that the scheme clearly improves heart rate measurement accuracy and verifies the effectiveness of the proposed method.
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto; any other change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and is included in the protection scope of the present invention.

Claims (5)

1. A method for improving face video heart rate detection using an illumination equalization method, characterized by comprising the following steps:
S1, acquiring a face video image with a visible-light camera;
S2, detecting and locating the face, eyes, nose, and mouth corners with a multi-task convolutional neural network;
S3, selecting a region of interest (ROI) of the face video image according to the location information of the face, eyes, nose, and mouth corners;
S4, decomposing each frame's ROI of the face video into the hue (H), saturation (S), and brightness (V) color space; for the brightness V channel, extracting the illumination component of the scene with a fast guided filtering algorithm, constructing an improved two-dimensional gamma function, and equalizing the illumination component of the face video image;
S5, performing blind source separation: separating independent source signals from the observed mixed signals of the R, G, and B color channels of each frame's ROI with the independent component analysis (FastICA) algorithm;
S6, applying a fast Fourier transform to the separated independent source signals; deducing the periodic variation of blood volume from the periodic variation of the skin-reflected light intensity, thereby obtaining heart rate information; selecting the independent source signal with the largest power spectrum amplitude as the pulse source signal; and calculating the heart rate value from the pulse source signal amplitude;
the specific step of extracting the illumination component of the scene with the fast guided filtering algorithm in step S4 comprises:
S401, using the fast guided filtering algorithm, every pixel value in the filter window is recomputed from the local linear relationship between the guide image and the input image, and the illumination component is extracted, wherein p is the input image, I is the guide image, q is the filtered output image, k is a pixel in the image, and ω_k is the filter window of radius r centred on pixel k; the following linear transformation relationship holds:
q_i = a_k·I_i + b_k, for all i ∈ ω_k
wherein a_k and b_k are the linear transformation coefficients and are constant within the filter window ω_k; q_i is the i-th pixel of the filtered output image; and I_i is the i-th pixel of the guide image;
S402, the linear transformation coefficients (a_k, b_k) are computed over the filter window ω_k so as to minimize the difference between the filtered output image q and the input image p; the cost function used in the filter window ω_k is:
E(a_k, b_k) = Σ_{i∈ω_k} [(a_k·I_i + b_k − p_i)² + ε·a_k²]
wherein E(a_k, b_k) is the cost function and ε is a regularization parameter that controls the range of the linear transformation coefficient a_k; solving by linear regression, the optimal values of a_k and b_k are:
a_k = [(1/|ω|)·Σ_{i∈ω_k} I_i·p_i − μ_k·p̄_k] / (σ_k² + ε)
b_k = p̄_k − a_k·μ_k
wherein |ω| is the number of pixels in the filter window ω_k; μ_k is the mean of the guide image I in ω_k; p_i is the i-th pixel of the input image; σ_k² is the variance of the guide image I in ω_k; and p̄_k is the mean of the input image p in ω_k; the means of a_k and b_k over the filter windows ω_k centred on the pixels within ω_k are taken as the parameters to solve for q_i:
q_i = ā_i·I_i + b̄_i
wherein ā_i is the mean of a_k over the different filter windows ω_k, and b̄_i is the mean of b_k over the different filter windows ω_k;
in step S6, a fast Fourier transform (FFT) is applied to the separated independent source signals, with the formula:
F(w) = ∫_{−∞}^{+∞} f(t)·e^{−jwt} dt
wherein F(w) is the independent source signal in the frequency domain, f(t) is the independent source signal in the time domain, j is the imaginary unit, t is time, and w is the angular frequency;
in step S6, the heart rate value is calculated from the pulse source signal amplitude with the formula:
HR = F_max × 60
wherein HR is the heart rate value and F_max is the frequency corresponding to the maximum spectral peak.
2. The method of claim 1, wherein the multi-task convolutional neural network in step S2 comprises a plurality of cascaded networks, with a network structure comprising P-Net, R-Net, and O-Net.
3. The method of face video heart rate detection according to claim 1, wherein selecting the region of interest ROI of the face video in step S3 comprises the following steps:
S31, correcting the face deflection angle, wherein the pixel coordinates of the left and right eyes are (x₁, y₁) and (x₂, y₂); if the face deflects to the left, i.e. y₁ > y₂, the deflection angle is:
α = −[arctan((y₁ − y₂)/(x₂ − x₁))/π]
if the face deflects to the right, i.e. y₂ > y₁, the deflection angle is:
α = [arctan((y₁ − y₂)/(x₂ − x₁))/π];
S32, after the deflection angle is obtained, the image is rotated by it so that the two eyes lie at the same horizontal position; with the distance between the eyes defined as 4d, a rectangular region 8d long and 3d wide, located 0.5d below the eyes in the face image, is selected as the region of interest.
4. The method of face video heart rate detection according to claim 1, wherein the specific steps of constructing the improved two-dimensional gamma function in step S4 comprise:
S411, using the illumination component of the face video image, adaptively correcting the video image with an improved two-dimensional gamma function, expressed as:
O(x, y) = 255·[I(x, y)/255]^γ, γ = η^[(m − L(x, y))/m]
wherein I(x, y) is the brightness of the input image; O(x, y) is the output image; L(x, y) is the extracted illumination component value at the current pixel (x, y); γ is the gamma correction parameter; η is the illumination coefficient; and m is the mean value of the illumination map of the whole face image;
S412, the corrected brightness V channel, together with the hue H and saturation S channels, is converted back to the RGB color space through color space conversion.
5. The method of face video heart rate detection according to claim 1, wherein the specific step of separating the independent source signals in step S5 comprises:
S51, three time series are formed from the R, G, and B color channels of each frame's ROI as the observed mixed signals x₀(t), x₁(t), x₂(t); with three independent source signals s₀(t), s₁(t), s₂(t), the observed mixed signal X(t) is a linear combination of the independent source signals S(t), specifically expressed as:
X(t) = A·S(t)
wherein A is the mixing matrix;
S52, the FastICA algorithm takes the negative entropy of the observed mixed signal as its objective function, expressed as:
J(W) = [E{G(WᵀZ)} − E{G(V)}]²
wherein J(W) is the objective function; G(·) is the nonlinear function G(u) = u³, so that G(V) = V³ and G(WᵀZ) = (WᵀZ)³; Z is the whitened observed mixed signal, Z = V·X(t), where V is the whitening matrix; W is the separation matrix and Wᵀ its transpose; and E{·} denotes the mathematical expectation;
S53, the objective function is maximized and the separation matrix W is solved.
CN202011489672.5A 2020-12-16 2020-12-16 Method for improving human face video heart rate detection by utilizing illumination equalization method Active CN112507930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011489672.5A CN112507930B (en) 2020-12-16 2020-12-16 Method for improving human face video heart rate detection by utilizing illumination equalization method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011489672.5A CN112507930B (en) 2020-12-16 2020-12-16 Method for improving human face video heart rate detection by utilizing illumination equalization method

Publications (2)

Publication Number Publication Date
CN112507930A CN112507930A (en) 2021-03-16
CN112507930B true CN112507930B (en) 2023-06-20

Family

ID=74972783

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011489672.5A Active CN112507930B (en) 2020-12-16 2020-12-16 Method for improving human face video heart rate detection by utilizing illumination equalization method

Country Status (1)

Country Link
CN (1) CN112507930B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255585B (en) * 2021-06-23 2021-11-19 之江实验室 Face video heart rate estimation method based on color space learning
CN116823677B (en) * 2023-08-28 2023-11-10 创新奇智(南京)科技有限公司 Image enhancement method and device, storage medium and electronic equipment
CN117455780B (en) * 2023-12-26 2024-04-09 广东欧谱曼迪科技股份有限公司 Enhancement method and device for dark field image of endoscope, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105989357A (en) * 2016-01-18 2016-10-05 合肥工业大学 Human face video processing-based heart rate detection method
CN110384491A (en) * 2019-08-21 2019-10-29 河南科技大学 A kind of heart rate detection method based on common camera
CN110532849A (en) * 2018-05-25 2019-12-03 快图有限公司 Multi-spectral image processing system for face detection
CN111027485A (en) * 2019-12-11 2020-04-17 南京邮电大学 Heart rate detection method based on face video detection and chrominance model
CN111936040A (en) * 2018-03-27 2020-11-13 皇家飞利浦有限公司 Device, system and method for extracting physiological information indicative of at least one vital sign of a subject


Also Published As

Publication number Publication date
CN112507930A (en) 2021-03-16

Similar Documents

Publication Publication Date Title
CN112507930B (en) Method for improving human face video heart rate detection by utilizing illumination equalization method
CN107735015B (en) Method and system for laser speckle imaging of tissue using a color image sensor
Zhang et al. Color correction and adaptive contrast enhancement for underwater image enhancement
Zhou et al. Color retinal image enhancement based on luminosity and contrast adjustment
CN105147274B (en) A kind of method that heart rate is extracted in the face video signal from visible spectrum
JP5856960B2 (en) Method and system for obtaining a first signal for analysis to characterize at least one periodic component of the first signal
CN104240194B (en) A kind of enhancement algorithm for low-illumination image based on parabolic function
CN111986120A (en) Low-illumination image enhancement optimization method based on frame accumulation and multi-scale Retinex
CN109325922A (en) A kind of image self-adapting enhancement method, device and image processing equipment
CN106780417A (en) A kind of Enhancement Method and system of uneven illumination image
CN106491117A (en) A kind of signal processing method and device based on PPG heart rate measurement technology
CN114972067A (en) X-ray small dental film image enhancement method
CN111444797A (en) Non-contact heart rate detection method
Zeng et al. High dynamic range infrared image compression and denoising
CN111667446B (en) image processing method
CN110674737B (en) Iris recognition enhancement method
CN108734674B (en) OCT image blind restoration method for improving NAS-RIF
CN110852977B (en) Image enhancement method for fusing edge gray level histogram and human eye visual perception characteristics
US20100061656A1 (en) Noise reduction of an image signal
CN115153473A (en) Non-contact heart rate detection method based on multivariate singular spectrum analysis
CN111076815B (en) Hyperspectral image non-uniformity correction method
JP6251272B2 (en) Fixed pattern noise reduction
CN110010228A (en) A kind of facial skin rendering algorithm based on image analysis
Lenka et al. A study on retinex theory and illumination effects–i
Ranjitham et al. A Study of anImproved Edge Detection Algorithm for MRI Brain Tumor Images Based on Image Quality Parameters

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant