CN112617835B - Multi-feature fusion fatigue detection method based on transfer learning - Google Patents
Multi-feature fusion fatigue detection method based on transfer learning Download PDFInfo
- Publication number
- CN112617835B CN202011492334.7A CN202011492334A
- Authority
- CN
- China
- Prior art keywords
- data
- volunteer
- fatigue
- electroencephalogram
- electrocardio
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Images
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/18—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state for vehicle drivers or machine operators
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/162—Testing reaction times
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2503/00—Evaluating a particular growth phase or type of persons or animals
- A61B2503/20—Workers
- A61B2503/22—Motor vehicles operators, e.g. drivers, pilots, captains
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Public Health (AREA)
- Molecular Biology (AREA)
- Veterinary Medicine (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Animal Behavior & Ethology (AREA)
- Surgery (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Child & Adolescent Psychology (AREA)
- Psychology (AREA)
- Social Psychology (AREA)
- Hospice & Palliative Care (AREA)
- Educational Technology (AREA)
- Developmental Disabilities (AREA)
- Mathematical Physics (AREA)
- Fuzzy Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physiology (AREA)
- Evolutionary Computation (AREA)
- Signal Processing (AREA)
- Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The multi-feature fusion fatigue detection method based on transfer learning improves on existing fatigue detection methods based on a single physiological feature: it acquires the electroencephalogram, electrocardio and electrooculogram signals closest to the essence of the fatigue state and fuses facial image features, further improving the model recognition rate. Models are trained separately on the 4 kinds of sensor data and combined by decision-level fusion with a weighted average, so the method retains a degree of robustness when a sensor fails. The invention also introduces a transfer learning strategy, reducing the influence of individual differences between drivers on the stability of the fatigue detection model.
Description
Technical Field
The invention relates to the field of safe driving of automobiles, in particular to a multi-feature fusion fatigue detection method based on transfer learning.
Background
Fatigue driving is a major cause of traffic accidents, and countless accidents are attributed to it every year. Enterprises and research institutions at home and abroad have therefore taken up research on driving fatigue detection. Current fatigue detection methods fall into three main categories:
1. Detection methods based on the behavior of the running vehicle, such as steering wheel deflection angle, steering wheel acceleration, steering wheel grip force, lateral position of the vehicle, and changes in driving speed. Although vehicle behavior features are easy to obtain and do not interfere with the driver's operation, they are affected by vehicle type, driving habits and road conditions, so fatigue detection models built on them often struggle to produce stable results under different conditions.
2. Detection methods based on the driver's facial image features, such as head position, eye behavior and mouth state. Eye features are particularly important indicators of fatigue: after a driver becomes fatigued, blink frequency drops, eye-closure time increases markedly compared with the normal state, eye-open time decreases, and the degree of eye opening also shrinks; in deep fatigue the eyes may even remain closed for long stretches. Facial image features, especially eye features, therefore reflect the driver's state well.
3. Detection methods based on the driver's physiological signals, such as electroencephalogram (EEG), electrocardiogram (ECG), electromyogram (EMG) and electrooculogram (EOG). Physiological indicators are regarded as the most accurate and reliable basis for fatigue detection, and the electroencephalogram in particular is known as the "gold standard"; processing and analyzing these indicators allows the driver's fatigue state to be detected with high precision.
Many fatigue detection methods based on single features or multi-feature fusion have been proposed in recent years, but they do not account for the effect of individual differences between drivers on model performance in practice.
Disclosure of Invention
To address the low accuracy of single-feature fatigue detection methods and the effect of individual differences between drivers on the stability of fatigue detection models, the invention provides a fatigue detection method that fuses electroencephalogram, electrocardio, electrooculogram and facial image features and combines them with a transfer learning strategy.
The invention discloses a multi-feature fusion fatigue detection method based on transfer learning, which comprises the following steps of:
step 1: selecting a plurality of volunteers;
step 2: carrying out laboratory simulation driving on each volunteer, acquiring real-time electroencephalogram, electrocardio, electrooculogram and facial image signals, and carrying out reaction time test on each volunteer at intervals to finish data acquisition;
Step 3: dividing the electroencephalogram, electrocardio, electrooculogram and facial image signals of each volunteer according to time windows, extracting features from each, and setting labels according to the corresponding reaction time; the data and labels of all volunteers form a labeled data set D_s = {x_1, x_2, x_3, …, x_n}, Y = {Y_1, Y_2, Y_3, …, Y_n}, where x_i represents the feature data of the i-th volunteer and Y_i represents the state label data of the i-th volunteer;
Step 4: the driver performs data pre-collection and feature extraction to obtain the driver feature data x_tq and the corresponding status labels; the maximum mean discrepancy between x_tq and the data of each volunteer in the data set D_s of step 3 is calculated, and the m volunteers whose data have the smallest maximum mean discrepancy from the driver's physiological data are screened out;
Step 5: using the labeled physiological data of the m volunteers obtained in step 4 and the driver feature data x_tq of step 4, a transfer learning model based on a deep autoencoder (TLDA) is trained separately on each volunteer's electroencephalogram, electrocardio, electrooculogram and facial image data (4 models per volunteer, m × 4 TLDA models in total). The driver's feature data is input into the trained models to obtain each TLDA model's evaluation result P(y_ij) for the driver's fatigue state, where P(y_ij) is the probability that the TLDA model for the j-th sensor data of the i-th volunteer outputs "fatigued";
Step 6: the outputs of the electroencephalogram, electrocardio, electrooculogram and facial image models of each volunteer in step 5 are combined with a weighted average to obtain that volunteer's fused evaluation result P(y_i), and the conditional probability P(y_i|Y) is calculated, where P(y_i|Y) represents the probability that the i-th volunteer's fused model outputs "fatigued" given whether the true label is fatigued;
Step 7: the fused evaluation results P(y_i) from step 6 and the conditional probabilities P(y_i|Y) are used to calculate the final evaluation result Y′.
Further, in step 2, the electroencephalogram, electrocardio and electrooculogram signals are collected at a sampling frequency of 512 Hz, and a facial video of the subject is recorded at 30 fps.
Further, in step 3, the electroencephalogram signal is processed as follows: wavelet threshold denoising is applied first, the alpha, beta and theta waves are then obtained by wavelet decomposition, and the energy, sample entropy and combined ratio features of each frequency band are calculated as the electroencephalogram features.
Further, the electrocardio signal is processed as follows: the R wave peak points are marked first, the R-R intervals are calculated, and the R-R interval mean, the R-R interval standard deviation and the proportion of R-R intervals greater than 50 ms in the total number of R-R intervals are then derived from the R-R intervals as the electrocardio features.
Further, the electrooculogram signal is processed as follows: the peak and the left and right zero points of each blink are located; the eye-closing duration and eye-opening duration of each blink, the average blink duration within the time window, the blink frequency, and the combined feature PAVR, the ratio of the maximum amplitude of the electrooculogram signal during each blink to the blink duration, are then calculated.
Further, the facial image is processed as follows: the CLM localization model marks the human eyes and the upper-lower eyelid distance is obtained to calculate the eye feature PERCLOS, the ratio of the time during which the eyelid distance is below 30% of the eye-open distance to the total time window length; the corresponding reaction time serves as the label.
The beneficial effects of the invention are as follows: the invention adds decision-level fusion of multiple physiological features and a transfer learning strategy. Fusing the decisions of multiple physiological features improves the accuracy of fatigue detection and the robustness of the model, while the transfer learning strategy effectively reduces the influence of individual differences between drivers on the model's evaluation, giving the method stronger stability.
Drawings
In order that the present invention may be more readily and clearly understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings.
FIG. 1 is a flow chart of data acquisition for the method of the present invention;
FIG. 2 is an eye landmark position of a facial feature;
FIG. 3 is a flow chart of multi-feature decision fusion of the method of the present invention;
fig. 4 is a flowchart of a transfer learning strategy of the method of the present invention.
Detailed Description
As shown in fig. 1-4, the multi-feature fusion fatigue detection method based on transfer learning according to the present invention includes the following steps:
Step 1: 20 volunteers of different ages and different professions, each with a driving age of more than 1 year, are selected, 10 male and 10 female;
the 20 volunteers in step 1 must cover three age groups 20-29,30-39,40-49 while ensuring that all volunteers are engaged in different industries and 10 men and women each.
Step 2: each volunteer performs laboratory simulated driving while real-time electroencephalogram, electrocardio, electrooculogram and facial image signals are collected; a reaction time test is performed every 10 seconds; each session lasts 10 minutes, and the interval between sessions is not less than 24 hours;
In step 2, the electroencephalogram, electrocardio and electrooculogram signals are collected at a sampling frequency of 512 Hz and a facial video of the subject is recorded at 30 fps. The electroencephalogram sampling electrodes are Fz (midline frontal), Cz (midline central), C3 (left-hemisphere central), C4 (right-hemisphere central) and Pz (midline parietal). The electrooculogram signals comprise the horizontal channel EOG-H and the vertical channel EOG-V. The facial image is a single-channel grayscale image at 512×424 resolution and a frame rate of 30 fps.
The reaction time is measured by displaying a button on the computer screen in front of the subject every 10 seconds, recording the time t_s at which the button appears and the time t_e at which the subject presses it; the reaction time is t = t_e − t_s. Since the minimum reaction time a driver needs before starting to brake is 0.4 seconds, and in a traffic accident the braking effect takes at least a further 0.3 seconds to appear, 0.7 seconds in total, a reaction time of 0.7 seconds is taken as the threshold between wakefulness and fatigue: more than 0.7 seconds is labeled fatigued, less than 0.7 seconds awake. Each session lasts 10 minutes, and each volunteer completes 3 sessions in different time periods, with each interval between sessions not less than 24 hours.
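As a concrete illustration of this labeling rule, the thresholding can be sketched as follows (a minimal sketch; the function name and the hard 0/1 labels are illustrative, only the 0.7 s threshold comes from the description):

```python
def label_reaction_time(t_stimulus, t_press, threshold=0.7):
    """Label one reaction-time probe: 1 = fatigued, 0 = awake.

    t_stimulus: moment the button appears on screen (t_s, seconds)
    t_press:    moment the subject presses it (t_e, seconds)
    threshold:  0.4 s minimum pre-braking reaction + 0.3 s for the
                braking effect to appear = 0.7 s in total
    """
    reaction_time = t_press - t_stimulus  # t = t_e - t_s
    return 1 if reaction_time > threshold else 0
```

Applied per 10-second window, this yields the binary state labels Y_i used for the data set.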
Step 3: the EEG, ECG, EOG and facial image signals of each session are divided into 10-second time windows, features are extracted from each, and the corresponding reaction time is used as the label; the data and labels of all volunteers form the labeled data set D_s = {x_1, x_2, x_3, …, x_n}, Y = {Y_1, Y_2, Y_3, …, Y_n}, where x_i represents the feature data of the i-th volunteer and Y_i represents the state label data of the i-th volunteer.
The feature extraction and feature selection of the invention are shown in FIG. 1. In step 3, the electroencephalogram signal is first processed with wavelet threshold denoising, the alpha, beta and theta waves are then obtained by wavelet decomposition, and the band energies E_α, E_β, E_θ and sample entropies SE_α, SE_β, SE_θ are calculated, together with the combined ratios F_θ/β and F_(θ+α)/β as features, where

F_θ/β = E_θ / E_β,   F_(θ+α)/β = (E_θ + E_α) / E_β,

and E_α, E_β, E_θ denote the energies of the alpha, beta and theta waves, respectively.
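The ratio features can be computed as in the following sketch. Note this is illustrative: it substitutes FFT band power for the patent's wavelet decomposition and omits denoising and sample entropy; the band limits (theta 4-8 Hz, alpha 8-13 Hz, beta 13-30 Hz) are conventional assumptions, not taken from the patent.

```python
import numpy as np

def eeg_ratio_features(signal, fs=512):
    """Band energies and the combined ratios F_theta/beta and F_(theta+alpha)/beta.

    FFT band power stands in for the patent's wavelet decomposition;
    conventional band limits: theta 4-8 Hz, alpha 8-13 Hz, beta 13-30 Hz.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2

    def band_energy(lo, hi):
        return power[(freqs >= lo) & (freqs < hi)].sum()

    e_theta = band_energy(4, 8)
    e_alpha = band_energy(8, 13)
    e_beta = band_energy(13, 30)
    return {
        "F_theta/beta": e_theta / e_beta,
        "F_(theta+alpha)/beta": (e_theta + e_alpha) / e_beta,
    }
```

A rising theta-to-beta ratio is the classic electroencephalographic marker of drowsiness, which is why these ratios are chosen as fatigue features.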
the processing of the electrocardiosignal is carried out by marking the peak point of the R wave and then calculating the R-R interval
RR_i = R_{i+1} − R_i

where R_{i+1} and R_i are the time stamps of the (i+1)-th and the i-th R wave peak, respectively.
The following features are calculated from the R-R intervals: the mean of the R-R intervals, the standard deviation of the R-R intervals, and the proportion of R-R intervals greater than 50 ms in the total number of R-R intervals (PNN50).
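These R-R features can be sketched as follows. The PNN50 here follows the standard definition (fraction of successive R-R differences exceeding 50 ms), since the patent's literal wording, "R-R intervals larger than 50 ms", would be satisfied by nearly every interval in practice; the function name is illustrative.

```python
import numpy as np

def hrv_features(r_peak_times):
    """Electrocardio features from R wave peak timestamps (seconds).

    RR_i = R_{i+1} - R_i; PNN50 uses the standard definition: the share
    of successive RR differences whose magnitude exceeds 50 ms.
    """
    rr = np.diff(np.asarray(r_peak_times, dtype=float))  # R-R intervals (s)
    succ = np.abs(np.diff(rr))                           # successive differences
    return {
        "mean_rr": float(rr.mean()),
        "sdnn": float(rr.std(ddof=1)),                   # R-R standard deviation
        "pnn50": float((succ > 0.05).mean()) if succ.size else 0.0,
    }
```

For example, peaks at 0.0, 0.8, 1.6, 2.5 and 3.3 s yield a mean R-R interval of 0.825 s.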
The electrooculogram signal is processed by first locating the peak and the left and right zero points of each blink, then calculating the eye-closing duration and eye-opening duration of each blink, the average blink duration within the time window, and the blink frequency. The combined feature PAVR is also calculated:

PAVR = A_max / T_blink,

where A_max is the maximum amplitude of the electrooculogram signal during each blink and T_blink is the blink duration.
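A minimal sketch of the PAVR computation for a single blink segment (the function name, and the assumption that the segment spans one blink from the left zero point to the right zero point, are illustrative):

```python
import numpy as np

def pavr(eog_blink_segment, fs=512):
    """Combined EOG feature PAVR for one blink: maximum signal amplitude
    during the blink divided by the blink duration.

    eog_blink_segment: samples covering one blink, assumed to run from
    the left zero point to the right zero point found by the peak search.
    """
    blink_duration = len(eog_blink_segment) / fs        # seconds
    peak_amplitude = np.max(np.abs(eog_blink_segment))  # maximum amplitude
    return peak_amplitude / blink_duration
```

Because fatigued blinks are slower, the denominator grows and PAVR falls, making it a compact single-number blink descriptor.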
For the facial image signal, the human eyes are marked with the face feature point localization model CLM; the eye contour is marked with 6 points as shown in FIG. 2, namely 2 at the eye corners, 2 on the upper eyelid and 2 on the lower eyelid. The upper-lower eyelid distance d is calculated, and the feature PERCLOS is obtained:

PERCLOS = T(d < 0.3 · d_open) / T_window,

where d_open is the eyelid distance in the fully open state and T_window is the total time window length.
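The PERCLOS computation over a time window can be sketched as follows (illustrative; the calibration value for the fully-open eyelid distance is an assumption, since the patent does not specify how the eye-open baseline is obtained):

```python
import numpy as np

def perclos(eyelid_distances, open_distance, closed_fraction=0.3):
    """PERCLOS over one time window: the fraction of frames in which the
    upper-lower eyelid distance d is below 30% of the fully-open distance.

    eyelid_distances: per-frame distances d from the 6 CLM eye landmarks
    open_distance:    the subject's fully-open eyelid distance (a calibration
                      value; how it is obtained is not specified in the patent)
    """
    d = np.asarray(eyelid_distances, dtype=float)
    return float((d < closed_fraction * open_distance).mean())
```

A window in which the eyes are mostly closed thus yields a PERCLOS close to 1, the condition associated with deep fatigue in the Background section.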
The extracted EEG, ECG, EOG and facial image features are packaged into the model input data D_s = {x_1, x_2, x_3, …, x_n}.
Step 4: when the model is used, the driver first performs data pre-collection and feature extraction to obtain the driver feature data x_tq; the maximum mean discrepancy between x_tq and the data of each volunteer in the data set D_s of step 3 is calculated, and the m volunteers whose data have the smallest maximum mean discrepancy from the driver's physiological data are screened out.
In step 4, the maximum mean discrepancy (MMD) is calculated between the driver's pre-collected data x_tq = {z_1, z_2, z_3, …, z_m} and the volunteer input feature data D_s = {x_1, x_2, x_3, …, x_n} collected in the earlier experiments, using the formula

MMD(x_tq, x_i) = ‖ (1/m) Σ_j φ(z_j) − (1/n_i) Σ_k φ(x_ik) ‖_H,

where φ denotes a mapping χ → H from the original space to a Hilbert space, realized through the kernel K(x, x′). The Gaussian kernel used in the invention is

K(x, x′) = exp(−‖x − x′‖² / (2σ²)),

where σ is a hyper-parameter of the kernel function that defines the characteristic length scale of similarity between samples (under a weight-space view, the ratio between sample distances before and after the feature space mapping), and x and x′ are the inputs of the kernel function.
The resulting MMD values are sorted, and the m volunteers with the smallest MMD values, i.e. the smallest difference from the distribution of the driver's physiological data, are screened out.
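The screening step can be sketched with an empirical estimate of the (squared) MMD under the Gaussian kernel (a sketch, not the patent's implementation; the value of sigma and the sample sizes are illustrative):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """K(x, x') = exp(-||x - x'||^2 / (2 * sigma^2)) for all pairs of rows."""
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(x, z, sigma=1.0):
    """Empirical squared maximum mean discrepancy between samples x and z."""
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(z, z, sigma).mean()
            - 2.0 * gaussian_kernel(x, z, sigma).mean())

def screen_volunteers(driver_data, volunteer_data, m, sigma=1.0):
    """Indices of the m volunteers whose data is closest (smallest MMD)
    to the driver's pre-collected feature data."""
    scores = [mmd2(driver_data, v, sigma) for v in volunteer_data]
    return sorted(np.argsort(scores)[:m].tolist())
```

Volunteers whose feature distribution resembles the driver's produce a small MMD and are retained as source domains for the transfer learning step that follows.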
Step 5: using the labeled physiological data of the m volunteers obtained in step 4 and x_tq from step 4, a transfer learning model based on a deep autoencoder (TLDA) is trained separately on each volunteer's electroencephalogram, electrocardio, electrooculogram and facial image data, m × 4 TLDA models in total, and each TLDA model's evaluation result P(y_ij) for the driver's fatigue state is obtained, where P(y_ij) is the probability that the TLDA model for the j-th sensor data of the i-th volunteer outputs "fatigued".
In step 5, to give the model a degree of robustness, i.e. to ensure it can still output a result after one sensor fails, 4 TLDA transfer learning models are trained on each volunteer's electroencephalogram, electrocardio, electrooculogram and facial image data, respectively.
The TLDA model is trained by feeding the screened volunteer data together with the target domain data through a deep autoencoder, with the KL divergence between the source domain and target domain data in the hidden-layer feature space added to the autoencoder's optimization objective, so that the encoding and decoding process yields higher-level, more abstract features and reduces the difference in data distribution between domains.
The output layer of each TLDA model uses Softmax regression, so that each TLDA model produces an evaluation P(y_ij) of the driver's fatigue state, the probability that the TLDA model for the j-th sensor data of the i-th volunteer judges the driver fatigued.
Step 6: the evaluation results of the electroencephalogram, electrocardio, electrooculogram and facial image models of each volunteer from step 5 are combined by decision-level fusion to output that volunteer's fused evaluation result P(y_i), and the conditional probability P(y_i|Y) is calculated.
The flow of step 6 is shown in FIG. 3. To keep the 4 sensors robust in operation, that is, so the model can still perform fatigue detection normally when one sensor fails, the invention uses a weighted average to combine the outputs of each volunteer's 4 models (electroencephalogram, electrocardio, electrooculogram and facial image) into that volunteer's final judgment P(y_i), and calculates the conditional probability P(y_i|Y), the probability that the i-th volunteer's model outputs "fatigued" given whether the true label is fatigued.
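The weighted-average fusion with tolerance to a failed sensor can be sketched as follows (illustrative; the patent does not specify the weights, so equal weights are the default here, and a failed sensor is modeled as a missing output):

```python
def fuse_sensor_outputs(probs, weights=None):
    """Decision-level fusion of one volunteer's 4 sensor-model outputs.

    probs:   fatigue probabilities P(y_ij) from the EEG / ECG / EOG / face
             models; None marks a failed sensor and is simply skipped, so a
             single sensor failure does not break the detector.
    weights: optional per-sensor weights (default: equal weights, i.e. a
             plain average of the surviving outputs).
    """
    if weights is None:
        weights = [1.0] * len(probs)
    pairs = [(p, w) for p, w in zip(probs, weights) if p is not None]
    total_weight = sum(w for _, w in pairs)
    return sum(p * w for p, w in pairs) / total_weight
```

Dropping a failed sensor and renormalizing the weights is what lets the fused judgment P(y_i) remain defined when only three of the four streams are available.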
And 7: final output assessment result P (y) for each volunteer using step 6 i ) And conditional probability P (y) i Y) calculating the final evaluation result
In step 7, because the fused evaluation outputs of the volunteers' models are mutually independent, a posterior probability can be obtained with the Bayesian formula; this posterior probability is used as the final evaluation result Y′ fusing the m volunteer models.
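One standard instantiation of this Bayesian fusion, assuming the m volunteer models are conditionally independent, is sketched below (the exact formula is not given in the patent; hard 0/1 model decisions and a uniform prior are assumptions of this sketch):

```python
def bayes_fuse(decisions, p_out_given_fatigue, p_out_given_awake, prior=0.5):
    """Posterior P(fatigue | m model outputs) via naive-Bayes fusion.

    decisions:           hard outputs y_i in {0, 1} of the m volunteer models
    p_out_given_fatigue: P(y_i = 1 | driver fatigued), i.e. P(y_i|Y) for Y = fatigued
    p_out_given_awake:   P(y_i = 1 | driver awake)
    prior:               prior probability of fatigue (uniform by assumption)
    """
    like_fatigue, like_awake = prior, 1.0 - prior
    for y, pf, pa in zip(decisions, p_out_given_fatigue, p_out_given_awake):
        like_fatigue *= pf if y else (1.0 - pf)
        like_awake *= pa if y else (1.0 - pa)
    return like_fatigue / (like_fatigue + like_awake)
```

With two models that both vote "fatigued" and per-model reliabilities P(y_i = 1 | fatigued) = 0.9 and P(y_i = 1 | awake) = 0.2, the posterior rises to about 0.95, illustrating how agreement between the volunteer models sharpens the final decision Y′.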
The invention improves on existing single-physiological-feature fatigue detection methods: it acquires the electroencephalogram, electrocardio and electrooculogram signals closest to the essence of the fatigue state and fuses facial image features, further raising the recognition rate; it trains models separately on the 4 kinds of sensor data and merges them with weighted-average decision-level fusion, keeping the method robust when a sensor fails; and it introduces a transfer learning strategy that reduces the influence of individual differences between drivers on the stability of the fatigue detection model.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention further, and all equivalent variations made by using the contents of the present specification and the drawings are within the scope of the present invention.
Claims (6)
1. A multi-feature fusion fatigue detection method based on transfer learning comprises the following steps:
step 1: selecting a plurality of volunteers;
step 2: carrying out laboratory simulation driving on each volunteer, acquiring real-time electroencephalogram, electrocardio, electrooculogram and facial image signals, and carrying out reaction time test on each volunteer at intervals to finish data acquisition;
Step 3: dividing the electroencephalogram, electrocardio, electrooculogram and facial image signals of each volunteer according to time windows, extracting features from each, and setting labels according to the corresponding reaction time, the data and labels of all volunteers forming a labeled data set D_s = {x_1, x_2, x_3, …, x_i, …, x_n}, Y = {Y_1, Y_2, Y_3, …, Y_i, …, Y_n}, where x_i represents the feature data of the i-th volunteer and Y_i represents the state label data of the i-th volunteer;
Step 4: performing data pre-collection and feature extraction on the driver to obtain the driver feature data x_tq, calculating the maximum mean discrepancy between x_tq and the data of each volunteer in the data set D_s of step 3, and screening out the m volunteers whose data have the smallest maximum mean discrepancy from the driver's physiological data;
Step 5: using the labeled physiological data of the m volunteers obtained in step 4 and the driver feature data x_tq of step 4, training a transfer learning model based on a deep autoencoder (TLDA) separately on each volunteer's electroencephalogram, electrocardio, electrooculogram and facial image data, m × 4 TLDA models in total; inputting the driver feature data into the trained models to obtain each TLDA model's evaluation result P(y_ij) for the driver's fatigue state, where P(y_ij) is the probability that the TLDA model for the j-th sensor data of the i-th volunteer outputs "fatigued";
Step 6: combining the outputs of the electroencephalogram, electrocardio, electrooculogram and facial image models of each volunteer from step 5 with a weighted average to obtain the fused evaluation result P(y_i) of each volunteer, and calculating the conditional probability P(y_i|Y), where P(y_i|Y) represents the probability that the fused evaluation result P(y_i) of the i-th volunteer's TLDA models is "fatigued" given that the true label is fatigued;
2. The multi-feature fusion fatigue detection method based on transfer learning according to claim 1, wherein in step 2 the electroencephalogram, electrocardio and electrooculogram signals are collected at a sampling frequency of 512 Hz and a facial video of the subject is recorded at 30 fps.
3. The multi-feature fusion fatigue detection method based on transfer learning according to claim 1, wherein in step 3 the electroencephalogram signal is processed as follows: wavelet threshold denoising is applied first, the alpha, beta and theta waves are then obtained by wavelet decomposition, and the energy, sample entropy and combined ratio features of each frequency band are calculated as the electroencephalogram features.
4. The multi-feature fusion fatigue detection method based on transfer learning according to claim 1, wherein the electrocardio signal is processed as follows: the R wave peak points are marked first, the R-R intervals are calculated, and the R-R interval mean, the R-R interval standard deviation and the proportion of R-R intervals greater than 50 ms in the total number of R-R intervals are then derived from the R-R intervals as the electrocardio features.
5. The multi-feature fusion fatigue detection method based on transfer learning according to claim 1, wherein the electrooculogram signal is processed as follows: the peak and the left and right zero points of each blink are located first; the eye-closing duration and eye-opening duration of each blink, the average blink duration within the time window, the blink frequency, and the combined feature PAVR are then calculated, the combined feature PAVR being the ratio of the maximum amplitude of the electrooculogram signal during each blink to the blink duration.
6. The multi-feature fusion fatigue detection method based on transfer learning according to claim 1, wherein the facial image is processed as follows: the CLM localization model marks the human eyes and the upper-lower eyelid distance is obtained to calculate the eye feature PERCLOS, the ratio of the time during which the eyelid distance is below 30% of the eye-open distance to the total time window length; the corresponding reaction time serves as the label.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011492334.7A CN112617835B (en) | 2020-12-17 | 2020-12-17 | Multi-feature fusion fatigue detection method based on transfer learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011492334.7A CN112617835B (en) | 2020-12-17 | 2020-12-17 | Multi-feature fusion fatigue detection method based on transfer learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112617835A CN112617835A (en) | 2021-04-09 |
CN112617835B true CN112617835B (en) | 2022-12-13 |
Family
ID=75316231
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011492334.7A Active CN112617835B (en) | 2020-12-17 | 2020-12-17 | Multi-feature fusion fatigue detection method based on transfer learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112617835B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114224344B (en) * | 2021-12-31 | 2024-05-07 | 杭州电子科技大学 | Fatigue state real-time detection system based on EEG and migration learning |
CN114343661B (en) * | 2022-03-07 | 2022-05-27 | 西南交通大学 | Method, device and equipment for estimating reaction time of driver in high-speed rail and readable storage medium |
CN117079255B (en) * | 2023-10-17 | 2024-01-05 | 江西开放大学 | Fatigue driving detection method based on face recognition and voice interaction |
CN117290781A (en) * | 2023-10-24 | 2023-12-26 | 中汽研汽车检验中心(宁波)有限公司 | Driver KSS grade self-evaluation training method for DDAW system test |
CN117636488A (en) * | 2023-11-17 | 2024-03-01 | 中国科学院自动化研究所 | Multi-mode fusion learning ability assessment method and device and electronic equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110772268A (en) * | 2019-11-01 | 2020-02-11 | 哈尔滨理工大学 | Multimode electroencephalogram signal and 1DCNN migration driving fatigue state identification method |
WO2020226696A1 (en) * | 2019-12-05 | 2020-11-12 | Huawei Technologies Co. Ltd. | System and method of generating a video dataset with varying fatigue levels by transfer learning |
-
2020
- 2020-12-17 CN CN202011492334.7A patent/CN112617835B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110772268A (en) * | 2019-11-01 | 2020-02-11 | 哈尔滨理工大学 | Multimode electroencephalogram signal and 1DCNN migration driving fatigue state identification method |
WO2020226696A1 (en) * | 2019-12-05 | 2020-11-12 | Huawei Technologies Co. Ltd. | System and method of generating a video dataset with varying fatigue levels by transfer learning |
Non-Patent Citations (2)
Title |
---|
Cross-subject driver status detection from physiological signals based on hybrid feature selection and transfer learning; Lan-lan Chen et al.; Expert Systems with Applications; 2019-02-04; entire document *
Multisource domain adaptation and its application to early detection of fatigue; Rita Chattopadhyay; ACM Transactions on Knowledge Discovery from Data; 2012-12-31; vol. 6, no. 4; entire document *
Also Published As
Publication number | Publication date |
---|---|
CN112617835A (en) | 2021-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112617835B (en) | Multi-feature fusion fatigue detection method based on transfer learning | |
Picot et al. | Drowsiness detection based on visual signs: blinking analysis based on high frame rate video | |
CN112241658B (en) | Fatigue driving early warning method based on depth camera | |
CN109124625B (en) | Driver fatigue state level grading method | |
Ueno et al. | Development of drowsiness detection system | |
CN110859609B (en) | Multi-feature fusion fatigue driving detection method based on voice analysis | |
CN109460703A (en) | A kind of non-intrusion type fatigue driving recognition methods based on heart rate and facial characteristics | |
CN111753674A (en) | Fatigue driving detection and identification method based on deep learning | |
CN113743471B (en) | Driving evaluation method and system | |
Bittner et al. | Detecting of fatigue states of a car driver | |
CN114358194A (en) | Gesture tracking based detection method for abnormal limb behaviors of autism spectrum disorder | |
Liu et al. | A review of driver fatigue detection: Progress and prospect | |
CN113627740A (en) | Driving load evaluation model construction system and construction method | |
Wei et al. | Driver's mental workload classification using physiological, traffic flow and environmental factors | |
Singh et al. | Physical and physiological drowsiness detection methods | |
CN110097012B (en) | Fatigue detection method for monitoring eye movement parameters based on N-range image processing algorithm | |
Dehzangi et al. | Unobtrusive driver drowsiness prediction using driving behavior from vehicular sensors | |
Ukwuoma et al. | Deep learning review on drivers drowsiness detection | |
CN117272155A (en) | Intelligent watch-based driver road anger disease detection method | |
Wang et al. | A fatigue driving detection method based on deep learning and image processing | |
Chen et al. | Deep learning approach for detection of unfavorable driving state based on multiple phase synchronization between multi-channel EEG signals | |
CN116955943A (en) | Driving distraction state identification method based on eye movement sequence space-time semantic feature analysis | |
CN111281382A (en) | Feature extraction and classification method based on electroencephalogram signals | |
Haupt et al. | Steering wheel motion analysis for detection of the driver’s drowsiness | |
CN115736920A (en) | Depression state identification method and system based on bimodal fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |