CN109308445B - Information-fusion-based fatigue detection method for fixed-post personnel - Google Patents

Information-fusion-based fatigue detection method for fixed-post personnel

Info

Publication number
CN109308445B
CN109308445B (application CN201810823443.9A)
Authority
CN
China
Prior art keywords
face
input picture
image
mouth
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810823443.9A
Other languages
Chinese (zh)
Other versions
CN109308445A (en)
Inventor
朱伟
贺超
李嘉琦
杜瀚宇
王寿峰
马浩
白俊奇
苗锋
刘�文
张瑞全
王扬红
张禹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Lesi Electronic Equipment Co Ltd
Original Assignee
Nanjing Lesi Electronic Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Lesi Electronic Equipment Co Ltd filed Critical Nanjing Lesi Electronic Equipment Co Ltd
Priority to CN201810823443.9A priority Critical patent/CN109308445B/en
Publication of CN109308445A publication Critical patent/CN109308445A/en
Application granted granted Critical
Publication of CN109308445B publication Critical patent/CN109308445B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V 40/172 - Human faces: classification, e.g. identification
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/25 - Fusion techniques
    • G06V 20/46 - Extracting features or characteristics from video content, e.g. video fingerprints, representative shots or key frames
    • G06V 40/165 - Detection, localisation and normalisation of faces using facial parts and geometric relationships
    • G06V 40/171 - Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships


Abstract

The invention discloses an information-fusion-based fatigue detection method for fixed-post personnel, addressing fatigue-feature detection and decision-level information fusion under single-channel video. First, faces in the input image are calibrated with a regression-tree method, giving high-precision face detection. Next, eye and mouth feature points on the detected face are located with a cascaded-convolutional-neural-network feature point location technique, achieving accurate detection of the eye and mouth feature points; by analyzing these feature points over multiple video frames, the PERCLOS parameter, blink frequency, and yawn frequency are computed separately. The head pose is then estimated with a method based on ASM local positioning and the facial feature triangle, from which the nodding frequency is computed. Finally, an information fusion algorithm based on rough set theory realizes the fatigue detection of fixed-post personnel.

Description

Information-fusion-based fatigue detection method for fixed-post personnel
Technical field
The invention belongs to the technical field of image processing, and in particular relates to an information-fusion-based fatigue detection method for fixed-post personnel.
Background art
With the development of computer and artificial intelligence technology, more and more people are engaged in mentally demanding work, for example IT industry staff, drivers, and intelligence analysts. When facing dull, high-pressure, or high-responsibility work, they easily suffer from distracted attention and mental fatigue; working efficiency drops, and operating errors creating safety risks may even occur. Fatigue detection technology can effectively warn such operators of their fatigue state, so as to improve working efficiency and reduce safety risks.
Currently, most fatigue detection algorithms, and eye-fatigue feature detection algorithms in particular, compute the degree of eye closure by edge detection or threshold segmentation. Although these algorithms are computationally simple, they are strongly affected by illumination and their robustness is poor. As a result, the false-alarm rate of the final fatigue detection result is too high to meet the demands of an actual product.
Summary of the invention
In view of the deficiencies of the prior art, the invention discloses an information-fusion-based fatigue detection method for fixed-post personnel, comprising the following steps:
Step 1: preprocess the input image;
Step 2: detect the face with a face detection algorithm based on a cascade classifier over HOG features. The Histogram of Oriented Gradients (HOG) descriptor is obtained by computing the distribution of local gradient directions and magnitudes in the image; face detection is then performed with a trained cascade classifier. The HOG extraction pipeline is: 1) convert the image to grayscale; 2) normalize the color space of the input image with gamma correction, which adjusts image contrast, reduces the influence of local shadows and illumination changes, and suppresses noise; 3) compute the gradient (magnitude and direction) of each pixel, mainly to capture contour information while further weakening illumination interference; 4) divide the image into small cells; 5) build the gradient histogram (counts over orientation bins) of each cell to form the cell's descriptor; 6) group several cells into a block, and concatenate the descriptors of all cells in a block to obtain the block's HOG descriptor; 7) concatenate the HOG descriptors of all blocks in the image to obtain the HOG descriptor of the whole image.
Step 3: on the basis of face detection, track the face with the kernelized correlation filter (KCF) tracking algorithm (Henriques J. F., Caseiro R., Martins P., et al. High-Speed Tracking with Kernelized Correlation Filters. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2015, 37(3): 583-596), obtaining the tracked-region image;
Step 4: calibrate and segment the face in the tracked-region image with the ensemble-of-regression-trees algorithm (Kazemi V., Sullivan J. One Millisecond Face Alignment with an Ensemble of Regression Trees. IEEE Conference on Computer Vision and Pattern Recognition, 2014: 1867-1874);
Step 5: detect the eye and mouth feature points on the segmented face with a cascaded-convolutional-neural-network feature point location technique (Zhao X., et al. Fatigue detection based on cascaded convolutional neural networks. Journal of Optoelectronics · Laser, 2017(5): 497-502);
Step 6: compute the degrees of closure from the detected eye and mouth feature points, and judge, against preset thresholds, the eye-closed state and the mouth-yawning state of the face in this frame;
Step 7: on the basis of the face calibration of step 4, estimate the head pose with a method based on ASM (Active Shape Model) local positioning and the facial feature triangle (Van Ginneken B., Frangi A. F., Staal J. J., et al. Active shape model segmentation with optimal features. IEEE Transactions on Medical Imaging, 2002, 21(8): 924-933), so as to judge the nodding state of the head;
Step 8: every x minutes, count the blink frequency, yawn frequency, nodding frequency, and PERCLOS parameter, and apply an information fusion algorithm based on rough set theory to these four parameters (Wang Q., Miao D., Deng S., et al. Research on a rough-set-based monitoring method for automobile driver fatigue. Vehicle and Power Technology, 2011(4): 18-21) to realize the fatigue-state detection.
Step 1 includes:
Step 1-1: scale the 1280*960 input image down to half its original size, i.e. 640*480, to improve processing speed;
Step 1-2: enhance the image with histogram equalization;
Step 1-3: denoise the enhanced image with Gaussian filtering (Spjøtvoll E. A nonlinear Gaussian filter applied to images with discontinuities. Journal of Nonparametric Statistics, 1997, 8(1): 21-43).
Step 1-2 includes:
Step 1-2-1: list the gray levels of the input image and of the reduced image; L is the number of gray levels;
Step 1-2-2: count the number of pixels at each gray level of the input image;
Step 1-2-3: compute the input image histogram P(i) = Ni/N, where P(i) is the gray-level density, Ni is the number of pixels at gray level i, and N is the total number of pixels of the input image;
Step 1-2-4: compute the cumulative histogram P(j) = P(1) + P(2) + P(3) + ... + P(i), where P(i) is the gray-level density and P(j) the cumulative gray-level density;
Step 1-2-5: compute the transformed gray value j with the transform j = int[(L-1)·P(j) + 0.5], where int denotes truncation, so that j is rounded to the nearest integer;
Step 1-2-6: with the gray mapping i → j so determined, rewrite each input gray value f(m, n) = i as g(m, n) = j, where f(m, n) is the input gray function, g(m, n) is the transformed gray function, i is the input gray value, j is the transformed gray value of step 1-2-5, and m, n are the horizontal and vertical coordinates of the input image;
Step 1-2-7: count the number of pixels Nj at each gray level after the transform;
Step 1-2-8: compute the histogram of the transformed image, P(j) = Nj/N, where Nj is the pixel count of gray level j from step 1-2-7.
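The equalization steps 1-2-1 through 1-2-8 can be sketched directly. The snippet below is a minimal pure-Python illustration on a tiny 8-level strip; the actual method operates on 256-level video frames.

```python
# Histogram equalization following steps 1-2-1 .. 1-2-8.
# Pure-Python sketch on a tiny 8-level "image"; a real implementation
# would operate on the 256-level frames described in the patent.

def equalize(image, levels):
    """Map each gray value through the cumulative histogram."""
    n_pixels = len(image)
    # steps 1-2-2 / 1-2-3: per-level counts and density P(i) = Ni/N
    counts = [0] * levels
    for v in image:
        counts[v] += 1
    density = [c / n_pixels for c in counts]
    # step 1-2-4: cumulative histogram P(j)
    cumulative = []
    running = 0.0
    for d in density:
        running += d
        cumulative.append(running)
    # steps 1-2-5 / 1-2-6: j = int[(L-1)*P(j) + 0.5], then remap pixels
    mapping = [int((levels - 1) * c + 0.5) for c in cumulative]
    return [mapping[v] for v in image]

flat = [0, 0, 1, 1, 1, 1, 2, 2]      # low-contrast 8-pixel strip, L = 8
print(equalize(flat, 8))             # [2, 2, 5, 5, 5, 5, 7, 7]
```

Note how the three crowded low gray levels are spread across the full range, which is exactly the contrast enhancement step 1-2 relies on.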
Step 2 includes:
Step 2-1: convert the input image to grayscale;
Step 2-2: normalize the input image with gamma correction (Poynton C. Digital Video and HDTV: Algorithms and Interfaces. Morgan Kaufmann Publishers, 2003), reducing the interference caused by uneven illumination;
Step 2-3: compute the pixel gradients of the input image: convolve the image with the [-1, 0, 1] operator to obtain the horizontal gradient component, and with its transpose [1, 0, -1]^T to obtain the vertical gradient component; at the same time, divide the input image into cells of 8*8 pixels each and build each cell's gradient histogram;
Step 2-4: group 3*3 cells into a block; concatenating the descriptors of all cells in a block gives the block's HOG descriptor;
Step 2-5: concatenate the HOG descriptors of all blocks in the input image to obtain the HOG descriptor of the whole input image;
Step 2-6: train an Adaboost cascade classifier on positive and negative face samples (from the CMU-PIE face database) to generate the cascade classifier over HOG features (Mu C. Research on a face recognition system based on HOG features. University of Electronic Science and Technology of China, 2013); run the trained classifier on the current input frame for face detection, and use the detection as the initial template for face tracking.
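The per-cell histogram of step 2-3 can be sketched as follows. This is a minimal stand-in assuming a 9-bin unsigned-orientation layout (the bin count is not stated in the text) and replicated borders for the [-1, 0, 1] operator.

```python
import math

# Minimal HOG-style gradient histogram for one cell (step 2-3), using the
# [-1, 0, 1] operator horizontally and its transpose vertically. The 9-bin
# unsigned-orientation layout is an illustrative assumption.

def cell_histogram(cell, bins=9):
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    for y in range(h):
        for x in range(w):
            # central differences with replicated borders
            gx = cell[y][min(x + 1, w - 1)] - cell[y][max(x - 1, 0)]
            gy = cell[min(y + 1, h - 1)][x] - cell[max(y - 1, 0)][x]
            magnitude = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned
            hist[int(angle / (180.0 / bins)) % bins] += magnitude
    return hist

# vertical edge: the gradient points along +x, so all energy lands in bin 0
cell = [[0, 0, 10, 10]] * 4
print(cell_histogram(cell)[0])   # 80.0
```

Concatenating such per-cell histograms over 3*3-cell blocks, and then over all blocks, yields the whole-image descriptor of steps 2-4 and 2-5.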
Step 4 includes:
After the tracked-region image is obtained, the ensemble-of-regression-trees algorithm produces an initial estimate of the feature points; a gradient-boosting procedure then reduces the summed squared error between the estimated points and the ground truth, minimizing the error by least squares to obtain the cascaded regression vector of each stage (Kazemi V., Sullivan J. One Millisecond Face Alignment with an Ensemble of Regression Trees. IEEE Conference on Computer Vision and Pattern Recognition, 2014: 1867-1874); finally, an accurate face segmentation is obtained from the detected facial feature points;
The update formula of the cascade is:
S^(t+1) = S^(t) + r_t(I, S^(t))
where S^(t) is the shape (feature point) estimate at stage t, t is the cascade index, and r_t is the stage-t regressor, whose inputs are the input image I and the previous stage's estimate S^(t).
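The cascaded update S^(t+1) = S^(t) + r_t(I, S^(t)) can be seen in miniature with a stand-in regressor. Everything here (the target landmark, the fixed-fraction regressor, the stage count) is hypothetical; a trained regressor would read image intensities, which this toy ignores.

```python
# Toy illustration of the cascaded update S(t+1) = S(t) + r_t(I, S(t)).
# Each stage regressor predicts a fixed fraction of the residual to a
# known target shape; this is a stand-in for the trained gradient-boosted
# trees of Kazemi & Sullivan, not the real regressor.

TARGET = [40.0, 60.0]            # hypothetical "true" landmark position

def stage_regressor(image, shape, step=0.5):
    # a trained r_t would read pixel intensities from `image`;
    # this stand-in only uses the current shape estimate
    return [(t - s) * step for s, t in zip(shape, TARGET)]

def cascade(image, initial, stages=10):
    shape = list(initial)
    for _ in range(stages):
        delta = stage_regressor(image, shape)           # r_t(I, S(t))
        shape = [s + d for s, d in zip(shape, delta)]   # S(t+1) = S(t) + ...
    return shape

result = cascade(image=None, initial=[0.0, 0.0])
print([round(v, 2) for v in result])   # [39.96, 59.94], close to the target
```

Each additive stage halves the residual, so ten stages leave the estimate within 0.1 of the target, which mirrors how the real cascade refines the initial mean shape.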
Step 5 includes: the cascaded convolutional neural network is built from three convolutional layers and two pooling layers, and extracts the 4 key feature points of the eye, i.e. the two eye-corner points and the centers of the upper and lower eyelids (Zhao X., et al. Fatigue detection based on cascaded convolutional neural networks. Journal of Optoelectronics · Laser, 2017(5): 497-502).
Step 6 includes:
Step 6-1: compute the eye closure degree P of the face as
P = d(e1, e2) / d(e3, e4)
where d(·,·) is the Euclidean distance, e1 and e2 are the coordinates of the upper and lower eyelid center points, and e3 and e4 are the coordinates of the left and right corner points of the eye (mouth);
Step 6-2: the eye-closed decision compares P against a preset threshold;
the mouth-yawning decision holds when P > 0.5 persists for a sufficient duration,
where t denotes the duration of the P > 0.5 state.
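The closure ratio and the two state decisions above can be sketched as follows. Only the P > 0.5 yawn condition appears in the text; the 0.15 eye threshold and the frame counts are illustrative assumptions.

```python
import math

# Closure degree P from step 6: vertical lid (or lip) gap over the
# horizontal corner-to-corner distance. The 0.15 eye threshold is an
# assumed value; only the P > 0.5 yawn condition comes from the text.

def closure_degree(e1, e2, e3, e4):
    vertical = math.dist(e1, e2)     # upper-to-lower lid / lip centers
    horizontal = math.dist(e3, e4)   # left-to-right corners
    return vertical / horizontal

def eye_closed(p, threshold=0.15):   # assumed threshold
    return p < threshold

def mouth_yawning(p_history, min_frames):
    # yawn: P > 0.5 sustained for at least min_frames consecutive frames
    run = best = 0
    for p in p_history:
        run = run + 1 if p > 0.5 else 0
        best = max(best, run)
    return best >= min_frames

p_eye = closure_degree((0, 2), (0, 0), (-6, 1), (6, 1))   # gap 2, width 12
print(round(p_eye, 3), eye_closed(p_eye))                 # 0.167 False
print(mouth_yawning([0.6, 0.7, 0.8, 0.4], min_frames=3))  # True
```

Tracking P per frame is also what later feeds the PERCLOS, blink, and yawn counters of step 8.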
Step 7 includes: model the head with the ASM algorithm (Cootes T. F., Taylor C. J. Active Shape Models. Proc. British Machine Vision Conf., 1992: 266-275), setting the eye and mouth coordinate points of the head model; compare the actually detected eye and mouth feature points with the model and analyze the head pose, where the mouth and eye key-point coordinates are obtained from step 5. The pose estimate follows from an affine transform: when the face is frontal, the two eye centers and the mouth center form an isosceles triangle; when the face pose rotates left or right, swings, or pitches, the feature triangle of the face changes correspondingly, from which the deflection angle of the face is inferred. The complete algorithm is given in the paper "Zhao L., Wang Z., Wang X., et al. Train driver head pose estimation based on ASM local positioning and the feature triangle. Journal of the China Railway Society, 2016, 38(9): 52-58."
In step 8, the blink frequency, yawn frequency, nodding frequency, and PERCLOS parameter are counted every 1 minute; the PERCLOS parameter is computed as
PERCLOS = K1 / K2
where PERCLOS is the proportion of eye-closed time per unit time, K1 is the number of eye-closed frames, and K2 is the total number of frames.
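The PERCLOS = K1/K2 computation over a window of per-frame closed/open flags is a one-liner; the 10-frame window below is a hypothetical example, not data from the patent.

```python
# PERCLOS over a time window: fraction of frames in which the eye is
# judged closed (K1 closed frames over K2 total frames).

def perclos(closed_flags):
    k1 = sum(1 for closed in closed_flags if closed)   # eye-closed frames
    k2 = len(closed_flags)                             # total frames
    return k1 / k2

# hypothetical 10-frame window with 3 closed frames
window = [False, True, True, False, False, True, False, False, False, False]
print(perclos(window))   # 0.3
```

In practice the window would hold one minute of frames, with each flag produced by the step-6 eye-closed decision.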
Beneficial effects: the invention discloses an information-fusion fatigue detection method for fixed-post personnel that solves the problems of high false-alarm rate and low robustness in fatigue-degree judgment. First, faces in the input image are calibrated with a regression-tree method, giving high-precision face detection. Next, eye and mouth feature points on the detected face are located precisely with a cascaded-convolutional-neural-network technique; by analyzing these feature points over multiple video frames, the PERCLOS parameter, blink frequency, and yawn frequency are computed separately. The head pose is then estimated with a method based on ASM local positioning and the facial feature triangle, from which the nodding frequency is computed. Finally, an information fusion algorithm based on rough set theory realizes the fatigue detection of fixed-post personnel. The invention has been performance-tested in several scenarios: the fatigue-judgment accuracy reaches 90% or more with an average per-frame running time of 40 ms, fully demonstrating its effectiveness.
Brief description of the drawings
The present invention is further illustrated below with reference to the accompanying drawings and specific embodiments; the above and other advantages of the invention will become more apparent.
Fig. 1 is a schematic flow diagram of the system of the invention.
Fig. 2 is a schematic diagram of head pose estimation with the facial feature triangle method.
Fig. 3 is a schematic diagram of the comprehensive fatigue-state decision.
Specific embodiment
The present invention will be further described with reference to the accompanying drawings and embodiments.
The invention discloses an information-fusion-based fatigue detection method for fixed-post personnel which, as shown in Figure 1, comprises the following steps:
S1: image preprocessing:
Image size is one of the important factors affecting image processing speed. Scaling the input image effectively improves the efficiency and accuracy of face detection in the video; next, the image is enhanced with histogram equalization and denoised with Gaussian filtering, which eases the later extraction of image feature points.
S2: face detection:
Face detection is realized with HOG features plus a cascade classifier. HOG extraction first converts the input image to grayscale and normalizes it with gamma correction, reducing the interference of uneven illumination; the pixel gradients are then computed to capture object contour information. The image is divided into cells and each cell's gradient histogram is built; 3*3 cells form a block, and concatenating the descriptors of all cells in a block gives the block's HOG descriptor; finally, concatenating the HOG descriptors of all blocks in the image gives the HOG descriptor of the entire image.
An Adaboost cascade classifier trained on positive and negative face samples yields the cascade classifier over HOG features; the trained classifier detects the face in the current frame, and the detection serves as the initial template for face tracking.
S3: face tracking is realized with the KCF tracking algorithm. When the face deflects significantly, face detection tends to fail and is comparatively slow; the KCF tracker is faster and more robust. Like most trackers, KCF is in essence a discriminative classifier that separates the target in the image from the background; but whereas most trackers rely on sliding windows and multi-scale sampling, which make the sample set highly redundant, KCF exploits the circulant structure of shifted samples, realizing tracking with the discrete Fourier transform and kernel regression filtering. This reduces the computation and raises the running speed, meeting real-time requirements.
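The circulant-sample idea behind KCF can be sketched in one dimension. The snippet builds every circular shift of a template and scores a new frame against them by naive correlation; the real tracker does this in the Fourier domain with a kernel, which is the source of its speed, so this is only a conceptual stand-in.

```python
# KCF trains on all circular shifts of the target patch; the circulant
# structure turns that training into frequency-domain arithmetic. Below,
# the shifts are enumerated explicitly and scored by naive circular
# correlation -- a conceptual stand-in, not the FFT-based tracker.

def circular_shifts(patch):
    n = len(patch)
    return [patch[i:] + patch[:i] for i in range(n)]   # left-rotations

def best_shift(template, frame):
    scores = [sum(a * b for a, b in zip(shift, frame))
              for shift in circular_shifts(template)]
    return scores.index(max(scores))

template = [0, 1, 3, 1, 0, 0]       # 1-D stand-in for the face patch
frame    = [0, 0, 0, 1, 3, 1]       # same pattern moved right by 2
print(best_shift(template, frame))  # 4 (left-rotation by 4 == right shift by 2)
```

The winning shift index localizes the target in the new frame; KCF evaluates all shifts at once via the DFT instead of this O(n^2) loop.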
S4: the face being tracked is calibrated and finely segmented with the ensemble-of-regression-trees algorithm. Once the tracked-region image is obtained, the algorithm produces an initial estimate of the feature points, then reduces the summed squared error between the estimate and the ground truth by gradient boosting, minimizing the error with least squares to obtain the cascaded regression vector of each stage. Finally, the detected facial feature points yield an accurate face segmentation.
The core formula of the ensemble-of-regression-trees algorithm is:
S^(t+1) = S^(t) + r_t(I, S^(t))
where t is the cascade index and r_t is the stage-t regressor, whose inputs are the image I and the previous stage's shape estimate S^(t).
S5: eye and mouth key feature points are detected with the cascaded-convolutional-neural-network feature point location technique. The network is built from three convolutional layers and two pooling layers and extracts the 4 key feature points of the eye, i.e. the two eye-corner points and the centers of the upper and lower eyelids. Because a deep convolutional structure is used, more abstract eye features are extracted, and their texture information serves to position the key points. The mouth key feature points are detected in the same way as the eye's.
S6: the closure degree is computed from the detected eye and mouth feature points, and the eye-closed state and the mouth-yawning state of the face in the current frame are judged against preset thresholds. The implementation is as follows:
P = d(e1, e2) / d(e3, e4)
where P is the closure degree, d(·,·) is the Euclidean distance, e1 and e2 are the center points of the upper and lower eyelid (lip), and e3 and e4 are the corner points of the eye (mouth).
The eye-closed decision compares P against a preset threshold; the mouth-yawning decision holds when P > 0.5 persists, where t denotes the duration of the P > 0.5 state.
S7: head pose estimation is realized with the facial feature triangle method. The method first models the head (setting the eye and mouth coordinate points of the head model), compares the actually detected eye and mouth feature points with the model, and then analyzes the head pose, where the mouth and eye key-point coordinates are obtained from step S5. The pose estimate then follows from an affine transform: when the face is frontal, as shown in Fig. 2, the two eye centers (points A and B) and the mouth center (point C) form an isosceles triangle; when the face pose rotates left or right, swings, or pitches, the feature triangle of the face changes correspondingly, from which the deflection angle of the face is inferred.
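The feature-triangle symmetry check can be sketched with a single ratio; this is a simplified stand-in for the full affine-transform estimate of the cited paper, and the coordinates are hypothetical.

```python
import math

# Feature-triangle pose cue from S7: with eye centers A, B and mouth
# center C, a frontal face gives |AC| equal to |BC| (isosceles). A yaw
# rotation breaks that symmetry; the signed ratio below is a simple
# stand-in for the full affine-transform estimate.

def yaw_cue(a, b, c):
    left, right = math.dist(a, c), math.dist(b, c)
    return (left - right) / (left + right)   # 0 when frontal, signed when turned

frontal = yaw_cue(a=(-3, 0), b=(3, 0), c=(0, 5))   # symmetric triangle
turned  = yaw_cue(a=(-1, 0), b=(4, 0), c=(1, 5))   # eyes shifted by yaw
print(round(frontal, 3), round(turned, 3))          # 0.0 -0.04
```

A pitch cue can be built the same way from the triangle's height, and thresholding the cue's excursions over time gives the nodding-frequency counter of step 8.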
S8: the fatigue-degree judgment in the invention relies on four technical indicators in total:
(1) the PERCLOS fatigue detection indicator
PERCLOS is the proportion of eye-closed time per unit time. In the waking state a normal blink lasts roughly 150 ms, i.e. about 3-4 frames; in a fatigued state blinks slow down, so measuring the fraction of total time the eyes are closed detects the person's fatigue state. PERCLOS is one of the important indicators for real-time analysis of the degree of fatigue.
(2) the blink-frequency fatigue detection indicator
A fatigued person blinks to relieve eye strain, so the blink frequency rises in the fatigued state; counting the blinks per unit time therefore allows the fatigue state to be judged. In the invention blinks are counted in units of 1 minute and the blink frequency is computed from the count. Compared with the PERCLOS indicator its measurement period is long and its real-time behavior is therefore weaker, yet it remains one of the important indicators for judging fatigue. One blink is an open -> closed -> open sequence of the eye; the Euclidean distance between the eyelid key points determines whether the eye is in the closed state.
(3) the yawn-frequency fatigue detection indicator
Yawning is one of the important behaviors of a fatigued person, and counting the yawns per unit time effectively gauges the degree of fatigue. A yawn is a sustained process, and judging it requires distinguishing it from speaking by mouth opening width and duration: one yawn generally lasts about 3-5 seconds, whereas during normal speech the mouth opens and closes within 1 second; in addition, the mouth opens wider during a yawn than during normal speech. These two conditions yield the yawn frequency and the corresponding fatigue decision.
(4) when the up-and-down motion frequency of the head exceeds a certain threshold, the nodding frequency is judged excessive and the corresponding fatigue-state decision is made.
Finally, the four parameters must be combined into the corresponding fatigue-state decision: an information fusion algorithm based on rough set theory realizes the comprehensive fatigue judgment and raises the fatigue detection precision. The overall flow is shown in Figure 3.
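The decision-level fusion interface can be sketched as follows. The patent derives its decision rules from a rough-set reduction; the weighted vote below is only a hedged stand-in showing how the four binarized indicators combine, and the weights and threshold are assumptions.

```python
# Decision-level fusion of the four fatigue indicators. The patent uses
# a rough-set reduction to derive decision rules; this weighted vote is
# only a stand-in showing the interface, not the rough-set algorithm.

def fatigue_decision(perclos, blink_rate, yawn_rate, nod_rate,
                     weights=(0.4, 0.2, 0.2, 0.2), threshold=0.5):
    # each indicator is pre-binarized to 0/1 against its own threshold
    votes = (perclos, blink_rate, yawn_rate, nod_rate)
    score = sum(w * v for w, v in zip(weights, votes))
    return score >= threshold

# hypothetical minute: PERCLOS and yawn indicators fired, others did not
print(fatigue_decision(1, 0, 1, 0))   # True
```

A rough-set fusion would instead learn which indicator combinations are decisive from labeled data, dropping redundant attributes; the call signature would stay the same.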
The present invention provides an information-fusion-based fatigue detection method for fixed-post personnel; there are many concrete methods and approaches for implementing this technical scheme, and the above is only a preferred embodiment of the invention. It should be noted that those of ordinary skill in the art may make various improvements and modifications without departing from the principle of the invention, and such improvements and modifications should also be regarded as falling within the protection scope of the invention. Each component not made explicit in this embodiment may be realized with the available prior art.

Claims (7)

1. An information-fusion-based fatigue detection method for fixed-post personnel, characterized by comprising the following steps:
step 1, preprocessing the input image;
step 2, detecting the face with a face detection algorithm based on a cascade classifier over HOG features;
step 3, on the basis of face detection, tracking the face with the kernelized correlation filter (KCF) tracking algorithm to obtain the tracked-region image;
step 4, calibrating and segmenting the face in the tracked-region image with the ensemble-of-regression-trees algorithm;
step 5, detecting the eye and mouth feature points on the segmented face with a cascaded-convolutional-neural-network feature point location technique;
step 6, computing the closure degrees from the detected eye and mouth feature points, and judging, against preset thresholds, the eye-closed state and the mouth-yawning state of the face in this frame;
step 7, on the basis of the face calibration of step 4, estimating the head pose with a method based on ASM local positioning and the facial feature triangle, so as to judge the nodding state of the head;
step 8, every x minutes, counting the blink frequency, yawn frequency, nodding frequency, and PERCLOS parameter, and applying an information fusion algorithm based on rough set theory to these four parameters to realize the fatigue-state detection;
wherein step 1 comprises:
step 1-1, scaling the 1280*960 input image down to one half of its original size, i.e. 640*480;
step 1-2, enhancing the image with histogram equalization;
step 1-3, denoising the enhanced image with Gaussian filtering;
and step 1-2 comprises:
step 1-2-1, listing the gray levels of the input image and of the reduced image, L being the number of gray levels;
step 1-2-2, counting the number of pixels at each gray level of the input image;
step 1-2-3, computing the input image histogram P(i) = Ni/N, where P(i) is the gray-level density, Ni is the number of pixels at gray level i, and N is the total number of pixels of the input image;
step 1-2-4, computing the cumulative histogram P(j) = P(1) + P(2) + P(3) + ... + P(i), where P(i) is the gray-level density and P(j) the cumulative gray-level density;
step 1-2-5, computing the transformed gray value j with the transform j = int[(L-1)·P(j) + 0.5], where int denotes truncation, so that j is rounded to the nearest integer;
step 1-2-6, with the gray mapping i → j so determined, rewriting each input gray value f(m, n) = i as g(m, n) = j, where i is the input gray value, j is the transformed gray value of step 1-2-5, m and n are the horizontal and vertical coordinates of the input image, f(m, n) is the input gray function, and g(m, n) is the transformed gray function;
step 1-2-7, counting the number of pixels Nj at each gray level after the transform;
step 1-2-8, computing the histogram of the transformed image, P(j) = Nj/N, where Nj is the pixel count of gray level j from step 1-2-7.
2. The method according to claim 1, wherein step 2 includes:
Step 2-1, convert the input image to grayscale;
Step 2-2, normalize the input image using Gamma correction;
Step 2-3, compute the pixel gradients of the input image: convolve the input image with the [-1, 0, 1] gradient operator to obtain the horizontal gradient component, then convolve it with the [1, 0, -1]^T gradient operator to obtain the vertical gradient component; meanwhile divide the input image into cells of 8*8 pixels each, and compute the gradient histogram of each cell;
Step 2-4, group every 3*3 cells into a block, and concatenate the feature descriptors of all cells in each block to obtain the HOG feature descriptor of that block;
Step 2-5, concatenate the HOG feature descriptors of all blocks in the input image to obtain the HOG feature descriptor of the whole input image;
Step 2-6, train an Adaboost cascade classifier on the HOG features of positive and negative face samples, use the trained classifier to detect faces in the current input frame, and take the detection result as the initial template for face tracking.
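Steps 2-3 through 2-5 describe a standard HOG computation. A simplified NumPy sketch of the per-cell gradient histograms and the whole-image concatenation; the 9-bin unsigned-orientation histogram is a common convention (the claim does not fix the bin count), and the per-block normalization of step 2-4 is omitted for brevity:

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Gradient histograms per 8x8 cell (step 2-3), concatenated into one
    long vector for the whole image (step 2-5).  Unsigned gradients,
    no block normalization -- a simplified sketch of the claimed pipeline."""
    img = img.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]    # [-1, 0, 1] horizontal operator
    gy[1:-1, :] = img[:-2, :] - img[2:, :]    # [1, 0, -1]^T vertical operator
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = img.shape
    feats = []
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            a = ang[r:r+cell, c:c+cell].ravel()
            m = mag[r:r+cell, c:c+cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

# 64x48 image -> 8x6 cells -> 48 cells x 9 bins = 432-dimensional descriptor
desc = hog_descriptor(np.random.default_rng(0).integers(0, 255, (64, 48)))
```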
3. The method according to claim 2, wherein step 4 includes: after the tracking-region image is obtained, the regression-tree ensemble algorithm generates an initial estimate of the feature points; a gradient-boosting procedure then reduces the sum of squared errors between the estimated feature points and the true positions, minimizing the error by least squares to obtain the regression vector of each cascade level; finally, an accurate face segmentation is obtained from the detected facial feature points;
Wherein, the formula of the regression-tree ensemble algorithm is as follows:
Ŝ^(t+1) = Ŝ^(t) + r_t(I, Ŝ^(t))
where Ŝ^(t) is the feature-point estimate of the current-stage regressor, t denotes the cascade index, and r_t denotes the current-stage regressor, whose input parameters are the input image I and the feature-point estimate Ŝ^(t) output by the previous stage.
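The cascade update above can be sketched directly; the toy stage regressors below stand in for the trained regression-tree ensembles (the real r_t predicts a shape increment from pixel intensities indexed relative to the current shape):

```python
import numpy as np

def run_cascade(image, s0, regressors):
    """Cascaded shape regression: each stage refines the current estimate,
    S^(t+1) = S^(t) + r_t(I, S^(t)).  `regressors` is a list of callables
    standing in for the trained regression-tree ensembles."""
    s = np.asarray(s0, dtype=float)
    for r_t in regressors:
        s = s + r_t(image, s)   # additive update of the shape estimate
    return s

# Toy stages: each moves the estimate halfway toward a fixed target shape,
# imitating what trained trees would predict from pixel features.
target = np.array([[10.0, 20.0], [30.0, 40.0]])
stages = [lambda img, s: 0.5 * (target - s)] * 4
final = run_cascade(image=None, s0=np.zeros((2, 2)), regressors=stages)
```

After four halving stages the estimate has covered 1 - 0.5^4 = 93.75% of the distance to the target, illustrating how the cascade converges stage by stage.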
4. The method according to claim 3, wherein step 5 includes: the cascaded convolutional neural network is built from three convolutional layers and two pooling layers, and is used to extract 4 key feature points of the eye, namely the two eye-corner points and the center points of the upper and lower eyelids.
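The claim fixes only the layer counts (three convolutional, two pooling) and the 4-point output; kernel sizes, strides and the input resolution are not stated. A small sketch that propagates feature-map sizes through one such hypothetical stack (3x3 convolutions, 2x2 stride-2 pooling) down to the 8 regressed coordinates:

```python
def conv_out(size, k, stride=1, pad=0):
    """Standard output-size formula for a convolution/pooling window."""
    return (size + 2 * pad - k) // stride + 1

def eye_net_output_shape(h, w):
    """Feature-map sizes through the claimed 3-conv / 2-pool stack.
    Kernel sizes and strides here are illustrative assumptions; the
    patent fixes only the layer counts and the 4-key-point output."""
    h, w = conv_out(h, 3), conv_out(w, 3)          # conv1, 3x3
    h, w = conv_out(h, 2, 2), conv_out(w, 2, 2)    # pool1, 2x2 stride 2
    h, w = conv_out(h, 3), conv_out(w, 3)          # conv2, 3x3
    h, w = conv_out(h, 2, 2), conv_out(w, 2, 2)    # pool2, 2x2 stride 2
    h, w = conv_out(h, 3), conv_out(w, 3)          # conv3, 3x3
    return h, w, 8                                 # 4 key points x (x, y)
```

For a hypothetical 24*24 eye patch this stack leaves a 2*2 spatial map feeding the 8-value regression head.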
5. The method according to claim 4, wherein step 6 includes:
Step 6-1, compute the eye-closure degree P of the face using the following formula:
where e1, e2 respectively denote the coordinates of the upper-eyelid center point and the lower-eyelid center point, and e3, e4 respectively denote the left and right corner coordinates of a single eye;
Step 6-2, the eye-closed-state decision condition is as follows:
The mouth-closed-state decision condition is as follows:
where t denotes the duration of the state P > 0.5.
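The P formula and the two decision conditions are equation images in the original patent and are not reproduced in this text. A hedged sketch under the common assumption that P is the ratio of the eyelid gap to the eye width, plus the duration test over the P > 0.5 condition named in the claim; the threshold, its direction, and both function bodies are assumptions, not the patent's formulas:

```python
import math

def closure_degree(e1, e2, e3, e4):
    """Assumed form of the closure degree P: vertical eyelid gap over
    horizontal eye width (the patent's exact formula is not in the text).
    e1/e2: upper/lower eyelid centers; e3/e4: left/right eye corners."""
    return math.dist(e1, e2) / math.dist(e3, e4)

def state_duration(p_series, fps, thresh=0.5):
    """Longest run (in seconds) with P beyond the threshold, matching the
    claim's duration t of the P > 0.5 condition; thresh is illustrative."""
    best = run = 0
    for p in p_series:
        run = run + 1 if p > thresh else 0
        best = max(best, run)
    return best / fps

p = closure_degree((0.0, 1.0), (0.0, 0.0), (-1.0, 0.0), (1.0, 0.0))  # gap 1, width 2
```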
6. The method according to claim 5, wherein step 7 includes: model the head using the ASM algorithm, set the eye and mouth coordinate points of the head model, compare the actually detected eye and mouth feature points with the model, and analyze the head posture, wherein the mouth and eye key-feature-point coordinates are obtained from step 5 and the face pose is estimated by an affine transformation: when the face is frontal, the two eye centers and the mouth center form an isosceles triangle; when the face rotates left or right, swings, or pitches, this feature triangle changes correspondingly, from which the deflection angle of the face is deduced.
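The feature-triangle argument can be illustrated numerically. The cues below (left/right asymmetry for yaw, height/width compression for pitch) are illustrative stand-ins for the affine-transformation pose estimation the claim describes; thresholds and the full affine model are not reproduced:

```python
import math

def triangle_pose_cues(left_eye, right_eye, mouth):
    """Cues from the eye-eye-mouth feature triangle: a frontal face gives
    an isosceles triangle, so left/right asymmetry suggests rotation (yaw)
    and a compressed height/width ratio suggests pitch."""
    mid = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)
    d_left = math.dist(left_eye, mouth)
    d_right = math.dist(right_eye, mouth)
    yaw_cue = (d_right - d_left) / max(d_left, d_right)  # ~0 when frontal
    height = math.dist(mid, mouth)
    width = math.dist(left_eye, right_eye)
    pitch_cue = height / width   # shrinks as the head pitches down
    return yaw_cue, pitch_cue

# Frontal, symmetric face: equal eye-mouth distances, height == eye spacing
yaw, pitch = triangle_pose_cues((-1.0, 0.0), (1.0, 0.0), (0.0, -2.0))
```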
7. The method according to claim 6, wherein in step 8, the blink frequency, yawn frequency, nod frequency and PERCLOS parameter are counted every 1 minute, and the PERCLOS parameter is computed using the following formula:
PERCLOS = K1/K2
where the PERCLOS parameter denotes the proportion of closed-eye time within the unit time, K1 denotes the number of closed-eye frames, and K2 denotes the total number of frames.
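PERCLOS as defined here is a per-window frame ratio, which makes the computation a one-liner over the window's per-frame eye states; a minimal sketch:

```python
def perclos(eye_closed_flags):
    """PERCLOS over a statistics window: K1 (closed-eye frames) divided by
    K2 (total frames).  `eye_closed_flags` holds one boolean per frame."""
    k2 = len(eye_closed_flags)
    k1 = sum(bool(f) for f in eye_closed_flags)
    return k1 / k2 if k2 else 0.0

# e.g. a 1-minute window at 25 fps would supply K2 = 1500 flags
ratio = perclos([True, False, False, True])
```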
CN201810823443.9A 2018-07-25 2018-07-25 A kind of fixation post personnel fatigue detection method based on information fusion Active CN109308445B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810823443.9A CN109308445B (en) 2018-07-25 2018-07-25 A kind of fixation post personnel fatigue detection method based on information fusion


Publications (2)

Publication Number Publication Date
CN109308445A CN109308445A (en) 2019-02-05
CN109308445B true CN109308445B (en) 2019-06-25

Family

ID=65225991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810823443.9A Active CN109308445B (en) 2018-07-25 2018-07-25 A kind of fixation post personnel fatigue detection method based on information fusion

Country Status (1)

Country Link
CN (1) CN109308445B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784313A (en) * 2019-02-18 2019-05-21 上海骏聿数码科技有限公司 A kind of blink detection method and device
CN109919049A (en) * 2019-02-21 2019-06-21 北京以萨技术股份有限公司 Fatigue detection method based on deep learning human face modeling
CN109977820A (en) * 2019-03-14 2019-07-05 重庆邮电大学 A kind of fatigue driving determination method
CN110148092B (en) * 2019-04-16 2022-12-13 无锡海鸿信息技术有限公司 Method for analyzing sitting posture and emotional state of teenager based on machine vision
CN110097012B (en) * 2019-05-06 2022-11-08 苏州国科视清医疗科技有限公司 Fatigue detection method for monitoring eye movement parameters based on N-range image processing algorithm
CN110210382A (en) * 2019-05-30 2019-09-06 上海工程技术大学 A kind of face method for detecting fatigue driving and device based on space-time characteristic identification
CN110197169B (en) * 2019-06-05 2022-08-26 南京邮电大学 Non-contact learning state monitoring system and learning state detection method
CN110807351A (en) * 2019-08-28 2020-02-18 杭州勒格网络科技有限公司 Intelligent vehicle-mounted fatigue detection system, method and device based on face recognition
CN110728241A (en) * 2019-10-14 2020-01-24 湖南大学 Driver fatigue detection method based on deep learning multi-feature fusion
CN111507244B (en) * 2020-04-15 2023-12-08 阳光保险集团股份有限公司 BMI detection method and device and electronic equipment
CN111626628A (en) * 2020-06-01 2020-09-04 梅和珍 Network teaching system for extraclass tutoring
CN112052775A (en) * 2020-08-31 2020-12-08 同济大学 Fatigue driving detection method based on gradient histogram video recognition technology
CN112767359B (en) * 2021-01-21 2023-10-24 中南大学 Method and system for detecting corner points of steel plate under complex background
CN116362933B (en) * 2023-05-30 2023-09-26 南京农业大学 Intelligent campus management method and system based on big data
CN117351470B (en) * 2023-12-04 2024-03-19 西北工业大学 Operator fatigue detection method based on space-time characteristics

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101090482A (en) * 2006-06-13 2007-12-19 唐琎 Driver fatigue monitoring system and method based on image process and information mixing technology
CN104013414A (en) * 2014-04-30 2014-09-03 南京车锐信息科技有限公司 Driver fatigue detecting system based on smart mobile phone
US9198575B1 (en) * 2011-02-15 2015-12-01 Guardvant, Inc. System and method for determining a level of operator fatigue
CN105769120A (en) * 2016-01-27 2016-07-20 深圳地平线机器人科技有限公司 Fatigue driving detection method and device
CN106372621A (en) * 2016-09-30 2017-02-01 防城港市港口区高创信息技术有限公司 Face recognition-based fatigue driving detection method
CN107194346A (en) * 2017-05-19 2017-09-22 福建师范大学 A kind of fatigue drive of car Forecasting Methodology




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant