CN117935231A - Non-inductive fatigue driving monitoring and intervention method - Google Patents

Non-inductive fatigue driving monitoring and intervention method

Info

Publication number
CN117935231A
Authority
CN
China
Prior art keywords: fatigue, point, monitoring, driving, points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410320081.7A
Other languages
Chinese (zh)
Other versions
CN117935231B (en)
Inventor
阮系真
张博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Zhenxi Biotechnology Co ltd
Original Assignee
Hangzhou Zhenxi Biotechnology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Zhenxi Biotechnology Co ltd filed Critical Hangzhou Zhenxi Biotechnology Co ltd
Priority to CN202410320081.7A priority Critical patent/CN117935231B/en
Publication of CN117935231A publication Critical patent/CN117935231A/en
Application granted granted Critical
Publication of CN117935231B publication Critical patent/CN117935231B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B31/00 Predictive alarm systems characterised by extrapolation or other computation using updated historic data
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/18 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state for vehicle drivers or machine operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/06 Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Psychology (AREA)
  • Biomedical Technology (AREA)
  • Emergency Management (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Social Psychology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Business, Economics & Management (AREA)
  • Ophthalmology & Optometry (AREA)
  • Computing Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a non-inductive fatigue driving monitoring, early warning and intervention method. The changes of the inter-point distances of dimension-reduced fatigue monitoring feature points within adapted rectangular frames in designated areas of a planar space template are monitored, and the fatigue degree at each monitoring time point is calculated from those changes, realizing continuous monitoring of whether fatigue driving occurs. The distance-change features fuse the correlation and difference among the fatigue features that represent the fatigue state under different driving environments and driving states, improving the accuracy of the fatigue driving judgment. The fatigue threshold is dynamically updated from person to person, reducing the possibility of misjudgment and improving the pertinence of the algorithm. By arranging, in each designated area with a clear relative spatial position relationship, a plurality of fixed-size rectangular frames with unique codes, the positioning positions of the fatigue monitoring feature points in the planar space template can be calculated quickly, the fatigue state features can be amplified, and the calculation speed of fatigue monitoring is improved.

Description

Non-inductive fatigue driving monitoring and intervention method
Technical Field
The invention relates to the technical field of data processing, in particular to a non-inductive fatigue driving monitoring and intervention method.
Background
At present, there are two main monitoring principles for fatigue driving: the first monitors the frequency of steering operations, and the second monitors steering acceleration. The basic principle of identifying fatigue driving from the steering-operation frequency is: when the steering-operation frequency drops, the machine judges that the vehicle is suspected of entering a fatigue driving state. However, the steering-operation frequency naturally drops when driving on a highway, so the method is unsuitable for fatigue monitoring in highway driving; and when applied to fatigue driving monitoring on urban roads, its accuracy is low because it relies on a single monitoring index.
The principle of identifying fatigue driving from steering acceleration is: when a slight but abrupt steering action is detected, the machine judges that fatigue driving has occurred. The monitoring accuracy of this method is clearly better than that of the first, but it has the following technical problems:
1. Monitoring comes too late. Fatigue driving is detected only at the moment a dangerous action caused by it occurs; there is no continuous monitoring of whether fatigue driving is developing, and a slight but abrupt steering action can itself cause loss of control of the vehicle.
2. A slight but abrupt steering action may also be a sudden maneuver to avoid an unexpected obstacle, so determining fatigue driving from this monitoring approach alone is not highly accurate.
3. Fatigue driving monitoring algorithms currently on the market are universal algorithms intended to suit all drivers; but different people's fatigue states differ, so this universality makes the monitoring of individual fatigue states inaccurate.
Disclosure of Invention
The invention aims to improve the safety, monitoring accuracy and pertinence of fatigue driving monitoring, and provides a non-inductive fatigue driving monitoring, early warning and intervention method.
To achieve the purpose, the invention adopts the following technical scheme:
The method for monitoring, early warning and intervening the non-inductive fatigue driving comprises the following steps:
S1, monitoring in real time the inter-point distance changes of the fatigue monitoring feature points in the dimension-reduced virtual space, to calculate the fatigue degree at each monitoring time point;
S2, judging whether the fatigue degree is greater than the dynamically updated fatigue threshold corresponding to the current time point,
If yes, intervening in fatigue driving;
if not, generating prompt information to prompt the driver to confirm the fatigue state, and turning to step S3 after the driver confirms;
and S3, recording confirmation information and updating the fatigue threshold.
Preferably, in step S1, the method for calculating the fatigue degree includes the steps of:
S11, acquiring a corresponding fatigue monitoring feature set according to the current driving environment and driving state;
S12, extracting each fatigue monitoring feature point recorded in the fatigue monitoring feature set from each frame of face image acquired in real time, and dimension-reducing and mapping it into the adapted rectangular frame of the corresponding designated area in a planar space template, the designated areas of different types being arranged in a triangular layout;
S13, acquiring the distance-change history feature time sequence D = {d_1, d_2, …, d_n} associated with the current monitoring time point t_n, wherein d_i denotes the distance variation of the i-th element of the sequence D, 1 < i < n, and the distance variation of each element is the average of the pairwise distance variations between fatigue monitoring feature points in different designated areas;
S14, extracting the front m elements of the distance-change history feature time sequence D to form a first array and the remaining elements to form a second array, then calculating the average distance variation of the elements in the first array and the second array respectively, denoted μ1 and μ2, and taking the absolute value of the difference |μ1 - μ2| as the fatigue degree at the time point t_n.
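For illustration only, the computation of steps S13 and S14 can be sketched in a few lines of Python; the function name, the split point m and the sample values below are assumptions made for the example, not taken from the patent:

```python
# A minimal sketch of S13-S14: the fatigue degree from the distance-change
# sequence D. The split point m and the sample values are assumptions.
from statistics import mean

def fatigue_degree(d, m):
    """|mean(first m elements) - mean(remaining elements)| at time t_n."""
    if not 0 < m < len(d):
        raise ValueError("m must split D into two non-empty arrays")
    return abs(mean(d[:m]) - mean(d[m:]))

# A stable window followed by growing distance variations (an emerging
# fatigue trend) yields a larger fatigue degree.
d_seq = [0.10, 0.11, 0.09, 0.10, 0.24, 0.27, 0.31, 0.29]
print(fatigue_degree(d_seq, m=4))  # ~0.18
```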
Preferably, in step S11, when the driving environment is daytime and/or the driving state is that the driving speed is lower than a preset speed threshold, the fatigue monitoring feature set correspondingly acquired includes a pupil key point and a face key point,
When the driving environment is at night and/or the driving state is that the driving speed is greater than or equal to the preset speed threshold, the fatigue monitoring feature set correspondingly obtained comprises the face key points,
The facial key points include eyelid key points, cheekbone key points, and chin key points.
Preferably, the planar space template in step S12 includes at least 2 designated areas, each designated area includes a plurality of rectangular frames of different sizes with unique numbers, and fatigue monitoring feature points of different types are dimension-reduced and mapped into the adapted rectangular frames in the corresponding designated areas; the specific method includes the steps of:
S121, intercepting an area image containing fatigue monitoring feature points of the same type; after the area image is accommodated in each rectangular frame of the designated area corresponding to that type, calculating the area intersection-over-union of the area image and the rectangular frame, and taking the rectangular frame with the largest intersection-over-union as the adaptation object;
S122, calculating three-dimensional coordinates of the fatigue monitoring feature points and performing two-dimensional dimension reduction;
And S123, after aligning the dimension-reduced two-dimensional coordinate points with the reference points in the adaptive object, realizing dimension-reducing mapping of the fatigue monitoring feature points and the adaptive spatial relationship of the rectangular frame.
Preferably, in step S122, the method for calculating the three-dimensional coordinates of the fatigue monitoring feature point includes the steps of:
S1221, acquiring face images at different visual angles at the same time, extracting an earlobe key point and a chin key point in the face images acquired at each visual angle, taking the chin key point closest to the ground as a first reference point calculated by three-dimensional coordinates, and taking the earlobe key point farthest from the ground in a second visual angle or a third visual angle as a second reference point calculated by three-dimensional coordinates;
S1222, calculating a first virtual point and/or a second virtual point corresponding to the first reference point and/or the second reference point under the second view angle or the third view angle, respectively, for the face image in which the first reference point and/or the second reference point are not detected;
S1223, setting the z-axis coordinate of the first reference point in the three-dimensional coordinate system as a value of 0, taking the vertical distance between the second reference point and the first reference point as the coordinate value of the second reference point on the z-axis, and then solving the three-dimensional coordinate of the fatigue monitoring feature point according to the spatial distance relation between each fatigue monitoring feature point and the first reference point or the first virtual point and the spatial distance relation between each fatigue monitoring feature point and the second reference point or the second virtual point.
Preferably, in step S1222, the method for calculating the first virtual point location includes the steps of:
A1, acquiring the rotation angle of the head in the current frame relative to the non-rotated initial state;
A2, acquiring a positioning position of another chin key point which is not used as the first reference point in a face image acquired under the current visual angle;
A3, calculating the first virtual point of the first reference point in the face image under the current visual angle according to the positioning position of the other chin key point, the rotation angle obtained in the step A1 and the spatial position relation between the other chin key point and the first reference point.
Preferably, in step S1223, the x-axis coordinate value of the fatigue monitoring feature point in the three-dimensional space is: the horizontal distance between the fatigue monitoring feature point and the second reference point or the second virtual point under the first visual angle;
The y-axis coordinate values are: the fatigue monitoring feature point is horizontally distant from the second reference point or the second virtual point under a second view angle or a third view angle;
The z-axis coordinate values are: and the vertical distance between the fatigue monitoring characteristic point and the first reference point or the first virtual point under any view angle.
Preferably, in step S122, the method for two-dimensional dimension reduction of the fatigue monitoring feature point is as follows:
In the three-dimensional coordinates of each fatigue monitoring feature point in the same face image, the number of the corresponding shaft types and the same shaft coordinate values is at least 1;
In step S123, the method for aligning the two-dimensional coordinate point and the reference point is as follows:
Extracting any one of the fatigue monitoring feature points of the same type as an alignment object;
Matching the reference points corresponding to the alignment objects in the rectangular frame;
After the alignment object is aligned with the matched reference point, the area image is accommodated in the rectangular frame;
And calculating the accommodating coordinates of the accommodated fatigue monitoring feature points.
Preferably, the types of the fatigue monitoring feature points comprise any one or more of eyelid types, pupil types and mouth types, wherein the fatigue monitoring feature points of eyelid types comprise upper eyelid key points and/or lower eyelid key points;
The fatigue monitoring feature points of the pupil type comprise any one or more of an upper pupil key point, a lower pupil key point, a left pupil key point and a right pupil key point;
the fatigue monitoring feature points of the mouth type are chin key points;
When the dynamically updated fatigue threshold is judged to be greater than a first threshold or less than a second threshold, the machine resets the fatigue threshold to its initial value; alternatively, with vehicle ignition-off as the instruction, the machine resets the fatigue threshold to the initial value; the first threshold is greater than the second threshold.
Preferably, when the driving environment is night and/or the driving state is a driving speed greater than or equal to the preset speed threshold, d_n is calculated by the following steps:
B1, designating the upper eyelid key point and the chin key point as the basis for calculating d_n;
B2, judging whether the fatigue trend of the upper eyelid key point is the same as that of the chin key point,
if yes, calculating the coordinate symmetric point obtained by moving the upper eyelid key point against its fatigue trend, so as to amplify the fatigue feature, and then going to step B3;
if not, not calculating the coordinate symmetric point;
B3, letting L_{n-1} and L_n denote the distances at times t_{n-1} and t_n, respectively, between the chin key point and the coordinate symmetric point (or, where none was calculated, the upper eyelid key point), and calculating the absolute value of the difference |L_n - L_{n-1}| as d_n.
The invention has the following beneficial effects:
1. By monitoring the inter-point distance changes of the dimension-reduced fatigue monitoring feature points within the adapted rectangular frames in the designated areas of the planar space template, the fatigue degree at each monitoring time point is calculated, realizing continuous monitoring of whether fatigue driving occurs; and because the distance-change features fuse the correlation and difference among the fatigue features that represent the fatigue state under different driving environments and driving states, the judgment of whether fatigue driving occurs is more accurate.
2. The fatigue threshold used to judge whether fatigue driving occurs varies from person to person. During driving, the machine continuously updates the fatigue threshold according to the driver's confirmations of the fatigue state, so the inter-point distance-change feature calculated at each monitoring time point is compared against a more reasonable fatigue threshold, and the fatigue driving monitoring algorithm of the invention is better targeted to different drivers, different driving environments and different driving time points.
3. When calculating the fatigue degree, the planar space template contains a plurality of designated areas corresponding to the types of fatigue monitoring feature points, and each designated area contains a plurality of fixed-size rectangular frames with unique codes, so rectangular frames of different sizes can accommodate face images of different sizes, guaranteeing that the fatigue driving monitoring algorithm is targeted to different drivers. In addition, the coordinates of the reference points in the rectangular frames are determined in advance; after a fatigue monitoring feature point is reduced to two dimensions and aligned with the reference point in the corresponding rectangular frame, the reference-point coordinates can be obtained quickly from the frame's unique code, and the positioning positions of the fatigue monitoring feature points in the planar space template can then be calculated quickly from the relative coordinate relationships between the feature points within the frame and between the feature points and the reference point. This greatly improves the speed of the fatigue degree calculation and satisfies the high real-time monitoring requirement of the non-inductive fatigue driving application scenario.
4. A simple and efficient dimension-reduction algorithm is provided: the three-dimensional coordinates of the fatigue monitoring feature points detected in real time are calculated rapidly and reduced to a two-dimensional space; the designated fatigue monitoring feature point is aligned with the reference point in the corresponding rectangular frame; and the positioning positions of the fatigue monitoring feature points in the planar space template are calculated rapidly from the relative coordinate relationships of the feature points in the two-dimensional space and the pre-determined coordinate positions of the reference points in the planar space template. The fatigue monitoring feature points detected in real time in three-dimensional space are thereby mapped into the corresponding rectangular frames of the planar space template, making the calculation of real-time fatigue degree possible.
5. When fatigue monitoring feature points in different designated areas show the same fatigue trend, the fatigue features are amplified by calculating the coordinate symmetric points of the designated feature points moved against the fatigue trend, so richer fatigue features can be captured and the accuracy of judging whether fatigue driving occurs is improved.
6. The method relies only on eyelid key points, cheekbone key points, chin key points and pupil key points, performs coordinate dimension-reduction mapping of these points between the physical three-dimensional space and the planar space template, and accurately judges whether different drivers are fatigue driving in different driving environments or driving states through the calculation of the fatigue degree and the dynamic update of the fatigue threshold. The algorithm thus monitors few points and computes the fatigue degree simply, meeting the high real-time response requirement of the non-inductive fatigue driving monitoring and early warning scenario.
7. Because each element of the distance-change sequence is the average of pairwise distance variations across designated areas, the fatigue state of no single type of fatigue monitoring feature point is used as the final judgment, making the judgment of whether fatigue occurs more accurate.
Drawings
In order to more clearly illustrate the technical solution of the embodiments of the present invention, the drawings that are required to be used in the embodiments of the present invention will be briefly described below. It is evident that the drawings described below are only some embodiments of the present invention and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a step diagram of an implementation of a method for monitoring and early warning intervention of non-inductive fatigue driving according to an embodiment of the present invention;
FIG. 2 is an exemplary diagram of the planar space template and the calculation of the fatigue degree;
FIG. 3 is an exemplary diagram of a face image acquired at a first perspective;
Fig. 4 is an exemplary view of face images simultaneously acquired at a second viewing angle for the face shown in fig. 3;
Fig. 5 is a schematic diagram of calculating a first virtual point corresponding to a first reference point by another chin key point.
Detailed Description
The technical scheme of the invention is further described below by the specific embodiments with reference to the accompanying drawings.
The drawings are for illustrative purposes only; they are schematic rather than physical and are not intended to limit the invention. For better illustration of the embodiments, certain elements of the drawings may be omitted, enlarged or reduced, and do not represent the size of the actual product; it will be appreciated by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numbers in the drawings of the embodiments of the invention correspond to the same or similar components. In the description of the invention, terms such as "upper", "lower", "left", "right", "inner" and "outer" indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplicity of description and do not indicate or imply that the apparatus or elements referred to must have a specific orientation or be constructed and operated in a specific orientation. Such terms are therefore merely exemplary and are not to be construed as limiting the invention; their specific meanings can be understood by those of ordinary skill in the art according to the specific circumstances.
In the description of the present invention, unless explicitly stated and limited otherwise, the term "coupled" and the like should be interpreted broadly: it may denote a fixed connection, a detachable connection, or an integral formation; a mechanical or electrical connection; a direct connection or an indirect connection through an intermediate medium; or communication or interaction between two components. The specific meanings of these terms in the present invention can be understood by those of ordinary skill in the art in specific cases.
The method for monitoring, early warning and intervening the non-inductive fatigue driving provided by the embodiment of the invention, as shown in fig. 1, comprises the following steps:
S1, monitoring in real time the inter-point distance changes of the fatigue monitoring feature points in the dimension-reduced virtual space, to calculate the fatigue degree at each monitoring time point;
S2, judging whether the fatigue degree is greater than the dynamically updated fatigue threshold corresponding to the current time point,
If yes, intervening in fatigue driving;
If not, generating prompt information to prompt the driver to confirm the fatigue state;
And S3, recording confirmation information and updating the fatigue threshold.
In step S1, the method for calculating the fatigue degree specifically includes the following steps:
S11, acquiring the corresponding fatigue monitoring feature set according to the current driving environment and driving state. The driving environment comprises daytime and night; the driving state is divided by a speed threshold: the vehicle state with a driving speed lower than a preset speed threshold is defined as the first driving state, and the state with a driving speed greater than or equal to the speed threshold as the second driving state. During fatigue driving, and also at night, human pupils dilate to varying degrees, so pupil dilation can indicate whether fatigue driving occurs; but at night the pupil dilates naturally because light is insufficient, so pupil dilation is used as a fatigue monitoring feature only when the driving environment is daytime. Similarly, at excessive driving speeds the pupil usually dilates naturally, whereas below a preset speed a non-fatigued pupil does not dilate; therefore, when the driving speed is below the preset value, a dilated pupil or a dilating trend can serve as an index for judging whether fatigue driving occurs. Accordingly, in the invention, when the driving environment is daytime and/or the driving state is the first driving state, the correspondingly acquired fatigue monitoring feature set includes the pupil key points. However, the contraction feature of pupil key points is difficult to capture, and judging fatigue driving from pupil key point features alone gives unsatisfactory accuracy, so the correspondingly acquired fatigue monitoring feature set also includes the facial key points, which comprise eyelid key points, cheekbone key points and chin key points. The more fatigue monitoring feature points there are, the larger the data volume the algorithm must compute and the more complex the algorithm becomes; one technical advantage of this method is therefore that accurate judgment of whether fatigue driving occurs in daytime or in the first driving state relies only on the eyelid key points, cheekbone key points, chin key points and pupil key points.
In addition, when the driving environment is night and/or the driving state is a driving speed greater than or equal to the preset speed threshold, the pupil's contraction feature can hardly reflect the true fatigue state; in that case the correspondingly acquired fatigue monitoring feature set includes the facial key points but not the pupil key points. To further ensure the accuracy of the fatigue driving judgment, prior-art methods such as steering-acceleration monitoring may be combined.
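A hedged sketch of this feature-set selection follows. The set contents mirror the description above, while the threshold value and all identifiers are illustrative assumptions; the "and/or" conditions are resolved so that pupil key points are used only when both daytime and low speed hold, matching the night/high-speed exclusion:

```python
# A sketch of step S11's selection; names and threshold are assumptions.
FACE_KEYPOINTS = {"upper_eyelid", "lower_eyelid", "cheekbone", "chin"}
PUPIL_KEYPOINTS = {"pupil_upper", "pupil_lower", "pupil_left", "pupil_right"}
SPEED_THRESHOLD_KMH = 80.0  # illustrative preset speed threshold

def fatigue_feature_set(is_daytime: bool, speed_kmh: float) -> set[str]:
    # Pupil dilation is a usable fatigue cue only in daylight at lower
    # speeds; at night or at high speed the pupil dilates naturally.
    if is_daytime and speed_kmh < SPEED_THRESHOLD_KMH:
        return FACE_KEYPOINTS | PUPIL_KEYPOINTS
    return set(FACE_KEYPOINTS)

print(fatigue_feature_set(True, 50.0))   # face + pupil key points
print(fatigue_feature_set(False, 50.0))  # face key points only
```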
S12, extracting each fatigue monitoring feature point recorded in the obtained fatigue monitoring feature set from each frame of face image acquired in real time, and mapping the fatigue monitoring feature point into an adaptive rectangular frame of a corresponding designated area in a plane space template in a dimension-reducing manner;
As shown in fig. 2, the planar space template includes at least 2 designated areas 100, each designated area 100 includes a plurality of rectangular frames 200 of different sizes with unique numbers, and fatigue monitoring feature points of different types are dimension-reduced and mapped into the adapted rectangular frames 200 in the corresponding designated areas 100. The specific method includes the steps of:
S121, intercepting an area image containing fatigue monitoring feature points of the same type; then, for each rectangular frame in the designated area corresponding to that type (the type of the fatigue monitoring feature points corresponds to the type of the designated area), accommodating the area image in the rectangular frame and calculating the area intersection-over-union of the two, and taking the rectangular frame with the largest intersection-over-union as the adaptation object;
The meaning of the type of fatigue monitoring feature point is explained here as follows:
The types of fatigue monitoring feature points include the eyelid type, the pupil type and the mouth type, wherein the eyelid-type fatigue monitoring feature points include the upper eyelid key point 1 and the lower eyelid key point 2 shown in fig. 2;
The fatigue monitoring feature points of the pupil type include an upper pupil key point 3, a lower pupil key point 4, a left pupil key point 5 and a right pupil key point 6 shown in fig. 2;
The fatigue monitoring feature point of the mouth type is the chin key point 7 shown in fig. 2.
The method for intercepting the regional image containing the same type of fatigue monitoring feature points is exemplified as follows:
For example, when an eyelid image is to be intercepted, the identified upper, lower, left and right eyelid key points are taken as the boundary, and the rectangular area enclosed by these 4 key points is intercepted, by rectangular frame selection, as the eyelid image. Images containing pupil key points are intercepted on the same principle and are not described again. The mouth-type area image is the region from the tip of the nose to the chin; its interception follows the same principle as that of the eyelid image.
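The adaptation of step S121 amounts to maximizing an intersection-over-union (IoU). A minimal sketch follows, under the assumption of axis-aligned (x, y, width, height) boxes placed at a common origin once the area image is accommodated; the frame numbers and sizes are illustrative:

```python
# A sketch of step S121's adaptation; box layout and data are assumptions.
def iou(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def adapt_rectangle(area_image_box, numbered_frames):
    """Pick the uniquely numbered frame with the largest intersection ratio."""
    return max(numbered_frames, key=lambda n: iou(area_image_box, numbered_frames[n]))

frames = {"E1": (0, 0, 40, 20), "E2": (0, 0, 60, 30), "E3": (0, 0, 90, 45)}
print(adapt_rectangle((0, 0, 55, 28), frames))  # "E2"
```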
It should be specifically noted here that whether the positional relationship between designated areas of different types is set reasonably directly affects the real-time performance of the fatigue driving monitoring algorithm's response. To amplify the variability of the distance-change features between fatigue monitoring feature points across designated areas, the designated areas of different types are arranged triangularly; that is, as shown in fig. 2, the designated area accommodating the pupil area image, the designated area accommodating the eyelid area image and the designated area accommodating the mouth area image are arranged in a triangle.
S122, calculating three-dimensional coordinates of fatigue monitoring feature points and performing two-dimensional dimension reduction, wherein the method for calculating the three-dimensional coordinates specifically comprises the following steps:
S1221, acquiring face images at different visual angles at the same time, extracting an earlobe key point and a chin key point in the face images acquired at each visual angle, taking the chin key point closest to the ground as a first reference point calculated by three-dimensional coordinates, and taking the earlobe key point farthest from the ground in a second visual angle or a third visual angle as a second reference point calculated by three-dimensional coordinates;
The distances of the chin key point and the earlobe key point from the ground can be judged by setting a ground reference point: for example, take a certain point on the chest as the ground reference point and calculate the distance from the chin key point or the earlobe key point to it; the shorter that distance, the closer the key point is to the ground, and the longer it is, the farther from the ground. As shown in fig. 3, the chin key points include a left chin key point 101 and a right chin key point 102, and the earlobe key point is referenced by numeral 103. Here, face images are acquired simultaneously at 3 viewing angles; the second and third viewing angles are opposite to each other and each forms 90° with the first viewing angle.
S1222, calculating a first virtual point and/or a second virtual point corresponding to the first reference point and/or the second reference point under the second view angle or the third view angle for the face image in which the first reference point and/or the second reference point is not detected;
For example, in three-dimensional space, a face image is acquired at the second viewing angle from the left side of the face shown in fig. 3 (looking from left to right); the right earlobe key point is then not captured at the second viewing angle. On the assumption that the right earlobe key point is farther from the ground than the left earlobe key point, the right earlobe key point serves as the second reference point, and the second virtual point corresponding to the second reference point at this viewing angle therefore needs to be calculated.
Similarly, in three-dimensional space, a face image is acquired at the second viewing angle from the left side of the face shown in fig. 3, and the left chin key point 101 shown in fig. 3 is then not captured at the second viewing angle. Assuming the left chin key point 101 is at this time closest to the ground, it is taken as the first reference point, and the first virtual point corresponding to the first reference point at the second viewing angle needs to be calculated.
The method for calculating the virtual point location according to the present invention will be briefly described below by taking the first virtual point location as an example:
the method for calculating the first virtual point position comprises the following steps:
A1, acquiring the rotation angle of the head in the current frame relative to the non-rotated initial state, the rotation angle including a left rotation angle, a right rotation angle, a look-down angle and a look-up angle (the spherical rotation angles of the head);
A2, acquiring the positioning position, in the face image acquired at the current viewing angle, of the other chin key point that is not used as the first reference point. For example, the chin key point 102 in fig. 4 is not used as the first reference point, so its positioning position in the face image acquired at this viewing angle is obtained; the method for positioning the chin key point 102 in the face image at the current viewing angle uses an existing method, so its specific description is omitted;
A3, calculating the first virtual point of the first reference point in the face image at the current viewing angle according to the positioning position of the other chin key point, the rotation angle obtained in step A1, and the spatial positional relationship between the other chin key point and the first reference point.
In fig. 5, the position of the other chin key point 102 is known, the rotation angle is known, and the spatial positional relationship between the other chin key point and the first reference point is known, i.e. the straight-line distance between the other chin key point 102 and the first reference point is fixed and known; from these 3 known quantities, the first virtual point 300 shown in fig. 5 can be calculated rapidly.
The calculation principle of the second virtual point location is the same as that of the first virtual point location, and will not be described again.
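A simplified planar sketch of steps A1 to A3 follows. It assumes a single yaw angle and a known fixed chin-to-chin distance, whereas the method described above uses the full spherical head rotation, so this is only a schematic of the idea:

```python
# Estimate where the occluded first reference point (a chin key point) falls
# in the current view, from the visible other chin key point, the head yaw,
# and the fixed chin-to-chin distance. Single-yaw geometry is an assumption.
import math

def first_virtual_point(other_chin_xy, yaw_rad, chin_distance_px):
    x, y = other_chin_xy
    # Foreshortened horizontal offset of the hidden point under yaw rotation
    # (illustrative planar model; the sign depends on which side is occluded).
    dx = chin_distance_px * math.cos(yaw_rad)
    return (x - dx, y)

print(first_virtual_point((320.0, 410.0), math.radians(30), 80.0))
```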
S1223, setting the z-axis coordinate of the first reference point in the three-dimensional coordinate system as a value of 0, taking the vertical distance between the second reference point and the first reference point as the coordinate value of the second reference point on the z-axis, and solving the three-dimensional coordinate of each fatigue monitoring feature point according to the spatial distance relation between each fatigue monitoring feature point and the first reference point or the first virtual point and the spatial distance relation between each fatigue monitoring feature point and the second reference point or the second virtual point.
Specifically, the x-axis coordinate value of the fatigue monitoring feature point is as follows: the horizontal distance between the fatigue monitoring feature point and the second reference point or the second virtual point under the first visual angle;
the y-axis coordinate values are: the horizontal distance between the fatigue monitoring feature point and the second reference point or the second virtual point under the second view angle or the third view angle;
the z-axis coordinate values are: and the vertical distance between the fatigue monitoring characteristic point and the first reference point or the first virtual point under any view angle.
In step S122, the method for two-dimensional dimension reduction of the fatigue monitoring feature points includes:
In the three-dimensional coordinates of the fatigue monitoring feature points of the same face image, there is at least 1 axis whose type corresponds across points and whose coordinate values are identical. For example, if the three-dimensional coordinates of two points are (x1, y1, z1) and (x2, y2, z2), then x1 and x2 correspond in axis type, as do y1 and y2; an axis type corresponding with identical coordinate values means, for example, that z1 and z2 are both 0.
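A minimal sketch combining step S1223 with the dimension reduction just described; the coordinate values and function names are illustrative assumptions:

```python
# Build (x, y, z) from the distances described above, then drop the one
# axis on which every point of the same face image carries the same value.
def to_3d(h_dist_view1, h_dist_view23, v_dist):
    # x: horizontal distance to the 2nd reference/virtual point in view 1
    # y: horizontal distance to the 2nd reference/virtual point in view 2/3
    # z: vertical distance to the 1st reference/virtual point in any view
    return (h_dist_view1, h_dist_view23, v_dist)

def reduce_2d(points):
    """Project 3D points to 2D by dropping an axis of identical values."""
    for axis in range(3):
        if len({p[axis] for p in points}) == 1:
            return [tuple(v for i, v in enumerate(p) if i != axis) for p in points]
    raise ValueError("no axis with identical coordinate values to drop")

pts = [to_3d(12.0, 5.0, 0.0), to_3d(14.5, 5.5, 0.0)]  # shared z -> keep (x, y)
print(reduce_2d(pts))  # [(12.0, 5.0), (14.5, 5.5)]
```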
After the three-dimensional coordinates of each fatigue monitoring feature point are calculated and reduced to two dimensions, the method of mapping the different types of fatigue monitoring feature point positions into the adapted rectangular frames of the designated areas of the planar space template proceeds to the step of:
And S123, after aligning the dimension-reduced two-dimensional coordinate points with the reference points in the adaptive object, realizing the dimension-reduced mapping of the fatigue monitoring feature points and the spatial relationship of the adaptive rectangular frame.
The method for aligning the two-dimensional coordinate point and the reference point is specifically as follows:
extracting any one of the fatigue monitoring feature points in the same type as an alignment object;
Matching corresponding reference points of the alignment objects in the rectangular frame;
after aligning the alignment object with the matched reference point, accommodating the area image containing each fatigue monitoring characteristic point of the type into a rectangular frame;
And calculating the accommodating coordinates of the accommodated fatigue monitoring feature points.
In the invention, the reference points to be aligned of the upper eyelid key point 1 and the upper pupil key point 3 shown in fig. 2 are the midpoints of the upper long sides of the matched rectangular frame;
the reference points to be aligned of the lower eyelid key point 2, the lower pupil key point 4 and the chin key point 7 are the midpoints of the lower long sides of the adaptation objects;
The reference point to be aligned of the left pupil key point 5 is the midpoint of the left short side of the adaptive object;
the reference point to be aligned of the right pupil key point 6 is the midpoint of the right short side of the adaptation object.
The calculation principle of the accommodated coordinates of the fatigue monitoring feature points is briefly described as follows:
The positioning coordinates of each reference point of the adaptation object within the planar space template are determined and known in advance, and the two-dimensional coordinates of each fatigue monitoring feature point reduced from three-dimensional space have been calculated in the preceding steps. Once a designated fatigue monitoring feature point is aligned with the reference point, the coordinates of every fatigue monitoring feature point within the rectangular frame, i.e. the accommodated coordinates, can be obtained by rapid mapping from the coordinate spatial relationship between the other feature points and the aligned reference point. No complex calculation is needed, which further satisfies the high real-time requirement on the fatigue driving monitoring response in the application scenario of the invention.
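Because the alignment is a pure translation, the accommodated coordinates follow without complex calculation; a minimal sketch, with all values illustrative:

```python
# A sketch of the alignment in step S123: translate all reduced 2D points so
# the designated alignment point coincides with the frame's pre-known
# reference point; every other point keeps its relative offset.
def contained_coordinates(points_2d, align_idx, reference_pt):
    ax, ay = points_2d[align_idx]
    rx, ry = reference_pt
    dx, dy = rx - ax, ry - ay
    return [(x + dx, y + dy) for x, y in points_2d]

# Upper eyelid point aligned to the midpoint of the frame's upper long side.
eyelid_pts = [(3.0, 1.0), (3.0, -1.0)]  # reduced 2D upper/lower eyelid points
print(contained_coordinates(eyelid_pts, 0, (50.0, 20.0)))
# [(50.0, 20.0), (50.0, 18.0)]
```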
After each fatigue monitoring feature point bit in the fatigue monitoring feature set is mapped into an adaptive rectangular frame of a corresponding designated area in the planar space template in a dimension-reducing manner, the fatigue degree calculation method is transferred to the steps of:
S13, acquiring the distance-change history feature time sequence D = {d_1, d_2, …, d_n} associated with the current monitoring time point t_n, wherein d_i denotes the distance variation of the i-th element of the sequence D, and the distance variation of each element is the average of the pairwise distance variations between fatigue monitoring feature points in different designated areas;
the elements are ordered chronologically: d_n corresponds to the frame immediately preceding the current monitoring time point t_n, and d_1 is the farthest in time from the current monitoring time point t_n.
The calculation of d_n is described below.
When the driving environment is night and/or the driving state is a driving speed greater than or equal to the preset speed threshold, d_n is calculated by the following steps:
B1, designating the upper eyelid key point and the chin key point as the basis for calculating d_n;
B2, judging whether the fatigue trends of the upper eyelid key point and the chin key point are the same,
if yes, calculating the coordinate symmetric point obtained by moving the upper eyelid key point against its fatigue trend so as to amplify the fatigue feature, then going to step B3;
if not, jumping to step B4;
B3, letting L_n denote the distance at time t_n between the chin key point and the coordinate symmetric point, and L_{n-1} the distance at time t_{n-1} between the chin key point and the upper eyelid key point (or the coordinate symmetric point corresponding to time t_{n-1}); calculating the absolute value of the difference |L_n - L_{n-1}| as d_n;
B4, calculating the distance L_n at time t_n between the chin key point and the upper eyelid key point, and the distance L_{n-1} at time t_{n-1} between the chin key point and the upper eyelid key point (or the coordinate symmetric point corresponding to time t_{n-1}); then calculating the absolute value of the difference |L_n - L_{n-1}| as d_n.
Here, the upper eyelid key point is the upper eyelid key point 1 shown in fig. 2, the lower eyelid key point is the lower eyelid key point 2 shown in fig. 2, and the chin key point may be either the left chin key point or the right chin key point.
The method for judging whether the fatigue trends of the key points of different types are the same is briefly described as follows:
When the fatigue monitoring feature set obtained in step S11 includes the cheekbone key points, the change in distance between the cheekbone key points and the chin key points can represent the fatigue trend embodied by the chin key points. For example, if that distance is longer than the initial value (lips closed), the face may be in a yawning fatigue state; and if the length increases in the current frame compared with the previous frame, the fatigue trend is going from weak to strong.
The principle of judging the fatigue trend through the eyelid key points is the same: for example, if the spacing between the upper eyelid key point 1 and the lower eyelid key point 2 shown in fig. 2 is shorter than in the previous frame, the fatigue trend embodied by the eyelid key points is going from weak to strong. To avoid misjudging the fatigue state from the distance change between the upper and lower eyelid key points during a normal blink, the invention uses the distance-change history feature time sequence D, which represents the average distance change over a certain duration, thereby reducing misjudgment.
The principle of judging the fatigue trend through the pupil key points is likewise the same and is not repeated.
It should also be noted that when fatigue monitoring feature points of different types accommodated in different designated areas of the planar space template show the same fatigue trend, d_n may not differ much from d_{n-1}. As shown in fig. 2, if the chin key point 7 moves from the first position 10 at time t_{n-1} to the second position 20 at time t_n, while the upper eyelid key point 1 moves from its position at time t_{n-1} to the fourth position 40 at time t_n, then the distance L_n between chin key point 7 and upper eyelid key point 1 at time t_n may be almost unchanged compared with the distance L_{n-1} at time t_{n-1}. To solve this problem, the invention uses the coordinate symmetric point of the upper eyelid key point moved against its fatigue trend, and calculates the distance between the chin key point and that symmetric point, thereby amplifying the variation feature of the fatigue state.
The calculation mode of the coordinate symmetry point of the upper eyelid key point is briefly described as follows:
Taking the positioning position of the upper eyelid key point in the planar space template at time t_{n-1} as the symmetric reference point, the original point at time t_n is flipped about the symmetric reference point to form the coordinate symmetric point; the coordinate symmetric point, the symmetric reference point and the original point lie on one straight line, and the coordinate symmetric point and the original point are equidistant from the symmetric reference point.
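A minimal sketch of the coordinate symmetric point, with illustrative values:

```python
# Reflect the point at time t_n about its t_{n-1} position, so the three
# points are collinear and equidistant, doubling the apparent movement and
# amplifying the fatigue feature.
def symmetric_point(original_tn, reference_tn1):
    ox, oy = original_tn
    rx, ry = reference_tn1
    return (2 * rx - ox, 2 * ry - oy)

# An eyelid that drooped from y=10 to y=8 reflects to y=12, widening the
# measured distance to the chin key point.
print(symmetric_point((10.0, 8.0), (10.0, 10.0)))  # (10.0, 12.0)
```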
It is also to be noted that, when the driving environment is daytime and/or the driving state is a driving speed lower than the preset speed threshold, assume the right pupil key point 6, the upper eyelid key point 1 and the chin key point 7 shown in fig. 2 are taken as the basis for calculating d_n. Then d_n is the average of 3 distance change amounts: the change, at time t_n compared with t_{n-1}, of the distance between the right pupil key point 6 and the upper eyelid key point 1; the change, at t_n compared with t_{n-1}, of the distance between the right pupil key point 6 and the chin key point 7; and the change, at t_n compared with t_{n-1}, of the distance between the upper eyelid key point 1 and the chin key point 7. When calculating these 3 distance change amounts, if the two fatigue monitoring feature points of a given distance change share the same fatigue trend, the coordinate symmetric point of the designated feature point is first calculated by the method above and the distance change is then computed, so as to amplify the distance-change difference. It is emphasized that the invention considers both the correlation and the difference of the fatigue states of different types of fatigue monitoring feature points: d_n is the average of the pairwise distance variations between fatigue monitoring feature points in different designated areas.
After the distance-change history feature time sequence D is acquired, the fatigue degree calculation proceeds to the step of:
S14, extracting the front m elements of the distance-change history feature time sequence D to form a first array and the remaining elements to form a second array, then calculating the average distance variation of the elements in the first array and the second array respectively, denoted μ1 and μ2, and taking the absolute value of the difference |μ1 - μ2| as the fatigue degree at the time point t_n.
The method of updating the fatigue threshold in step S3 is briefly described as follows:
The fatigue degree calculated in step S1 has corresponding fatigue features (the pairwise distance-change relationships of fatigue monitoring feature points across designated areas in the planar space template). The invention provides a fatigue threshold lookup table recording the correspondence between fatigue feature samples and fatigue thresholds. If the driver confirms in step S2 that no fatigue state has been entered, the machine calculates the feature similarity between the current fatigue feature and each fatigue feature sample in the table, and adopts the fatigue threshold corresponding to the most similar sample as the basis for judging whether fatigue occurs at the next monitoring moment; if the driver confirms in step S2 that a fatigue state has been entered, the fatigue threshold is not updated.
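A hedged sketch of this update rule follows; the similarity measure (negative squared Euclidean distance) and the lookup-table contents are assumptions made for illustration, as they are not fixed here:

```python
# A sketch of the step-S3 threshold update; measure and data are assumed.
def update_threshold(current_feature, lookup_table, driver_confirmed_fatigue):
    """Return the new fatigue threshold, or None when no update occurs."""
    if driver_confirmed_fatigue:
        return None  # per the description, the threshold is not updated
    def similarity(sample):
        return -sum((a - b) ** 2 for a, b in zip(sample, current_feature))
    best_sample = max(lookup_table, key=similarity)
    return lookup_table[best_sample]

table = {(0.1, 0.2): 0.30, (0.4, 0.5): 0.45}  # feature sample -> threshold
print(update_threshold((0.38, 0.52), table, driver_confirmed_fatigue=False))
# 0.45  (threshold of the most similar sample)
```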
Continual updating may drive the fatigue threshold to an unreasonably high or low value. To avoid this problem, in the present invention, when the machine judges that the dynamically updated fatigue threshold is greater than a first threshold or less than a second threshold, it resets the fatigue threshold to the initial value; alternatively, with vehicle ignition-off as the instruction, the machine resets the fatigue threshold to the initial value. The first threshold is greater than the second threshold.
In addition, the non-inductive fatigue driving monitoring can be combined with accompanying analysis of road conditions to improve the accuracy of the fatigue judgment. For example, consider two road conditions where road condition 1 is worse than road condition 2: if the same user yields the same calculated fatigue degree under both, the fatigue threshold set for road condition 1 is lower than that set for road condition 2. Under worse road conditions drivers concentrate more easily, so setting the same fatigue threshold for different road conditions would impair the accuracy of the fatigue driving judgment.
In summary, the invention monitors the inter-point distance changes of the dimension-reduced fatigue monitoring feature points within the adapted rectangular frames in the designated areas of the planar space template and calculates the fatigue degree at each monitoring time point, realizing continuous monitoring of whether fatigue driving occurs. The distance-change features fuse the correlation and difference among the fatigue features that represent the fatigue state under different driving environments and driving states, improving the accuracy of the fatigue driving judgment. The fatigue threshold is dynamically updated from person to person, reducing the possibility of misjudgment and improving the pertinence of the algorithm. By arranging, in each designated area with a clear relative spatial position relationship, a plurality of fixed-size rectangular frames with unique codes, the positioning positions of the fatigue monitoring feature points in the planar space template can be calculated quickly, the fatigue state features can be amplified, and the calculation speed of fatigue monitoring is improved.
It should be understood that the above description is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be apparent to those skilled in the art that various modifications, equivalents, variations, and the like can be made to the present application. Such variations are intended to be within the scope of the application without departing from the spirit thereof. In addition, some terms used in the description and claims of the present application are not limiting, but are merely for convenience of description.

Claims (10)

1. A non-inductive fatigue driving monitoring and early warning intervention method is characterized by comprising the following steps:
S1, monitoring in real time the inter-point distance change of the fatigue monitoring feature points in a dimension-reduced virtual space, so as to calculate the fatigue degree at each monitoring time point;
S2, judging whether the fatigue degree is greater than the dynamically updated fatigue threshold corresponding to the current time point;
if yes, intervening in the fatigue driving;
if not, generating prompt information to prompt the driver to confirm the fatigue state, and turning to step S3 after the driver confirms;
S3, recording the confirmation information and updating the fatigue threshold.
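Read as a control loop, claim 1 reduces to the following minimal sketch; the callables `intervene` and `prompt_driver`, and the reuse of the hypothetical `FatigueThresholdTable` sketched earlier, are illustrative assumptions rather than the patented implementation.

```python
def monitoring_cycle(fatigue_degree, table, features, intervene, prompt_driver):
    """One pass of steps S1-S3: `fatigue_degree` comes from S1;
    `prompt_driver()` is assumed to return True when the driver
    confirms being fatigued."""
    if fatigue_degree > table.current:         # S2: threshold exceeded
        intervene()                            # intervene in fatigue driving
    else:
        confirmed = prompt_driver()            # S2: ask the driver to confirm
        table.update(features, confirmed)      # S3: record and update
```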
2. The non-inductive fatigue driving monitoring and early warning intervention method according to claim 1, wherein in step S1, the method for calculating the fatigue degree comprises the steps of:
S11, acquiring a corresponding fatigue monitoring feature set according to the current driving environment and driving state;
S12, extracting each fatigue monitoring feature point recorded in the fatigue monitoring feature set from each frame of face image acquired in real time, and dimension-reduction mapping the fatigue monitoring feature points into the adapted rectangular frames of the corresponding designated areas in a plane space template, wherein the designated areas of different types are distributed in a delta shape;
S13, acquiring a distance-change history feature time sequence D = {d_1, d_2, ..., d_n} associated with the current monitoring time point t_n, wherein d_i denotes the i-th element of the sequence D, each element being the average of the distance variations between every two fatigue monitoring feature points in different designated areas, and 1 < i < n;
S14, extracting the front k elements of the distance-change history feature time sequence D to form a first array, the remaining elements forming a second array; then respectively calculating the average distance variation of the elements in the first array and in the second array, recorded as μ_1 and μ_2, and taking the absolute value of the difference |μ_1 − μ_2| as the fatigue degree at time point t_n.
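The split-and-compare computation of steps S13–S14 can be sketched as follows; this is an assumed minimal rendering, with the split index `k` and the example sequence chosen arbitrarily since the claim does not fix them.

```python
from statistics import mean

def fatigue_degree(d: list[float], k: int) -> float:
    """Steps S13-S14 sketch: split the distance-change history sequence
    D = {d_1, ..., d_n} into its front k elements and the remainder,
    average each part, and return |mu_1 - mu_2| as the fatigue degree."""
    first, second = d[:k], d[k:]
    mu1, mu2 = mean(first), mean(second)
    return abs(mu1 - mu2)

# Example: a drowsy-onset pattern where recent distance changes shrink.
history = [0.42, 0.40, 0.41, 0.39, 0.22, 0.20, 0.18, 0.17]
print(fatigue_degree(history, k=4))  # -> 0.2125
```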
3. The non-inductive fatigue driving monitoring and early warning intervention method according to claim 2, wherein in step S11, when the driving environment is daytime and/or the driving state is that the driving speed is lower than a preset speed threshold, the correspondingly acquired fatigue monitoring feature set comprises pupil key points and facial key points;
when the driving environment is nighttime and/or the driving state is that the driving speed is greater than or equal to the preset speed threshold, the correspondingly acquired fatigue monitoring feature set comprises the facial key points;
the facial key points comprise eyelid key points, cheekbone key points, and chin key points.
4. The non-inductive fatigue driving monitoring and early warning intervention method according to claim 2, wherein the plane space template in step S12 comprises at least 2 designated areas, each designated area comprises a plurality of rectangular frames with different sizes and unique numbers, and fatigue monitoring feature points of different types are dimension-reduction mapped into the adapted rectangular frames in the corresponding designated areas; the specific method comprises the steps of:
S121, intercepting a region image containing fatigue monitoring feature points of the same type, calculating the area intersection-over-union ratio between the region image and each rectangular frame in the designated area corresponding to that type after the region image is accommodated in the frame, and taking the rectangular frame with the largest intersection-over-union ratio as the adaptation object;
S122, calculating the three-dimensional coordinates of the fatigue monitoring feature points and performing two-dimensional dimension reduction;
S123, after aligning the dimension-reduced two-dimensional coordinate points with the reference points in the adaptation object, realizing the dimension-reduction mapping of the fatigue monitoring feature points and their adapted spatial relationship with the rectangular frame.
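Step S121 amounts to an intersection-over-union (IoU) search over the candidate frames. A minimal sketch follows, assuming axis-aligned boxes given as (x1, y1, x2, y2) tuples and a hypothetical list of numbered frames:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def best_frame(region_image_box, frames):
    """Step S121 sketch: pick the rectangular frame with the largest IoU
    against the intercepted region image as the adaptation object."""
    return max(frames, key=lambda f: iou(region_image_box, f["box"]))

# Hypothetical uniquely numbered frames inside one designated area.
frames = [{"no": 1, "box": (0, 0, 40, 20)},
          {"no": 2, "box": (0, 0, 60, 30)},
          {"no": 3, "box": (0, 0, 90, 45)}]
print(best_frame((0, 0, 55, 28), frames)["no"])  # -> 2
```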
5. The non-inductive fatigue driving monitoring and early warning intervention method according to claim 4, wherein in step S122, the method for calculating the three-dimensional coordinates of the fatigue monitoring feature points comprises the steps of:
S1221, acquiring face images from different view angles at the same moment, extracting the earlobe key points and the chin key points in the face image acquired at each view angle, taking the chin key point closest to the ground as the first reference point for the three-dimensional coordinate calculation, and taking the earlobe key point farthest from the ground under the second view angle or the third view angle as the second reference point for the three-dimensional coordinate calculation;
S1222, for any face image in which the first reference point and/or the second reference point is not detected, respectively calculating the first virtual point and/or the second virtual point corresponding to the first reference point and/or the second reference point under the second view angle or the third view angle;
S1223, setting the z-axis coordinate of the first reference point in the three-dimensional coordinate system to 0, taking the vertical distance between the second reference point and the first reference point as the z-axis coordinate value of the second reference point, and then solving the three-dimensional coordinates of each fatigue monitoring feature point according to its spatial distance relation to the first reference point or the first virtual point and its spatial distance relation to the second reference point or the second virtual point.
6. The non-inductive fatigue driving monitoring and early warning intervention method according to claim 5, wherein in step S1222, the method for calculating the first virtual point comprises the steps of:
A1, acquiring the rotation angle of the head in the current frame relative to the non-rotated initial state;
A2, acquiring the positioning position, in the face image acquired under the current view angle, of the other chin key point that is not used as the first reference point;
A3, calculating the first virtual point of the first reference point in the face image under the current view angle according to the positioning position of the other chin key point, the rotation angle obtained in step A1, and the spatial position relation between the other chin key point and the first reference point.
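Step A3 is essentially a known rigid offset rotated by the head angle. The sketch below assumes, for simplicity, an in-plane 2-D rotation; the offset vector, angle, and coordinates are invented for illustration.

```python
import math

def first_virtual_point(other_chin_xy, offset_xy, angle_deg):
    """Claim 6 sketch: locate the occluded first reference point from the
    visible chin key point (A2), the head rotation angle (A1), and the
    rest-pose offset between the two chin key points (A3).

    The offset is rotated by the head angle, then added to the visible
    point; a 2-D in-plane rotation is assumed for simplicity."""
    a = math.radians(angle_deg)
    dx = offset_xy[0] * math.cos(a) - offset_xy[1] * math.sin(a)
    dy = offset_xy[0] * math.sin(a) + offset_xy[1] * math.cos(a)
    return (other_chin_xy[0] + dx, other_chin_xy[1] + dy)

# Visible chin point at (120, 300), rest-pose offset (30, 5), head turned 15°.
print(first_virtual_point((120, 300), (30, 5), 15.0))
```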
7. The non-inductive fatigue driving monitoring and early warning intervention method according to claim 5 or 6, wherein in step S1223, the x-axis coordinate value of a fatigue monitoring feature point in the three-dimensional space is: the horizontal distance between the fatigue monitoring feature point and the second reference point or the second virtual point under the first view angle;
the y-axis coordinate value is: the horizontal distance between the fatigue monitoring feature point and the second reference point or the second virtual point under the second view angle or the third view angle;
the z-axis coordinate value is: the vertical distance between the fatigue monitoring feature point and the first reference point or the first virtual point under any view angle.
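Claim 7 assembles each 3-D coordinate from per-view distances to the two reference points. A minimal sketch, with pixel-unit detections and all reference positions assumed for illustration:

```python
def assemble_xyz(pt_view1, pt_view2, ref2_view1, ref2_view2, ref1_any):
    """Claim 7 sketch: build (x, y, z) for one fatigue monitoring feature
    point from 2-D detections given as (horizontal, vertical) pixels.

    x: horizontal distance to the second reference under the first view;
    y: horizontal distance to the second reference under the second/third view;
    z: vertical distance to the first reference under any view."""
    x = abs(pt_view1[0] - ref2_view1[0])
    y = abs(pt_view2[0] - ref2_view2[0])
    z = abs(pt_view1[1] - ref1_any[1])
    return (x, y, z)

# Hypothetical detections of one feature point seen in two views.
print(assemble_xyz(pt_view1=(210, 140), pt_view2=(180, 150),
                   ref2_view1=(250, 90), ref2_view2=(230, 95),
                   ref1_any=(200, 320)))  # -> (40, 50, 180)
```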
8. The non-inductive fatigue driving monitoring and early warning intervention method according to claim 4, wherein in step S122, the method for two-dimensional dimension reduction of the fatigue monitoring feature points is as follows:
among the three-dimensional coordinates of the fatigue monitoring feature points in the same face image, there is at least one axis type on which the coordinate values of the points are identical, and the two-dimensional coordinates are obtained by discarding such an axis;
in step S123, the method for aligning the two-dimensional coordinate points with the reference points is as follows:
extracting any one of the fatigue monitoring feature points of the same type as the alignment object;
matching the reference point corresponding to the alignment object in the rectangular frame;
after aligning the alignment object with the matched reference point, accommodating the region image in the rectangular frame;
and calculating the coordinates of the fatigue monitoring feature points after accommodation.
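A short sketch of claim 8's reduce-then-align flow; which axis is shared and where the reference point sits are assumptions chosen for illustration.

```python
def reduce_and_align(points_3d, ref_point_2d, align_index=0):
    """Claim 8 sketch: drop the axis on which every point shares one value,
    then translate the 2-D points so the chosen alignment object lands on
    the rectangular frame's reference point."""
    # Find an axis whose coordinate value is identical across all points;
    # the claim guarantees at least one such axis exists.
    drop = next(ax for ax in range(3)
                if len({p[ax] for p in points_3d}) == 1)
    pts_2d = [tuple(v for ax, v in enumerate(p) if ax != drop)
              for p in points_3d]
    # Align: translate so the alignment object coincides with the reference.
    anchor = pts_2d[align_index]
    dx, dy = ref_point_2d[0] - anchor[0], ref_point_2d[1] - anchor[1]
    return [(x + dx, y + dy) for x, y in pts_2d]

# Eyelid-type points sharing the same y coordinate; reference point (10, 10).
print(reduce_and_align([(3, 7, 1), (5, 7, 2), (6, 7, 4)], (10, 10)))
# -> [(10, 10), (12, 11), (13, 13)]
```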
9. The non-inductive fatigue driving monitoring and early warning intervention method according to claim 8, wherein the types of the fatigue monitoring feature points comprise any one or more of an eyelid type, a pupil type, and a mouth type, wherein the fatigue monitoring feature points of the eyelid type comprise upper eyelid key points and/or lower eyelid key points;
The fatigue monitoring feature points of the pupil type comprise any one or more of an upper pupil key point, a lower pupil key point, a left pupil key point and a right pupil key point;
the fatigue monitoring feature points of the mouth type are chin key points;
when the dynamically updated fatigue threshold is judged to be greater than a first threshold or less than a second threshold, the machine sets the fatigue threshold to its initial value; alternatively, taking vehicle flameout as the instruction, the machine sets the fatigue threshold to its initial value; the first threshold is greater than the second threshold.
10. The non-inductive fatigue driving monitoring and early warning intervention method according to any one of claims 2-6 and 8-9, wherein when the driving environment is nighttime and/or the driving state is that the driving speed is greater than or equal to the preset speed threshold, the calculation method of d_i comprises the following steps:
B1, designating the upper eyelid key point and the chin key point as the calculation basis of d_i;
B2, judging whether the fatigue trend of the upper eyelid key point is the same as that of the chin key point;
if yes, calculating the coordinate symmetry point of the upper eyelid key point after moving against the fatigue trend, so as to amplify the fatigue features, and then turning to step B3;
if not, not calculating the coordinate symmetry point;
B3, defining, for the time points t_{i-1} and t_i, the distances between the chin key point and the coordinate symmetry point or the upper eyelid key point as L_{i-1} and L_i respectively, and calculating the absolute value of the difference |L_i − L_{i-1}| as d_i.
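A minimal sketch of claim 10's amplification trick: when both key points drift in the same (fatigue) direction, the upper eyelid point is mirrored against that drift so that the eyelid-to-chin distance change is magnified. The 1-D vertical coordinates, the mirroring rule, and the trend test are illustrative assumptions.

```python
def d_i(eyelid_prev, eyelid_now, chin_prev, chin_now):
    """Claim 10 sketch (steps B1-B3): compute d_i = |L_i - L_{i-1}|, where
    L is the vertical eyelid-to-chin distance, optionally amplified by
    mirroring the eyelid point against its fatigue-trend movement."""
    eyelid_drop = eyelid_now - eyelid_prev   # eyelid moving down (drooping)
    chin_drop = chin_now - chin_prev         # chin moving down (nodding)
    same_trend = (eyelid_drop > 0) == (chin_drop > 0)
    if same_trend:
        # B2 yes-branch: reflect the eyelid point about its previous
        # position (anti-fatigue-trend symmetry point) to amplify features.
        eyelid_now = eyelid_prev - eyelid_drop
    L_prev = abs(chin_prev - eyelid_prev)
    L_now = abs(chin_now - eyelid_now)
    return abs(L_now - L_prev)

# Drooping eyelid with a nodding chin (image y grows downward, in pixels):
# amplified d_i is 10, versus 2 without the symmetry-point reflection.
print(d_i(eyelid_prev=100, eyelid_now=104, chin_prev=260, chin_now=266))
```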
CN202410320081.7A 2024-03-20 2024-03-20 Non-inductive fatigue driving monitoring and intervention method Active CN117935231B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410320081.7A CN117935231B (en) 2024-03-20 2024-03-20 Non-inductive fatigue driving monitoring and intervention method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410320081.7A CN117935231B (en) 2024-03-20 2024-03-20 Non-inductive fatigue driving monitoring and intervention method

Publications (2)

Publication Number Publication Date
CN117935231A true CN117935231A (en) 2024-04-26
CN117935231B 2024-06-07

Family

ID=90754128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410320081.7A Active CN117935231B (en) 2024-03-20 2024-03-20 Non-inductive fatigue driving monitoring and intervention method

Country Status (1)

Country Link
CN (1) CN117935231B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101032405A (en) * 2007-03-21 2007-09-12 汤一平 Safe driving auxiliary device based on omnidirectional computer vision
CN108830240A (en) * 2018-06-22 2018-11-16 广州通达汽车电气股份有限公司 Fatigue driving state detection method, device, computer equipment and storage medium
WO2020078464A1 (en) * 2018-10-19 2020-04-23 上海商汤智能科技有限公司 Driving state detection method and apparatus, driver monitoring system, and vehicle
CN109902560A (en) * 2019-01-15 2019-06-18 浙江师范大学 A kind of fatigue driving method for early warning based on deep learning
CN112241658A (en) * 2019-07-17 2021-01-19 青岛大学 Fatigue driving early warning system and method based on depth camera
WO2022142997A1 (en) * 2020-12-30 2022-07-07 微网优联科技(成都)有限公司 Autonomous vehicle driving method based on internet of things, and terminal
WO2023241358A1 (en) * 2022-06-17 2023-12-21 京东方科技集团股份有限公司 Fatigue driving determination method and apparatus, and electronic device
CN114898341A (en) * 2022-07-14 2022-08-12 苏州魔视智能科技有限公司 Fatigue driving early warning method and device, electronic equipment and storage medium
CN115227247A (en) * 2022-07-20 2022-10-25 中南大学 Fatigue driving detection method and system based on multi-source information fusion and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Pradnya N. Bhujbal et al.: "Lane departure warning system based on Hough transform and Euclidean distance", 2015 Third International Conference on Image Information Processing (ICIIP), 31 December 2015 *
汪旭; 陈仁文; 黄斌: "Implementation of a driver driving safety monitoring system based on the Android system", Electronic Measurement Technology (电子测量技术), no. 08, 23 April 2019 *

Also Published As

Publication number Publication date
CN117935231B (en) 2024-06-07

Similar Documents

Publication Publication Date Title
JP5137833B2 (en) Gaze direction detection device and gaze direction detection method
EP3033999B1 (en) Apparatus and method for determining the state of a driver
US7657079B2 (en) Single constraint at a time (SCAAT) tracking of a virtual reality (VR) display
CN108983982B (en) AR head display equipment and terminal equipment combined system
JP2018532199A (en) Eye pose identification using eye features
CN112257696B (en) Sight estimation method and computing equipment
WO2020042542A1 (en) Method and apparatus for acquiring eye movement control calibration data
CN109263637B (en) Collision prediction method and device
CN110865704B (en) Gesture interaction device and method for 360-degree suspended light field three-dimensional display system
JP2023504207A (en) Systems and methods for operating head mounted display systems based on user identification
EP1443416A1 (en) Information processing system and information processing apparatus
WO2017179279A1 (en) Eye tracking device and eye tracking method
CN114424147A (en) Determining eye rotation center using one or more eye tracking cameras
JP2021532464A (en) Display systems and methods for determining vertical alignment between the left and right displays and the user's eyes.
CN112183160A (en) Sight estimation method and device
CN109711239A (en) Based on the visual attention detection method for improving mixing increment dynamic bayesian network
CN117935231B (en) Non-inductive fatigue driving monitoring and intervention method
JPH10198506A (en) System for detecting coordinate
JP6906943B2 (en) On-board unit
Cai et al. Gaze estimation driven solution for interacting children with ASD
EP2261772A1 (en) Method for controlling an input device based on the detection of attitude or eye gaze
WO2022183372A1 (en) Control method, control apparatus, and terminal device
TWI758717B (en) Vehicle-mounted display device based on automobile a-pillar, method, system and storage medium
JP3686418B2 (en) Measuring device and method
RU2444275C1 (en) Method and apparatus for determining spatial position of eyes for calculating line of sight

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant