CN111797654A - Driver fatigue state detection method and device, storage medium and mobile terminal

Info

Publication number
CN111797654A
CN111797654A
Authority
CN
China
Prior art keywords
face
driver
image
images
adjacent frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910282019.2A
Other languages
Chinese (zh)
Inventor
陈仲铭 (Chen Zhongming)
何明 (He Ming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910282019.2A priority Critical patent/CN111797654A/en
Publication of CN111797654A publication Critical patent/CN111797654A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/193: Preprocessing; Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Ophthalmology & Optometry (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides a method, a device, a storage medium and a mobile terminal for detecting the fatigue state of a driver. The method comprises: calculating the offset of the driver's face contour between two adjacent frames of face images of the driver, wherein the face images of the two adjacent frames are associated with two adjacent moments; determining the change of the driver's face contour according to the offset; extracting preset part images from each of the two adjacent frames of face images to obtain the preset part images of the two adjacent frames; determining the change of a preset part of the driver's face according to the preset part images of the two adjacent frames; and determining the fatigue state of the driver according to the change of the driver's face contour and the change of the preset part of the driver's face. The method and device can improve the accuracy of judging the driver's fatigue state.

Description

Driver fatigue state detection method and device, storage medium and mobile terminal
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a method and an apparatus for detecting a fatigue state of a driver, a storage medium, and a mobile terminal.
Background
With the development of artificial intelligence and computer technology, intelligent driving technology is becoming mature, and detection of the driver's fatigue state is a key technology of intelligent driving. Existing intelligent driving systems or driver-assistance systems mainly detect the driver's fatigue state from facial expression features such as blinking, eye closing and yawning; this single detection means cannot accurately judge the driver's fatigue state.
Disclosure of Invention
The embodiment of the application provides a driver fatigue state detection method and device, a storage medium and a mobile terminal, which can improve the accuracy of judging the fatigue state of a driver.
The embodiment of the application provides a method for detecting a fatigue state of a driver, which comprises the following steps:
calculating the offset of the face contour of the driver in the face images of two adjacent frames according to the face images of the two adjacent frames of the driver, wherein the face images of the two adjacent frames are associated with two adjacent moments;
determining the change of the face contour of the driver according to the offset of the face contour of the driver;
respectively extracting preset position images from the face images of the two adjacent frames to obtain the preset position images of the two adjacent frames;
determining the change of a preset part in the face of the driver according to the preset part images of the two adjacent frames;
and determining the fatigue state of the driver according to the change of the face contour of the driver and the change of a preset part in the face of the driver.
The embodiment of the present application further provides a driver fatigue state detection device, including:
the calculation module is used for calculating the offset of the face contour of the driver in the face images of the two adjacent frames according to the face images of the two adjacent frames of the driver, wherein the face images of the two adjacent frames are associated with two adjacent moments;
the first determining module is used for determining the change of the face contour of the driver according to the offset of the face contour of the driver;
the image extraction module is used for respectively extracting preset position images from the face images of the two adjacent frames to obtain the preset position images of the two adjacent frames;
the second determination module is used for determining the change of the preset part in the face of the driver according to the preset part images of the two adjacent frames;
and the fatigue judging module is used for determining the fatigue state of the driver according to the change of the face outline of the driver and the change of a preset part in the face of the driver.
The embodiment of the application also provides a storage medium in which a computer program is stored; when the computer program runs on a computer, the driver fatigue state detection method of the above embodiment is implemented.
The embodiment of the application further provides a mobile terminal comprising a processor and a memory, wherein a computer program is stored in the memory and the processor calls the computer program stored in the memory to implement the driver fatigue state detection method of the above embodiment.
In the embodiments of the application, the change of the driver's face contour is determined by calculating the offset of the face contour between two adjacent frames of face images, the change of a preset part of the driver's face is determined from the preset part images of the two adjacent frames, and the fatigue state of the driver is then determined jointly from the change of the face contour and the change of the preset part. The detection means are thus enriched and the accuracy of judging the driver's fatigue state is improved.
Drawings
Fig. 1 is a first application scenario diagram of a method for detecting a fatigue state of a driver according to an embodiment of the present application.
Fig. 2 is a second application scenario diagram of the method for detecting the fatigue state of the driver according to the embodiment of the present application.
Fig. 3 is a first flowchart of a method for detecting a fatigue state of a driver according to an embodiment of the present application.
Fig. 4 is a second flowchart of a method for detecting a fatigue state of a driver according to an embodiment of the present application.
Fig. 5 is a third flowchart illustrating a method for detecting a fatigue state of a driver according to an embodiment of the present application.
Fig. 6 is a fourth flowchart illustrating a method for detecting a fatigue state of a driver according to an embodiment of the present application.
Fig. 7 is a first structural schematic diagram of a driver fatigue state detection device according to an embodiment of the present application.
Fig. 8 is a second structural schematic diagram of a driver fatigue state detection device according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
Detailed description of the preferred embodiments
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present application.
Referring to fig. 1, fig. 1 is a first application scenario diagram of the method for detecting the fatigue state of a driver according to an embodiment of the present application. The driver fatigue state detection method is applied to a mobile terminal. The mobile terminal is provided with a panoramic perception architecture, which is the integration of the hardware and software used in the mobile terminal to implement the driver fatigue state detection method.
The panoramic perception architecture comprises an information perception layer, a data processing layer, a feature extraction layer, a scene modeling layer and an intelligent service layer.
The information perception layer is used for acquiring information about the mobile terminal itself or about the external environment. It may include a plurality of sensors, for example a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a Hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, and a heart rate sensor.
Among them, the distance sensor may be used to detect the distance between the mobile terminal and an external object. The magnetic field sensor may be used to detect magnetic field information of the environment in which the mobile terminal is located. The light sensor may be used to detect light information of the environment in which the mobile terminal is located. The acceleration sensor may be used to detect acceleration data of the mobile terminal. The fingerprint sensor may be used to collect fingerprint information of a user. The Hall sensor is a magnetic field sensor based on the Hall effect and may be used to implement automatic control of the mobile terminal. The position sensor may be used to detect the current geographic location of the mobile terminal. The gyroscope may be used to detect angular velocities of the mobile terminal in various directions. The inertial sensor may be used to detect motion data of the mobile terminal. The attitude sensor may be used to sense attitude information of the mobile terminal. The barometer may be used to detect the barometric pressure of the environment in which the mobile terminal is located. The heart rate sensor may be used to detect heart rate information of the user.
And the data processing layer is used for processing the data acquired by the information perception layer. For example, the data processing layer may perform data cleaning, data integration, data transformation, data reduction, and the like on the data acquired by the information sensing layer.
The data cleaning refers to cleaning a large amount of data acquired by the information sensing layer to remove invalid data and repeated data. The data integration refers to integrating a plurality of single-dimensional data acquired by the information perception layer into a higher or more abstract dimension so as to comprehensively process the data of the plurality of single dimensions. The data transformation refers to performing data type conversion or format conversion on the data acquired by the information sensing layer so that the transformed data can meet the processing requirement. The data reduction means that the data volume is reduced to the maximum extent on the premise of keeping the original appearance of the data as much as possible.
The characteristic extraction layer is used for extracting characteristics of the data processed by the data processing layer so as to extract the characteristics included in the data. The extracted features may reflect the state of the mobile terminal itself or the state of the user or the environmental state of the environment in which the mobile terminal is located, etc.
The feature extraction layer may extract features, or process the extracted features, by methods such as the filter method, the wrapper method, or the ensemble method.
The filter method filters the extracted features to remove redundant feature data. The wrapper method screens the extracted features. The ensemble method combines multiple feature extraction methods to construct a more efficient and more accurate feature extraction method.
The scene modeling layer is used for building a model according to the features extracted by the feature extraction layer, and the obtained model can be used for representing the state of the mobile terminal, the state of the user, the environment state and the like. For example, the scenario modeling layer may construct a key value model, a pattern identification model, a graph model, an entity relation model, an object-oriented model, and the like according to the features extracted by the feature extraction layer.
The intelligent service layer is used for providing intelligent services for the user according to the model constructed by the scene modeling layer. For example, the intelligent service layer can provide basic application services for users, can perform system intelligent optimization for mobile terminals, and can also provide personalized intelligent services for users.
In addition, the panoramic perception architecture may further comprise a plurality of algorithms, each of which can be used to analyze and process data; together they form an algorithm library. For example, the algorithm library may include algorithms such as Markov models, latent Dirichlet allocation, Bayesian classification, support vector machines, K-means clustering, K-nearest neighbors, conditional random fields, residual networks, long short-term memory networks, convolutional neural networks, and recurrent neural networks.
The present embodiment provides a driver fatigue state detection method, which can be applied to fatigue state detection while a driver is driving a vehicle.
Referring to fig. 2, fig. 2 is a second application scenario diagram of the method for detecting the fatigue state of a driver according to an embodiment of the present application. The method is mainly used for detecting the fatigue state of a driver while driving a vehicle. The mobile terminal can detect the driver's fatigue state and perform a corresponding operation according to it, for example issuing a doze prompt when the driver is detected to be dozing, letting the driver decide whether to adjust the driving mode of the vehicle. On receiving the driving mode adjustment prompt from the mobile terminal, the driver can open an operation interface of the mobile terminal, or the mobile terminal can switch the driving mode of the vehicle according to the driver's facial action, for example switching from a manual driving mode to an automatic driving mode or an assisted driving mode. The embodiment of the application can thus detect the driver's fatigue state during driving and improve driving safety. It should be noted that the vehicle may include, but is not limited to, any type of vehicle such as an automobile, truck, motorcycle, bus, boat, airplane, helicopter, recreational vehicle, amusement park vehicle, tram, train, or cart.
According to an embodiment of the application, assume now that a driver places a mobile terminal in a mobile terminal access device in a car and drives the car. As shown in fig. 3, a first flowchart of the method for detecting the fatigue state of a driver according to the embodiment of the present application, the embodiment may specifically include the following steps:
and 110, calculating the offset of the face contour of the driver in the face images of the two adjacent frames according to the face images of the two adjacent frames of the driver, wherein the face images of the two adjacent frames are associated with two adjacent moments.
In the embodiment of the present application, the mobile terminal may be specifically a mobile phone, a tablet computer, a notebook, a desktop computing device, a wearable device such as a watch, glasses, and the like. The mobile terminal in the embodiment of the application can obtain the face image of the driver by using the data processing layer in the panoramic sensing architecture, wherein the face image of the driver at least comprises two frames of face images, the offset of the face contour of the driver in the two adjacent frames of face images is calculated according to the two adjacent frames of face images of the driver, and the two adjacent frames of face images are associated with two adjacent moments.
It should be noted that, when the obtained face image of the driver is more than two frames, the offset of the face contour of the driver in the face images of every two adjacent frames is calculated, and the face image of every two adjacent frames is associated with every two adjacent moments. For example, after a certain mobile terminal acquires a first face image at a time t, a second face image at a time t +1, and a third face image at a time t +2, a first offset of a face contour of a driver in the first face image and the second face image, and a second offset of the face contour of the driver in the second face image and the third face image may be sequentially calculated.
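As an illustration of the pairing logic just described, the following Python sketch (an illustration only, not part of the patent) computes one offset vector per pair of adjacent frames from a sequence of face contour feature vectors; the extraction of those vectors is detailed in steps 111 to 114 below.

```python
import numpy as np

def pairwise_offsets(contour_vectors):
    """One offset vector per pair of adjacent frames.

    contour_vectors[i] is the face contour feature vector extracted
    from the frame captured at time t + i.
    """
    return [np.asarray(contour_vectors[i + 1], dtype=float)
            - np.asarray(contour_vectors[i], dtype=float)
            for i in range(len(contour_vectors) - 1)]

# Three frames at times t, t+1 and t+2 yield two offsets, matching the
# first-offset/second-offset example above.
offsets = pairwise_offsets([[1.0, 2.0, 3.0], [1.1, 2.0, 2.9], [1.3, 2.1, 2.8]])
print(len(offsets))  # 2
```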
Step 120: determining the change of the driver's face contour according to the offset of the driver's face contour.
After the mobile terminal obtains the offset of the face contour of the driver, the change conditions of the face contour of the driver at two adjacent moments, such as the change conditions of the spatial position of the face contour of the driver, are obtained according to the offset of the face contour of the driver.
For example, after acquiring the first face image and the second face image, the mobile terminal may compare the face contour features of the first face image with those of the second face image to obtain the offset between the driver's face contour in the two images, and from this offset obtain the change of the driver's face contour from time t to time t + 1.
It should be noted that the mobile terminal may also obtain a change condition of the face contour of the driver within the preset time length according to the offset of the face contour of the driver in every two adjacent face images in the face images at multiple times.
Step 130: extracting preset part images from each of the face images of the two adjacent frames to obtain the preset part images of the two adjacent frames.
The mobile terminal can perform feature recognition on the first face image to recognize the features of a preset part, and extracts these preset part features using the feature extraction layer in the panoramic perception architecture to obtain a first preset part image. Accordingly, a second, a third, or an nth preset part image may be obtained using the feature extraction layer. The preset part can be one or more of the eyebrows, eyes and mouth.
Step 140: determining the change of a preset part of the driver's face according to the preset part images of the two adjacent frames.
After the mobile terminal obtains the preset position images of the two adjacent frames, the characteristics of the preset position images of the two adjacent frames are compared to obtain the characteristic change condition of the preset position images of the two adjacent frames, and the change of the preset position in the face of the driver is determined according to the characteristic change condition of the preset position images of the two adjacent frames.
For example, after the mobile terminal acquires the first preset part image and the second preset part image, it may compare the preset part features in the first preset part image with the preset part features in the second preset part image to obtain the feature change between the two, so as to obtain the change of the preset part of the driver's face from time t to time t + 1.
Step 150: determining the fatigue state of the driver according to the change of the driver's face contour and the change of the preset part of the driver's face.
The mobile terminal can determine the fatigue state of the driver after acquiring the change of the face contour of the driver and the change of the preset part in the face of the driver. For example, the mobile terminal may store a trained fatigue state determination model in advance, and output the current fatigue state of the driver after inputting the change of the face contour of the driver and the change of the preset part in the face of the driver into the fatigue state determination model. The fatigue state judgment model can be obtained by training the change condition of the historical face contour of the driver and the change condition of the historical preset part.
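As a rough sketch of how such a trained determination model might be used, the following example substitutes a simple scikit-learn logistic regression for the patent's (unspecified) fatigue state judgment model; the feature encoding and training data are purely illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical training data: each row is
# [face contour change measure, preset part change measure].
X_hist = np.array([[0.02, 0.01], [0.40, 0.30], [0.05, 0.02], [0.35, 0.28]])
y_hist = np.array([0, 1, 0, 1])  # 0 = alert, 1 = fatigued

model = LogisticRegression().fit(X_hist, y_hist)

# Current observation: change of face contour and of the preset part.
current = np.array([[0.38, 0.29]])
print("fatigued" if model.predict(current)[0] == 1 else "alert")
```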
The driver fatigue state detection method in the embodiment of the application can be applied to the panoramic sensing architecture in fig. 1. The embodiment of the application can acquire data through an information perception layer in a panoramic perception framework, such as a face image of a driver through a camera assembly. Data cleaning can be performed on the data acquired by the information sensing layer through the data processing layer, such as data processing can be performed on the acquired face image of the driver to eliminate invalid face images and repeated face images. Feature extraction may be performed by a feature extraction layer, such as the feature extraction layer may perform feature extraction on a face image to extract a preset region image. The face image and the preset part image can be processed through the algorithm in the algorithm library to obtain the change of the face contour of the driver and the change of the preset part in the face of the driver. The multi-frame face images can be processed through the scene modeling layer to obtain the change situation of the state of the driver in the preset time period, such as modeling the change situation of the multi-frame face images in the preset time period to obtain the fatigue state judgment model. The fatigue state judgment model can be utilized by an intelligent service layer to provide intelligent service for a user, for example, the current fatigue state of a driver can be output after the change of the face outline of the driver and the change of a preset part in the face of the driver are input into the fatigue state judgment model, so that the driver is reminded to switch the vehicle driving mode in time, and the driving safety is improved.
Referring to fig. 4, fig. 4 is a second flowchart of the method for detecting the fatigue state of a driver according to an embodiment of the present application. Step 110, calculating the offset of the driver's face contour in the face images of two adjacent frames according to the two adjacent frames of face images of the driver, wherein the face images of the two adjacent frames are associated with two adjacent moments, comprises the following sub-steps:
111, acquiring two adjacent frames of face images of a driver, wherein the two adjacent frames of face images comprise a first face image and a second face image, and the first face image and the second face image are two adjacent frames of face images acquired at two adjacent moments;
112, respectively identifying the first face image and the second face image to obtain a first face contour feature vector and a second face contour feature vector;
113, calculating an offset vector between the first face contour feature vector and the second face contour feature vector;
and 114, obtaining the offset of the face contour of the driver in the face images of the two adjacent frames according to the offset vector.
The mobile terminal may acquire an initial face image of the driver in advance, and recognize the initial face image to obtain m key points, such as recognizing the initial face image to obtain 80 key points of the face of the driver.
When the mobile terminal detects the fatigue state of a driver, a first face image at the time t and a second face image at the time t +1 can be obtained through a data processing layer in the panoramic perception framework, wherein the time t and the time t +1 are two adjacent times arranged according to a time sequence.
The mobile terminal can be preset with a feature recognition model, which performs feature recognition on the acquired face image. For example, the mobile terminal may preset a convolutional neural network model, identify the key points of the driver's face in the first face image with it, and determine the positions of the key points corresponding to the driver's face contour to obtain a plurality of face contour feature values, which form a first face contour vector, such as P(P1, P2, P3). Accordingly, the mobile terminal may derive a second face contour vector S(S1, S2, S3), a third face contour vector Q(Q1, Q2, Q3), or an nth face contour vector N(N1, N2, N3) using the convolutional neural network model. Each feature value in the first face contour vector is compared with the corresponding feature value in the second face contour vector to obtain an offset vector between the two vectors, and the offset of the driver's face contour between the first and second face images is obtained from this offset vector.
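The following sketch illustrates sub-steps 113 and 114 under the assumption that the contour feature vectors have already been produced by the feature recognition model; the per-feature comparison and the scalar offset measure are one plausible reading of the text, not a definitive implementation.

```python
import numpy as np

def contour_offset(p, s):
    """Offset vector and scalar offset between two contour vectors.

    p, s: face contour feature vectors of the first and second face
    images, e.g. P(P1, P2, P3) and S(S1, S2, S3) above.
    """
    p = np.asarray(p, dtype=float)
    s = np.asarray(s, dtype=float)
    offset_vector = s - p                    # per-feature comparison
    offset = np.linalg.norm(offset_vector)   # overall contour offset
    return offset_vector, offset

vec, magnitude = contour_offset([10.0, 12.5, 9.8], [10.6, 12.1, 9.9])
print(vec, magnitude)
```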
In some embodiments, with continued reference to FIG. 4, wherein determining 120 the change in the driver face contour based on the offset of the driver face contour comprises the sub-steps of:
121, processing the offset of the driver's face contour in the face images of the two adjacent frames to obtain the deflection angle of the driver's face contour between the first face contour feature vector and the second face contour feature vector;
and 122, determining the change of the face contour of the driver according to the deflection angle of the face contour of the driver.
After the mobile terminal obtains the first face contour feature vector and the second face contour feature vector, the deflection angles of the first face contour feature vector and the second face contour feature vector can be calculated, so that the change conditions of the face contour of the driver at the time t and the time t +1 can be determined according to the deflection angles.
For example, after obtaining the first face contour vector P(P1, P2, P3) and the second face contour vector S(S1, S2, S3), the mobile terminal calculates the offset vector D(D1, D2, D3) between them, processes D with a quaternion attitude and heading reference system (AHRS) attitude calculation algorithm to obtain the deflection angle θ between the first and second face contour vectors, and derives from θ the change of the driver's face contour from time t to time t + 1.
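The quaternion AHRS attitude calculation algorithm itself is not reproduced in the text; as a simplified stand-in, the sketch below computes a deflection angle directly as the angle between the two contour vectors.

```python
import numpy as np

def deflection_angle(p, s):
    """Angle (in degrees) between two face contour vectors."""
    p = np.asarray(p, dtype=float)
    s = np.asarray(s, dtype=float)
    cos_theta = np.dot(p, s) / (np.linalg.norm(p) * np.linalg.norm(s))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

theta = deflection_angle([10.0, 12.5, 9.8], [10.6, 12.1, 9.9])
print(f"deflection angle: {theta:.2f} degrees")
```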
When the mobile terminal obtains a plurality of face contour vectors of a preset time length, the deflection angle of the face contour of the driver at any moment in the preset time length can be calculated according to the following formula, so that the deflection angles of the face contours of the drivers are obtained.
θ_t = γ·θ_{t-1} + γ^2·θ_{t-2} + … + γ^n·θ_{t-n}
where θ_t represents the deflection angle of the driver's face contour at time t, θ_{t-n} represents the deflection angle of the driver's face contour at time t-n, and the parameter γ is a deflection angle attenuation coefficient.
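The attenuation formula transcribes directly into code; the choice of γ and of the window length n is left open by the text.

```python
def attenuated_angle(prev_angles, gamma):
    """theta_t = gamma*theta_{t-1} + gamma^2*theta_{t-2} + ...

    prev_angles[0] is theta_{t-1}, prev_angles[1] is theta_{t-2}, etc.
    """
    return sum(gamma ** (k + 1) * a for k, a in enumerate(prev_angles))

# 0.5*5.0 + 0.25*3.0 + 0.125*1.0 = 3.375
print(attenuated_angle([5.0, 3.0, 1.0], gamma=0.5))
```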
In the embodiment of the application, the multiple deflection angles of the driver can be input into a trained classification network model such as a Bayesian network model; after the Bayesian network model processes the multiple deflection angles, it outputs the driver's face contour label, such as head lowering, head raising, left turning or right turning.
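As a hedged sketch of this labelling step, the example below uses scikit-learn's Gaussian naive Bayes classifier as a stand-in for the patent's Bayesian network model; the training windows and the mapping from angle patterns to labels are illustrative assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

LABELS = ["head lowering", "head raising", "left turning", "right turning"]

# Hypothetical training windows of recent deflection angles, one per label.
X_train = np.array([[12.0, 14.0, 15.0], [-11.0, -13.0, -15.0],
                    [8.0, 20.0, 25.0], [-9.0, -18.0, -26.0]])
y_train = np.array([0, 1, 2, 3])

clf = GaussianNB().fit(X_train, y_train)
window = np.array([[11.5, 13.8, 16.0]])  # window of recent deflection angles
print(LABELS[int(clf.predict(window)[0])])
```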
In some embodiments, referring to fig. 5, fig. 5 is a third flowchart illustrating a method for detecting a fatigue state of a driver according to an embodiment of the present disclosure. Step 140, determining the change of the preset part of the driver's face according to the preset part images of the two adjacent frames, comprises the following sub-steps:
141, respectively identifying the preset position images of the two adjacent frames to obtain two preset position feature vectors;
142, calculating an offset coordinate quantity between the two preset part feature vectors;
143, determining a change of a preset part in the face of the driver according to an offset coordinate quantity between the two preset part feature vectors.
The mobile terminal can be preset with a feature recognition model, which performs feature recognition on the acquired face image. For example, the mobile terminal may preset a convolutional neural network model, identify the key points of the driver's face in the first face image with it, and determine the positions of the key points corresponding to the preset part of the face to obtain a plurality of preset part feature values, which form a first preset part vector. Accordingly, the mobile terminal may obtain a second, a third, or an nth preset part vector using the convolutional neural network model. Each feature value in the first preset part vector is compared with the corresponding feature value in the second preset part vector to obtain the offset coordinate quantity between the two vectors, and the change of the driver's preset part from time t to time t + 1 is determined from this offset coordinate quantity.
For example, when the preset part is an eye, the convolutional neural network model identifies the key points corresponding to the eye in the first face image to obtain a plurality of eye feature values, from which a first eye vector is obtained; the second face image is identified in the same way to obtain a second eye vector. The offset coordinate quantity between the first and second eye vectors is then calculated to obtain the change in the relative distance between the upper and lower eyelids, and the change of the driver's eyes from time t to time t + 1 is determined from this change.
It should be noted that the relative distance between the upper and lower eyelids is only one example of the change between the first eye vector and the second eye vector and does not constitute a specific limitation on it; the change may also be the change between the relative distance of the first eye vector to a preset reference position and the relative distance of the second eye vector to the same preset reference position, where the preset reference position may be the eye center, the face center, the forehead center, and the like.
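A minimal sketch of the eye example, assuming each eye vector reduces to the vertical coordinates of one upper-eyelid and one lower-eyelid landmark (the patent's eye vectors come from a convolutional neural network and are richer than this):

```python
def eyelid_distance(eye_vec):
    """eye_vec: (upper_y, lower_y) vertical eyelid landmark coordinates."""
    upper_y, lower_y = eye_vec
    return abs(upper_y - lower_y)

first_eye = (120.0, 132.0)   # from the frame at time t
second_eye = (124.0, 128.0)  # from the frame at time t+1

change = eyelid_distance(second_eye) - eyelid_distance(first_eye)
print("eyes closing" if change < 0 else "eyes opening", change)
```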
When there are multiple eye vectors, the mobile terminal can input them into a trained classification network model such as a Bayesian network model; the model derives the eye change state within a preset time from the changes between every two adjacent eye vectors, and outputs the driver's eye label, such as eyes open, eyes closed, or squinting, according to that state.
After the mobile terminal obtains the driver's face contour label and a preset part label, such as the eye label, over a preset duration, it comprehensively judges the driver's fatigue state. For example, when the acquired face contour label is head lowering and the eye label is squinting, the driver is judged to be dozing, and the mobile terminal immediately issues a doze prompt, such as sounding an alarm bell, to alert the driver. Meanwhile, the mobile terminal may also send a driving mode adjustment prompt notification asking the driver to confirm whether to adjust the current driving mode of the automobile, such as switching the manual driving mode to the automatic driving mode. When the driver needs to adjust the driving mode, the driver can manually open the operation interface of the mobile terminal and trigger a driving mode switching instruction; the mobile terminal responds to the instruction and adjusts the driving mode of the automobile to the automatic driving mode.
Manually operating the mobile terminal while driving may, however, interfere with the driver's driving. The mobile terminal can therefore also switch the driving mode according to the driver's facial action: for example, the driver can rotate the face left and right, the mobile terminal confirms whether to switch the driving mode according to this rotation, and rotating the face left and right twice can trigger the mobile terminal to switch the automobile from the manual driving mode to the automatic driving mode. It should be noted that the driver may also interact with the mobile terminal through other actions that do not affect driving safety, such as blinking twice.
In some embodiments, the preset part may also be a pupil. When the preset part is a pupil, the region around the pupil in each face image is detected by an edge-detection threshold method to obtain a pupil region, and the pupil region is identified with the convolutional neural network model to extract the pupil feature values in each face image. The pupil feature values in each face image are input into a multi-classifier such as an Extreme Gradient Boosting (XGBoost) multi-classifier, which processes them to obtain pupil-iris labels, such as long-term stability, rapid enlargement of the pupil iris, slow enlargement of the pupil iris, rapid reduction of the pupil iris, and slow reduction of the pupil iris.
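A hedged sketch of the pupil labelling step using the xgboost package's XGBClassifier as the multi-classifier; the eight-dimensional pupil features and the training data are random placeholders, not the patent's actual feature design.

```python
import numpy as np
from xgboost import XGBClassifier

PUPIL_LABELS = ["long-term stable", "rapid enlargement", "slow enlargement",
                "rapid reduction", "slow reduction"]

X_train = np.random.rand(50, 8)          # pupil feature values per frame
y_train = np.random.randint(0, 5, 50)    # one of the five labels per frame

clf = XGBClassifier(n_estimators=20).fit(X_train, y_train)

frame_features = np.random.rand(1, 8)    # pupil features of a new frame
print(PUPIL_LABELS[int(clf.predict(frame_features)[0])])
```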
For example, a sudden enlargement of the pupil iris may indicate that the driver has encountered an emergency; the mobile terminal may then send a notification to the vehicle-mounted assisted driving system or automatic driving system to perform an emergency temporary takeover, so as to keep the vehicle driving safely.
In some embodiments, the preset part may also include both the eye and the pupil. When the mobile terminal finds that the eye label is squinting and the pupil-iris label is long-term stable without change, the driver may be in a fatigue state, and the mobile terminal can sound an alarm bell to remind the driver to rest or to be replaced.
Referring to fig. 6, fig. 6 is a fourth flowchart illustrating a method for detecting a fatigue state of a driver according to an embodiment of the present disclosure. Step 111, acquiring two adjacent frames of face images of the driver, where the two adjacent frames comprise a first face image and a second face image acquired at two adjacent moments, can include the following steps:
101, acquiring images of two adjacent frames of a driver, wherein the images of the two adjacent frames comprise a first image and a second image, and the first image and the second image are two images acquired at two adjacent moments;
102, processing the first image to determine at least one face region of the first image;
103, recognizing the at least one face area to determine a driver face area of the first image;
104, acquiring a first face image from the first image according to the face area of the driver of the first image;
and 105, acquiring a second face image according to the first face image to obtain the face images of two adjacent frames of the driver.
The mobile terminal is fixed in front of the windshield at the front end of an automobile. When the driver sits in the driver's seat and starts the automobile, the mobile terminal automatically opens an image acquisition device, such as the front camera of a mobile phone, and acquires multiple frames of images in real time during driving. The multiple frames include a first image acquired at time t and a second image acquired at time t + 1, where time t and time t + 1 are two adjacent moments arranged in chronological order.
The mobile terminal in the embodiment of the application can identify the first image using a clustering regression algorithm to identify at least one face region, where a face region refers to a rectangular window circumscribing a face contour. Since the mobile terminal may capture other people inside the automobile while acquiring the driver's image, the face regions identified by the clustering regression algorithm are not necessarily only the driver's face; the identified face regions therefore need to be further recognized to determine the driver's face region. The mobile terminal can recognize the at least one face region with a deep learning algorithm, using the feature values of the driver's initial face image, to determine the driver's face region in the first image and then acquire the driver's first face image from the first image.
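The following sketch illustrates one way the driver's face might be singled out among several detected face regions, assuming each region has been reduced to a feature vector by the deep learning model; the embeddings and the nearest-distance criterion are assumptions for illustration.

```python
import numpy as np

def pick_driver_region(regions, embeddings, driver_embedding):
    """Return the candidate face region closest to the enrolled driver.

    regions: candidate face rectangles (x, y, w, h);
    embeddings: one feature vector per candidate region;
    driver_embedding: features from the driver's initial face image.
    """
    dists = [np.linalg.norm(np.asarray(e) - np.asarray(driver_embedding))
             for e in embeddings]
    return regions[int(np.argmin(dists))]

regions = [(40, 60, 120, 160), (300, 80, 110, 150)]
embeddings = [[0.9, 0.1, 0.4], [0.2, 0.8, 0.7]]
driver_embedding = [0.85, 0.15, 0.38]
print(pick_driver_region(regions, embeddings, driver_embedding))
```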
In some embodiments, step 105, acquiring a second face image according to the first face image to obtain face images of two adjacent frames of the driver, may include the following steps:
processing the first face image to obtain a driver face prediction area in a second image;
identifying the driver face prediction region to determine a driver face region in the second image;
and acquiring a second face image according to the face area of the driver in the second image.
Because the position of the driver's face does not change greatly between two adjacent moments, the mobile terminal can predict the driver's face region in the image acquired at the next moment from the correlation of the face positions between two adjacent frames and from the driver's face region in the image acquired at the previous moment, thereby tracking the driver's face.
For example, the mobile terminal processes the driver's face region in the first image to obtain a combined region, expanded around the rectangular window circumscribing the driver's face contour. When the driver's face region in the first image is represented by the vector A[x_1, y_1, w_1, h_1], the circumscribed rectangular window of the face region is coordinate-transformed with the following formulas to obtain the combined region B[x_2, y_2, w_2, h_2]:
x_2 = x_1 · k · 100% · step_k
y_2 = y_1 · k · 100% · step_k
w_2 = w_1 · j · 100% · step_w
h_2 = h_1 · i · 100% · step_h
where:
k ∈ (0, 10), step_k = [-4, -2, 1, 2, 4]
j ∈ (0, 10), step_w = [-2, 1, 2]
i ∈ (0, 10), step_h = [-2, 1, 2]
In A(x_1, y_1, w_1, h_1), x_1 and y_1 represent the first corner coordinates of the rectangular window circumscribing the driver's face contour in the first image, and w_1 and h_1 represent the second corner coordinates; the first corner and the second corner are two points on a diagonal of the rectangular window. k, j and i denote transform coefficients, and step_k, step_w and step_h denote coordinate translation amounts.
From the driver's face region A[x_1, y_1, w_1, h_1] in the first image, the predicted driver face region B[x_3, y_3, w_3, h_3] in the second image is obtained. From the combined region B[x_2, y_2, w_2, h_2] in the first image, the predicted combined region C[x_4, y_4, w_4, h_4] in the second image is obtained. Due to the correlation between two adjacent frames of images:
x_1 = x_2, y_1 = y_2, w_1 = w_2, h_1 = h_2
x_3 = x_4, y_3 = y_4, w_3 = w_4, h_3 = h_4
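The coordinate transformation transcribes directly into code; since the text leaves the choice of k, j, i and of the entries of step_k, step_w and step_h open, the values below are one illustrative choice from the stated ranges (the factor 100% is simply 1).

```python
def combined_region(a, k, j, i, step_k, step_w, step_h):
    """Map region A[x1, y1, w1, h1] to the combined region B[x2, y2, w2, h2]."""
    x1, y1, w1, h1 = a
    return (x1 * k * step_k,   # x2 = x1 * k * 100% * step_k
            y1 * k * step_k,   # y2 = y1 * k * 100% * step_k
            w1 * j * step_w,   # w2 = w1 * j * 100% * step_w
            h1 * i * step_h)   # h2 = h1 * i * 100% * step_h

A = (120.0, 80.0, 60.0, 90.0)
print(combined_region(A, k=1, j=1, i=1, step_k=2, step_w=2, step_h=2))
```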
compressing the prediction combination area in the second image by using a Principal Component Analysis (PCA) algorithm to obtain a compressed prediction combination area, inputting the prediction combination area in the second image and the compressed prediction combination area as high-dimensional features into a Support Vector Machine (SVM) classifier, processing the high-dimensional features by using a Support Vector Machine (SVM) classifier, and judging whether the prediction combination area in the second image is the face of the driver.
When it is judged to be the driver's face, a neural network model such as a convolutional neural network model is used to recognize the predicted driver face region in the second image, so as to determine the driver's face region in the second image and thereby obtain the second face image.
When more than two frames are acquired, each frame in the sequence of consecutive images is processed with the support vector machine (SVM) classifier combined with the neural network algorithm, using the positional correlation of the driver's face region between consecutive frames, so as to track the driver's face across the sequence. The mobile terminal can also apply a smoothing filter algorithm to the face regions of every two adjacent frames, making the driver face tracking smoother.
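The patent does not name a specific smoothing filter; an exponential moving average over the tracked rectangles, as sketched below, is one common choice.

```python
def smooth_regions(regions, alpha=0.6):
    """Smooth a sequence of (x, y, w, h) face regions across frames."""
    smoothed = [regions[0]]
    for r in regions[1:]:
        prev = smoothed[-1]
        smoothed.append(tuple(alpha * c + (1 - alpha) * p
                              for c, p in zip(r, prev)))
    return smoothed

track = [(100.0, 80.0, 60.0, 60.0), (104.0, 82.0, 60.0, 60.0),
         (99.0, 79.0, 61.0, 60.0)]
print(smooth_regions(track))
```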
In summary, in the embodiments of the application, the change of the driver's face contour is determined by calculating the offset of the face contour between two adjacent frames of face images, the change of a preset part of the driver's face is determined from the preset part images of the two adjacent frames, and the fatigue state of the driver is then determined jointly from the change of the face contour and the change of the preset part. The detection means are thus enriched and the accuracy of judging the driver's fatigue state is improved.
Referring to fig. 7, fig. 7 is a first structural schematic diagram of a driver fatigue state detection apparatus according to an embodiment of the present application. The driver fatigue state detection device can be applied to a mobile terminal. The driver fatigue state detection apparatus 200 may include: the device comprises a calculation module 201, a first determination module 202, an image extraction module 203, a second determination module 204 and a fatigue judgment module 205.
The calculation module 201 is configured to calculate an offset of a face contour of a driver in two adjacent frames of face images according to the two adjacent frames of face images of the driver, where the two adjacent frames of face images are associated with two adjacent moments;
a first determining module 202, configured to determine a change of the driver face contour according to an offset of the driver face contour;
the image extraction module 203 is configured to extract preset position images from the face images of the two adjacent frames respectively to obtain the preset position images of the two adjacent frames;
a second determining module 204, configured to determine, according to the preset position images of the two adjacent frames, a change of a preset position in the face of the driver;
the fatigue judging module 205 is configured to determine a fatigue state of the driver according to a change of the face contour of the driver and a change of a preset portion in the face of the driver.
Referring to fig. 8, fig. 8 is a schematic view of a second structure of a driver fatigue state detection apparatus according to an embodiment of the present application, where the driver fatigue state detection apparatus 200 may further include: an image acquisition module 206, a face detection module 207, a face recognition module 208, and a target tracking module 209.
The image acquisition module 206 is configured to acquire a first image at a first time and a second image at a second time, where the first time and the second time are two adjacent times arranged in chronological order;
a face detection module 207 for processing the first image to determine at least one face region of the first image;
a face recognition module 208, configured to recognize the at least one face region to determine a driver face region of the first image, and obtain a first face image according to the driver face region of the first image;
and the target tracking module 209 is configured to obtain a second face image according to the first face image.
In one embodiment, the calculation module 201 may be configured to:
the method comprises the steps of obtaining face images of two adjacent frames of a driver, wherein the face images of the two adjacent frames comprise a first face image and a second face image, and the first face image and the second face image are two adjacent frames of face images obtained at two adjacent moments;
respectively identifying the first face image and the second face image to obtain a first face contour feature vector and a second face contour feature vector;
calculating an offset vector between the first face contour feature vector and the second face contour feature vector;
and obtaining the offset of the face contour of the driver in the face images of two adjacent frames according to the offset vector.
In one embodiment, the first determining module 202 may be configured to:
processing the offset of the face contour of the driver in the face images of the two adjacent frames to obtain the deflection angle of the face contour of the driver of the first face contour feature vector and the second face contour feature vector;
and determining the change of the face contour of the driver according to the deflection angle of the face contour of the driver.
In one embodiment, the second determining module 204 may be configured to:
respectively identifying the preset part images of the two adjacent frames to obtain two preset part characteristic vectors;
calculating an offset coordinate quantity between the two preset part feature vectors;
and determining the change of the preset part in the face of the driver according to the offset coordinate quantity between the two preset part feature vectors.
In some embodiments, the target tracking module 209 may be configured to:
processing the first face image to obtain a driver face prediction area in a second image;
identifying the driver face prediction region to determine a driver face region in the second image;
and acquiring a second face image according to the face area of the driver in the second image.
The embodiment of the application further provides a mobile terminal, which comprises a memory and a processor, wherein the processor executes the steps of the driver fatigue state detection method provided by the above embodiments by calling the computer program stored in the memory.
For example, the mobile terminal may be a tablet computer or a smart phone. Referring to fig. 9, fig. 9 is a schematic structural diagram of a mobile terminal according to an embodiment of the present application.
The mobile terminal of the present embodiment may be a mobile terminal 300 such as a smart phone. The mobile terminal 300 may include a processor 301, a memory 302, and a camera assembly 303, wherein the processor 301 is electrically connected to the memory 302 and the camera assembly 303, respectively. The processor 301 is a control center of the mobile terminal 300, connects various parts of the entire mobile terminal 300 using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or calling a computer program stored in the memory 302 and calling data stored in the memory 302. The camera assembly 303 is used to capture images of the driver. It should be noted that the mobile terminal 300 may further include other components such as a battery, a main board, and a sensor.
The mobile terminal 300 is placed in a mobile terminal access device in an automobile; the access device can fix the mobile terminal 300 in front of the front windshield of the automobile and communicates with the central control system of the automobile through a built-in vehicle control communication protocol. The vehicle control communication protocol is provided by the automobile manufacturer, and the mobile terminal access device serves as an intermediate bridge between the mobile terminal and the vehicle control system. The mobile terminal 300 can acquire images of the driver over a preset duration during driving of the vehicle through the camera assembly 303.
In this embodiment, the processor 301 in the mobile terminal 300 loads the executable code corresponding to the process of one or more application programs into the memory 302 according to the following instructions, and the processor 301 runs the application programs stored in the memory 302, thereby implementing the steps:
calculating the offset of the face contour of the driver in the face images of two adjacent frames according to the face images of the two adjacent frames of the driver, wherein the face images of the two adjacent frames are associated with two adjacent moments;
determining the change of the face contour of the driver according to the offset of the face contour of the driver;
respectively extracting preset position images from the face images of the two adjacent frames to obtain the preset position images of the two adjacent frames;
determining the change of a preset part in the face of the driver according to the preset part images of the two adjacent frames;
and determining the fatigue state of the driver according to the change of the face contour of the driver and the change of a preset part in the face of the driver.
In one embodiment, the processor 301 performs the step of calculating an offset of the contour of the face of the driver in the facial images of two adjacent frames according to the facial images of the two adjacent frames of the driver, where the facial images of the two adjacent frames are associated with two adjacent time instants, and may perform: the method comprises the steps of obtaining face images of two adjacent frames of a driver, wherein the face images of the two adjacent frames comprise a first face image and a second face image, and the first face image and the second face image are two adjacent frames of face images obtained at two adjacent moments; respectively identifying the first face image and the second face image to obtain a first face contour feature vector and a second face contour feature vector; calculating an offset vector between the first face contour feature vector and the second face contour feature vector; and obtaining the offset of the face contour of the driver in the face images of two adjacent frames according to the offset vector.
In one embodiment, when the processor 301 performs the step of determining the change of the face contour of the driver according to the offset, it may perform: processing the offset of the face contour of the driver in the face images of the two adjacent frames to obtain the deflection angle of the face contour of the driver of the first face contour feature vector and the second face contour feature vector; and determining the change of the face contour of the driver according to the deflection angle of the face contour of the driver.
In one embodiment, when the processor 301 performs the step of determining the change of the preset portion in the face of the driver according to the preset portion images of the two adjacent frames, the following steps may be performed: respectively identifying the preset part images of the two adjacent frames to obtain two preset part characteristic vectors; calculating an offset coordinate quantity between the two preset part feature vectors; and determining the change of the preset part in the face of the driver according to the offset coordinate quantity between the two preset part feature vectors.
In one embodiment, when the processor 301 performs the step of acquiring the face images of two adjacent frames of the driver, where the face images of the two adjacent frames comprise a first face image and a second face image acquired at two adjacent moments, it may perform: acquiring images of two adjacent frames of the driver, wherein the images of the two adjacent frames comprise a first image and a second image acquired at two adjacent moments; processing the first image to determine at least one face region in the first image; identifying the at least one face region to determine the driver face region of the first image; acquiring the first face image from the first image according to the driver face region of the first image; and acquiring the second face image according to the first face image, thereby obtaining the face images of the two adjacent frames of the driver.
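The sketch below illustrates the first-frame half of this step using OpenCV's bundled Haar cascade as a stand-in for the unspecified detector; picking the largest candidate box as the driver's face is an assumption, since the embodiment only says the candidate regions are identified.

```python
import cv2

def first_face_image(frame_bgr):
    """Detect candidate face regions in the first frame and crop one
    as the driver face image (largest-box rule is an assumption)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # At least one face region is determined here.
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None, None
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])
    # Crop the first face image from the first image.
    return frame_bgr[y:y + h, x:x + w], (int(x), int(y), int(w), int(h))
```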
In one embodiment, when the processor 301 performs the step of acquiring the second face image according to the first face image, it may perform: processing the first face image to obtain a driver face prediction region in the second image; identifying the driver face prediction region to determine the driver face region in the second image; and acquiring the second face image according to the driver face region in the second image.
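One simple realization of such a prediction region, continuing the sketch above, is to expand the first-frame face box by a margin and re-detect only inside it; the margin value and the reuse of the Haar cascade are illustrative assumptions.

```python
import cv2

def second_face_image(frame2_bgr, prev_box, margin=0.25):
    """Track the face into the second frame via a prediction region."""
    x, y, w, h = prev_box
    # Expand the first-frame face box to form the prediction region.
    dx, dy = int(w * margin), int(h * margin)
    x0, y0 = max(x - dx, 0), max(y - dy, 0)
    x1 = min(x + w + dx, frame2_bgr.shape[1])
    y1 = min(y + h + dy, frame2_bgr.shape[0])
    roi = frame2_bgr[y0:y1, x0:x1]
    # Re-detect only inside the prediction region to confirm the face.
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None
    bx, by, bw, bh = max(boxes, key=lambda b: b[2] * b[3])
    # Crop the second face image from the confirmed driver face region.
    return roi[by:by + bh, bx:bx + bw]
```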
In the above embodiments, each description has its own emphasis; for any part not detailed in a given embodiment, reference may be made to the detailed description of the driver fatigue state detection method above, which is not repeated here.
The driver fatigue state detection device provided by the embodiments of the present application belongs to the same concept as the driver fatigue state detection method described above. Any method provided in the method embodiments can run on the device; its specific implementation is described in detail in the method embodiments and is not repeated here.
It should be noted that, as those skilled in the art will understand, all or part of the process of the driver fatigue state detection method described in the embodiments of the present application can be implemented by a computer program controlling the relevant hardware. The computer program can be stored in a computer-readable storage medium, such as a memory, and executed by at least one processor; its execution may include the processes of the method embodiments described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
In the driver fatigue state detection device of the embodiments of the present application, the functional modules may be integrated into one processing chip, each module may exist physically alone, or two or more modules may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. If implemented as a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The driver fatigue state detection method, device, storage medium, and mobile terminal provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the invention, and the descriptions of the embodiments are intended only to aid understanding of the method and its core idea. Those skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A driver fatigue state detection method, characterized by comprising:
calculating an offset of the driver's face contour in the face images of two adjacent frames according to the face images of the two adjacent frames of the driver, wherein the face images of the two adjacent frames are associated with two adjacent moments;
determining the change of the driver's face contour according to the offset of the driver's face contour;
extracting preset part images from the face images of the two adjacent frames respectively, to obtain the preset part images of the two adjacent frames;
determining the change of a preset part of the driver's face according to the preset part images of the two adjacent frames; and
determining the fatigue state of the driver according to the change of the driver's face contour and the change of the preset part of the driver's face.
2. The driver fatigue state detection method according to claim 1, wherein the step of calculating the offset of the driver's face contour in the face images of two adjacent frames according to the face images of the two adjacent frames of the driver, the face images of the two adjacent frames being associated with two adjacent moments, comprises:
acquiring face images of two adjacent frames of the driver, wherein the face images of the two adjacent frames comprise a first face image and a second face image acquired at two adjacent moments;
identifying the first face image and the second face image respectively to obtain a first face contour feature vector and a second face contour feature vector;
calculating an offset vector between the first face contour feature vector and the second face contour feature vector; and
obtaining the offset of the driver's face contour in the face images of the two adjacent frames according to the offset vector.
3. The driver fatigue state detection method according to claim 2, wherein the step of determining the change of the driver's face contour according to the offset comprises:
processing the offset of the driver's face contour in the face images of the two adjacent frames to obtain the deflection angle of the driver's face contour between the first face contour feature vector and the second face contour feature vector; and
determining the change of the driver's face contour according to the deflection angle of the driver's face contour.
4. The driver fatigue state detection method according to claim 1, wherein the step of determining the change of the preset part of the driver's face according to the preset part images of the two adjacent frames comprises:
identifying the preset part images of the two adjacent frames respectively to obtain two preset part feature vectors;
calculating the offset coordinates between the two preset part feature vectors; and
determining the change of the preset part of the driver's face according to the offset coordinates between the two preset part feature vectors.
5. The driver fatigue state detection method according to claim 2, wherein the step of acquiring the face images of two adjacent frames of the driver, the face images of the two adjacent frames comprising a first face image and a second face image acquired at two adjacent moments, comprises:
acquiring images of two adjacent frames of the driver, wherein the images of the two adjacent frames comprise a first image and a second image acquired at two adjacent moments;
processing the first image to determine at least one face region in the first image;
identifying the at least one face region to determine the driver face region of the first image;
acquiring the first face image from the first image according to the driver face region of the first image; and
acquiring the second face image according to the first face image, thereby obtaining the face images of the two adjacent frames of the driver.
6. The driver fatigue state detection method according to claim 5, wherein the step of acquiring the second face image according to the first face image comprises:
processing the first face image to obtain a driver face prediction region in the second image;
identifying the driver face prediction region to determine the driver face region in the second image; and
acquiring the second face image according to the driver face region in the second image.
7. A driver fatigue state detection device, characterized by comprising:
a calculation module, configured to calculate the offset of the driver's face contour in the face images of two adjacent frames according to the face images of the two adjacent frames of the driver, wherein the face images of the two adjacent frames are associated with two adjacent moments;
a first determining module, configured to determine the change of the driver's face contour according to the offset of the driver's face contour;
an image extraction module, configured to extract preset part images from the face images of the two adjacent frames respectively, to obtain the preset part images of the two adjacent frames;
a second determining module, configured to determine the change of a preset part of the driver's face according to the preset part images of the two adjacent frames; and
a fatigue judging module, configured to determine the fatigue state of the driver according to the change of the driver's face contour and the change of the preset part of the driver's face.
8. The driver fatigue state detection device according to claim 7, further comprising:
an image acquisition module, configured to acquire a first image at a first moment and a second image at a second moment, wherein the first moment and the second moment are two adjacent moments in chronological order;
a face detection module, configured to process the first image to determine at least one face region in the first image;
a face recognition module, configured to identify the at least one face region to determine the driver face region of the first image and to acquire the first face image according to the driver face region of the first image; and
a target tracking module, configured to acquire the second face image according to the first face image.
9. A storage medium, characterized in that the storage medium has stored therein a computer program which, when run on a computer, implements the driver fatigue state detection method according to any one of claims 1 to 6.
10. A mobile terminal, characterized by comprising a processor and a memory, the memory having stored therein a computer program, wherein the processor implements the driver fatigue state detection method according to any one of claims 1 to 6 by calling the computer program stored in the memory.
CN201910282019.2A 2019-04-09 2019-04-09 Driver fatigue state detection method and device, storage medium and mobile terminal Pending CN111797654A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910282019.2A CN111797654A (en) 2019-04-09 2019-04-09 Driver fatigue state detection method and device, storage medium and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910282019.2A CN111797654A (en) 2019-04-09 2019-04-09 Driver fatigue state detection method and device, storage medium and mobile terminal

Publications (1)

Publication Number Publication Date
CN111797654A 2020-10-20

Family

ID=72805700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910282019.2A Pending CN111797654A (en) 2019-04-09 2019-04-09 Driver fatigue state detection method and device, storage medium and mobile terminal

Country Status (1)

Country Link
CN (1) CN111797654A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104013414A (en) * 2014-04-30 2014-09-03 南京车锐信息科技有限公司 Driver fatigue detecting system based on smart mobile phone
CN105096528A (en) * 2015-08-05 2015-11-25 广州云从信息科技有限公司 Fatigue driving detection method and system
CN106709420A (en) * 2016-11-21 2017-05-24 厦门瑞为信息技术有限公司 Method for monitoring driving behaviors of driver of commercial vehicle
CN108830240A (en) * 2018-06-22 2018-11-16 广州通达汽车电气股份有限公司 Fatigue driving state detection method, device, computer equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688675A (en) * 2021-07-19 2021-11-23 北京鹰瞳科技发展股份有限公司 Target detection method and device, electronic equipment and storage medium
CN116645658A (en) * 2023-04-28 2023-08-25 成都赛力斯科技有限公司 Method, system, computer device and storage medium for monitoring driver's movement amplitude
CN116645658B (en) * 2023-04-28 2024-06-21 重庆赛力斯凤凰智创科技有限公司 Method, system, computer device and storage medium for monitoring driver's movement amplitude


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination