WO2020237664A1 - Driving reminder method, driving state detection method, and computing device

Driving reminder method, driving state detection method, and computing device

Info

Publication number
WO2020237664A1
WO2020237664A1 (application PCT/CN2019/089639; priority CN2019089639W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
driving
facial feature
detected object
image information
Prior art date
Application number
PCT/CN2019/089639
Other languages
English (en)
Chinese (zh)
Inventor
郑睿姣
叶凌峡
Original Assignee
驭势(上海)汽车科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 驭势(上海)汽车科技有限公司
Priority to CN201980000877.1A (CN110582437A)
Priority to PCT/CN2019/089639 (WO2020237664A1)
Publication of WO2020237664A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08: Interaction between the driver and the control system
    • B60W50/14: Means for informing the driver, warning the driver or prompting a driver intervention
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08: Interaction between the driver and the control system
    • B60W50/14: Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W50/16: Tactile feedback to the driver, e.g. vibration or force feedback to the driver on the steering wheel or the accelerator pedal
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001: Details of the control system
    • B60W2050/0019: Control system elements or transfer functions
    • B60W2050/0028: Mathematical models, e.g. for simulation
    • B60W2050/0029: Mathematical model of the driver

Definitions

  • This application relates to the field of automatic driving technology, and in particular to a driving reminder method, a driving state detection method, and a computing device.
  • Autonomous vehicles (self-piloting automobiles), also known as unmanned vehicles or computer-driven vehicles, are intelligent vehicles that realize unmanned driving through a computer-controlled automatic driving system.
  • Autonomous driving can be divided into several levels:
  • L0 (no automation): the driver has complete control of the vehicle.
  • L1 (driver assistance): the automatic system can sometimes assist the driver in completing certain driving tasks.
  • L2 (assisted driving): the automatic system can complete certain driving tasks, but the driver needs to monitor the driving environment, complete the remaining tasks, and be ready to take over at any time when problems occur. At this level, wrong perception or judgment by the automatic system can be corrected by the driver at any time, and most car companies can provide such a system. L2 can be divided into different usage scenarios based on speed and environment, such as low-speed traffic jams on a ring road, fast driving on highways, and automatic parking with the driver in the car.
  • L3 (semi-autonomous driving): the automatic system can not only complete certain driving tasks but also monitor the driving environment under certain conditions; however, the driver must be ready to regain driving control when the automatic system requests it. At this level, the driver therefore still cannot sleep or take a deep rest.
  • The difference between L3 and L2 is that at L3 the vehicle is responsible for monitoring the surroundings, while the human driver only needs to maintain attention for emergencies.
  • L4 (highly automated driving): the automatic system can complete driving tasks and monitor the driving environment in certain environments and under specific conditions. Current L4 deployments are mostly urban, such as fully automated valet parking, and can also be combined directly with taxi services. At this stage, within the operating scope of autonomous driving, all driving-related tasks are independent of the driver and passengers; responsibility for perceiving the environment lies with the autonomous driving system, and different design and deployment approaches exist here.
  • L5 (fully automated driving): the automatic system can complete all driving tasks under all conditions.
  • To ensure that the driver can concentrate on driving, current automatic driving systems have developed corresponding driver state detection methods to monitor the driver's state.
  • an embodiment of the present application proposes a driving reminder method, a driving state detection method, and a computing device to solve the problems in the prior art.
  • an embodiment of the present application discloses a driving reminder method, including:
  • An embodiment of the present application also discloses a computing device, including:
  • one or more processors; and one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the computing device to execute the foregoing method.
  • An embodiment of the present application also discloses one or more machine-readable media, on which instructions are stored, which when executed by one or more processors, cause a computing device to execute the foregoing method.
  • the method proposed in the embodiment of the present application can solve the problem of the high false alarm rate of the existing driver state detection system, and ensure that the driver has the ability to safely take over the vehicle within a specified time range.
  • Fig. 1 is a block diagram of an automatic driving system according to an embodiment of the application.
  • Fig. 2 is a block diagram of visual algorithm processing according to an embodiment of the application.
  • Fig. 3 shows a flowchart of a driving reminding method according to an embodiment.
  • Figs. 4A to 4D are flowcharts of sub-steps of the driving reminding method shown in Fig. 3.
  • Fig. 5 schematically shows a block diagram of a computing device for executing the method according to the present application.
  • Fig. 6 schematically shows a storage unit for holding or carrying program codes for implementing the method according to the present application.
  • The embodiments of this application propose a driving reminder method and device applied to an automatic driving system, which can solve the problem of the high false alarm rate of existing driver state detection systems and ensure that the driver is able to safely take over the vehicle within the specified time range.
  • the embodiment of the application proposes a driving reminding method, which is applied to an automatic driving system of a vehicle.
  • the automatic driving system can detect the information inside and outside the car, and this information can be input into the automatic driving system as the basis for the automatic driving system to judge and perform operations.
  • Information outside the vehicle may include traffic environment information and natural environment information. Traffic environment information includes, for example, road condition information, traffic light information, and obstacle information; natural environment information includes, for example, temperature, humidity, and light. This information can be obtained through detection elements such as sensors, cameras, and radars outside the vehicle.
  • In-vehicle information includes, for example, in-vehicle environment information, driver status information, and the driver's operation information on the vehicle. This information can be acquired through detection elements such as in-vehicle sensors and cameras.
  • Fig. 1 shows a system block diagram of an automatic driving system proposed in an embodiment of the application.
  • the automatic driving system of a vehicle can be composed of software, hardware, or a combination of software and hardware.
  • The automatic driving system can include a vehicle sensor module 10, a vehicle-mounted camera module 20, a driver state detection module 30, an automatic driving system main control module 40, a wake-up strategy control module 50, and a human-computer interaction interface module 60.
  • The vehicle sensor module 10 and the vehicle-mounted camera module 20 may be hardware devices connected to the on-board computer through a connection such as a data bus; the driver state detection module 30, the automatic driving system main control module 40, and the wake-up strategy control module 50 may be computer programs in the on-board computer's processor; and the human-computer interaction interface module 60 may be a software module or a hardware module.
  • the vehicle sensor module 10 and the vehicle camera module 20 are used to collect information in the vehicle, and the vehicle sensor module 10 is used to detect whether there is a driver at the driving position through sensors.
  • the sensor module 10 may be a pressure sensor installed in the driver's seat.
  • the vehicle sensor module 10 may be used to receive related sensor signals of the vehicle, such as driving position pressure sensor signals, seat belt signals, etc., to determine whether the driver is in the driving position.
  • the vehicle-mounted camera module 20 is used to collect multiple frames of video images of the driving position.
  • the vehicle-mounted camera module 20 may be one or more of a normal camera, a high-definition camera, and a stereo camera.
  • the multiple frames of video images may be continuous or discontinuous.
  • the vehicle-mounted camera module 20 can be installed at the A-pillar position in the vehicle to collect image information of the driver and monitor the status of the driver.
  • The installation position of the vehicle-mounted camera module should be chosen to capture as much of the driver's facial information as possible while not affecting the driver's operation, for example not obscuring the driver's view or controls.
  • The sensor signals and the video images may be sent to the driver state detection module 30 of the on-board computer.
  • the driver state detection module 30 may use at least one of the sensor signal and the video image to determine the driving state of the driver, and send the driving state to the automatic driving system main control module 40 .
  • the driver state detection module 30 may also send the driving state to the wake-up strategy control module 50.
  • the driver state detection module 30 may receive the image information.
  • The driver state detection module 30 performs driver detection through a face classifier to determine whether there is a human face in a detected video frame. When there is a human face, the rectangular area of the face can be determined, the driver's facial feature points can be located within that rectangular area to obtain facial feature information, and the driver's state can be determined based on the facial feature information.
  • FIG. 2 shows a visual algorithm processing block diagram of the driver state detection module 30.
  • the processing flow of the driver state detection module 30 includes four parts: video image input, driver detection, facial feature point positioning, and driver state judgment.
  • the video image input process is used to obtain the video image of the on-board camera module 20;
  • The driver detection process is used to determine whether the driver's facial image information exists in the multi-frame video images;
  • the facial feature point positioning process is used to determine the facial feature points from that facial image information;
  • the driver state judgment process is used to judge the driver's driving state based on the facial feature information. A minimal skeleton of this flow is sketched below.
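  • As an illustration only, the following Python skeleton mirrors this four-stage flow; all class and method names are our own assumptions, not names used by the application.

```python
# Skeleton of the four-stage visual processing flow of the driver state
# detection module (module 30): the input frame is the "video image input",
# and the three methods cover detection, positioning, and state judgment.
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class FrameResult:
    face_found: bool
    landmarks: Optional[np.ndarray] = None  # (N, 2) facial feature points
    state_level: int = 0                    # 0 = awake ... 2 = not awake

class DriverStateDetector:
    def process(self, frame: np.ndarray) -> FrameResult:
        box = self.detect_driver(frame)               # driver detection
        if box is None:
            return FrameResult(face_found=False)
        pts = self.locate_feature_points(frame, box)  # feature point positioning
        return FrameResult(True, pts, self.judge_state(pts))  # state judgment

    def detect_driver(self, frame):
        raise NotImplementedError  # face classifier (see sketches below)

    def locate_feature_points(self, frame, box):
        raise NotImplementedError  # SDM-style regression (see sketches below)

    def judge_state(self, pts) -> int:
        raise NotImplementedError  # eye/mouth/head-pose descriptors -> level
```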
  • The main control module 40 of the automatic driving system can be used to send system status signals according to the operating conditions of the automatic driving system, such as a signal that the system is malfunctioning, that an emergency situation has occurred, or that the automatic driving system cannot accurately determine the road conditions ahead.
  • the wake-up strategy control module 50 may be used to send different instructions according to the driver state and/or the state signal of the automatic driving system, and use different reminding methods to remind the driver to take over the vehicle.
  • Different reminding methods can be adopted for the driver according to the driver status and the confidence level, to ensure that the driver can take over the driving task safely and smoothly within the specified time.
  • the wake-up strategy control module 50 formulates a wake-up strategy according to the driving state, and executes the wake-up strategy through the human-computer interaction interface module 60.
  • The driver state detection module 30 and the wake-up strategy control module 50 may be software modules in the on-board computer's processor, while the human-computer interaction interface module 60 may be a hardware module that sends notification information to the driver according to the control instructions given by the wake-up strategy control module 50; the control instructions specify different wake-up modes.
  • the human-computer interaction interface module 60 executes a corresponding wake-up mode for the driver to remind the driver to take over the driving task.
  • the human-computer interaction interface module 60 may include, for example, a sound module, a light module, a vibration module, a display module, etc., which are not particularly limited in this application.
  • The above description of the automatic driving system is only for convenience of description and does not limit the present application to the scope of the listed embodiments. It can be understood that, after understanding the principle of the system, those skilled in the art may arbitrarily combine the various modules, or form subsystems connected with other modules, without departing from this principle, making various modifications and changes in the form and details of the field in which the above method and system are applied.
  • the above-mentioned driver state detection module 30, automatic driving system main control module 40, wake-up strategy control module 50 and human-computer interaction interface module 60 are separate software modules.
  • These modules may also be integrated in pairs or in larger combinations, and any such variation or modification belongs to the protection scope of this application.
  • For example, the driver state detection module 30 and the automatic driving system main control module 40 can be integrated together in the form of software; the wake-up strategy control module 50 and the human-computer interaction interface module 60 can be integrated together in the form of software; the driver state detection module 30, the automatic driving system main control module 40, and the wake-up strategy control module 50 can be integrated together in the form of software; or the driver state detection module 30, the automatic driving system main control module 40, the wake-up strategy control module 50, and the human-computer interaction interface module 60 can all be integrated together in the form of software. This application does not particularly limit whether the above modules are implemented separately or in combination, and all such variations fall within the protection scope of this application.
  • FIG. 3 is a flowchart of the steps of the driving reminding method according to the first embodiment of the application. As shown in FIG. 3, the driving reminder method of the embodiment of the present application is applied to an automatic driving system and includes the following steps:
  • the automatic driving system obtains facial image information of the detected object.
  • the automatic driving system may obtain the facial image information of the driver in the driving position of the vehicle where the automatic driving system is located through a sensor or a camera.
  • the face image information of the detected object may include a face image recognized through face recognition technology.
  • The vehicle-mounted camera module 20 in FIG. 1 may be used to shoot video, for example continuous video images at 30 frames per second.
  • a face classifier can be used to analyze and detect the video image to determine whether there is face image information.
  • The face classifier may be obtained by extracting MBLBP (Multiscale Block LBP) features from a training set containing face and non-face samples and then training with a cascaded AdaBoost algorithm.
  • the rectangular area of the face can be obtained through an algorithm.
  • judging whether there is face image information in the image can also be implemented by means of machine learning.
  • a machine learning model can be used to determine whether there is face image information in the image.
  • The application of the machine learning model includes a training phase and a use phase. In the training phase, multiple images containing face image information and images not containing face image information can be input into the machine learning model, each labeled "contains" or "does not contain", and these images are used as samples to train the model. In the use phase, new images are input into the trained machine learning model, which automatically outputs the judgment of whether each image contains face image information.
  • judging whether there is face image information in the image can also be implemented by means of deep learning in machine learning.
  • Deep learning uses a neural network model containing multiple hidden layers to build and simulate a brain-like network for analysis and learning, imitating the mechanism of the human brain to interpret data such as text, images, and sounds. Deep learning usually requires a larger amount of training data to train the neural network model, for example a large number of images labeled as "containing face image information" or "not containing face image information". In the use phase after training, a new image is input into the neural network model, which automatically outputs the judgment of whether the image contains facial image information; the accuracy of the output is significantly improved compared with traditional machine learning models.
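  • As a concrete sketch of this face-presence check, the snippet below uses OpenCV's stock pretrained Haar cascade as an assumed stand-in for the MBLBP and cascaded-AdaBoost classifier described here, which is not itself published with the application.

```python
# Sketch: decide whether a video frame contains the driver's face.
# The pretrained Haar cascade is a stand-in, not the application's classifier.
import cv2

_FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_rect(frame_bgr):
    """Return a face rectangle (x, y, w, h), or None if no face is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda r: r[2] * r[3])  # assume largest face = driver
```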
  • step S102 can be performed as follows:
  • Facial feature points, such as the eyes, mouth corners, nose tip, and face contour, are first obtained from the aforementioned rectangular face area; the position information of these facial feature points is then used as the facial feature information for subsequently determining the driving state.
  • When the position information of the facial feature points is determined, an initial shape is given for the rectangular face region, image features of the key feature points are extracted, and the initial shape is regressed, through continuous iteration, to a position close to or even equal to the true shape.
  • The position information of the facial feature points can be determined using the aforementioned Supervised Descent Method (SDM), with the Histogram of Oriented Gradients (HOG) as the image feature.
  • The HOG feature is a descriptor formed by computing and accumulating histograms of gradient orientations over local regions of the image; it is described further below and will not be repeated here.
  • determining the position information of the facial feature points may also be obtained through machine learning.
  • a machine learning model can be used to obtain the location information of facial feature points.
  • The application of the machine learning model includes a training phase and a use phase. In the training phase, multiple face images annotated with the position information of their facial feature points can be input into the machine learning model to train it. In the use phase, a new face image is input into the trained machine learning model, which automatically outputs the position information of the facial feature points of that face image.
  • determining the position information of the facial feature points can also be obtained by means of deep learning in machine learning.
  • a large amount of training data can be used to train the neural network model.
  • These training data are, for example, face images annotated with the position information of their facial feature points. In the use phase, a new face image is input into the neural network model, which can automatically output the position information of the facial feature points of that face image; the accuracy of the output is significantly improved compared with traditional machine learning models.
  • step S103 can be executed as follows:
  • the driving state of the detected object can be obtained based on facial feature information.
  • the corresponding facial feature information can be obtained through facial feature points.
  • the facial feature information includes, for example, position information of facial feature points.
  • The location information can be used to extract feature descriptors for determining the driver's state.
  • For example, the driver's eye region can be located from the position information of the facial feature points; descriptors such as the aspect ratio of the driver's eyes are extracted from the eye region's position information, and an SVM algorithm is used to determine the state of the eyes, which can include, for example, open, closed, and half-open.
  • Similarly, the driver's mouth region can be located from the position information of the facial feature points; descriptors such as the aspect ratio of the driver's mouth are extracted from the mouth region's position information, and a specific algorithm is used to determine the state of the mouth, which can include, for example, open, closed, and half-open. A sketch of such aspect-ratio descriptors follows.
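  • The following sketch computes eye and mouth aspect ratios from located feature points. The landmark indices assume the common 68-point (iBUG-68) layout, which the application does not prescribe; treat them as illustrative assumptions.

```python
# Sketch: height-to-width ("aspect ratio") descriptors for eye and mouth
# regions, assuming a 68-point landmark layout.
import numpy as np

def aspect_ratio(pts: np.ndarray) -> float:
    """Height-to-width ratio of a set of (x, y) landmark points."""
    width = pts[:, 0].max() - pts[:, 0].min()
    height = pts[:, 1].max() - pts[:, 1].min()
    return height / max(width, 1e-6)

def eye_and_mouth_ratios(landmarks: np.ndarray):
    left_eye, right_eye = landmarks[36:42], landmarks[42:48]  # iBUG-68 eyes
    mouth = landmarks[48:68]                                  # iBUG-68 mouth
    return aspect_ratio(left_eye), aspect_ratio(right_eye), aspect_ratio(mouth)
```

  • In the embodiments these ratios are fed to a classifier (such as the SVM described below) rather than compared directly against fixed thresholds.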
  • the head posture of the driver can be calculated by combining the internal and external parameters of the on-board camera module 20.
  • the head posture of the driver can be calculated according to the driver's current captured image, combined with the deflection angle of the camera relative to the x-y-z three-axis coordinate system.
  • the angle and transformation relationship between the axis of the driver's head and the axis of the body are used to determine the driver's head posture.
  • For example, when the angle between the driver's head and the body axis is smaller than a preset threshold, for example between 0 and 15 degrees, the driver is considered to be in a normal posture; when the angle between the driver's head and the body axis exceeds 15 degrees, the driver is considered to be possibly sleeping.
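  • As a sketch of such head-pose estimation from the camera's parameters, the snippet below uses the generic PnP formulation; the six 3D reference points and the Euler-angle convention are illustrative assumptions rather than values from the application.

```python
# Sketch: head pitch from 2D landmarks plus camera intrinsics, via solvePnP.
import cv2
import numpy as np

MODEL_3D = np.array([            # nose tip, chin, eye corners, mouth corners
    [0.0, 0.0, 0.0], [0.0, -330.0, -65.0],
    [-225.0, 170.0, -135.0], [225.0, 170.0, -135.0],
    [-150.0, -150.0, -125.0], [150.0, -150.0, -125.0]])  # rough positions, mm

def head_pitch_deg(image_pts, camera_matrix, dist_coeffs):
    """image_pts: (6, 2) landmarks corresponding to MODEL_3D, in pixels."""
    ok, rvec, _ = cv2.solvePnP(MODEL_3D,
                               np.asarray(image_pts, dtype=np.float64),
                               camera_matrix, dist_coeffs)
    if not ok:
        return None
    rot, _ = cv2.Rodrigues(rvec)
    # x-axis rotation under a ZYX Euler decomposition of the rotation matrix.
    return float(np.degrees(np.arctan2(rot[2, 1], rot[2, 2])))

# Per the text: within roughly 0-15 degrees of the body axis is a normal
# posture; beyond 15 degrees the driver may be dozing.
```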
  • The automatic driving system may set one or more of the states of the eyes, mouth, and head to correspond to the driving state of the driver. In some embodiments, once the automatic driving system has determined the state of one or more of the eyes, mouth, head, and so on, the driving state of the driver can be determined.
  • The driving state can be divided into multiple driving state levels. Taking detection of the driving state from the driver's eyes as an example, three levels (0/1/2) can be set, where a lower level means the driver is more awake. If either eye is closed at a given moment, the count of consecutive closed-eye frames is incremented. If the number of consecutive closed-eye frames is greater than the maximum consecutive-closed-eye threshold, the driver's driving state level is 2; if it lies between the maximum and minimum thresholds, the level is 1; otherwise, the level is 0.
  • Similarly, when it is detected that the driver's head has drooped for longer than a first duration, the driving state level is 2; when the head-drooping time lies between a second duration and the first duration, the first duration being longer than the second, the level is 1; otherwise, the level is 0.
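  • A minimal sketch of this level logic follows; the two thresholds are configurable, and the values here are assumptions.

```python
# Sketch: driving state level from consecutive closed-eye frames
# (the head-droop durations follow the same two-threshold pattern).
MAX_CLOSED, MIN_CLOSED = 45, 15   # frame thresholds, assumed values

def driving_state_level(consecutive_closed: int) -> int:
    """0 = awake, 1 = less awake, 2 = not awake (lower is more awake)."""
    if consecutive_closed > MAX_CLOSED:
        return 2
    if consecutive_closed >= MIN_CLOSED:
        return 1
    return 0
```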
  • the open and closed state of the eyes can be determined based on the SVM classifier by extracting feature information from the eye region.
  • The feature information may include the fusion of the LBP feature, the Hu moment feature, and the histogram feature of the gray-scale rotation-invariant uniform pattern; this fused information is used as the feature description of the eye region.
  • To obtain the SVM classifier, the fused feature information is first extracted from the images, and the images of a data sample set are then trained on the basis of the support vector machine algorithm to obtain a classifier capable of judging the open and closed states of the eyes.
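  • A sketch of such a fused descriptor and classifier is given below; the library choices (OpenCV, scikit-image, scikit-learn) are our assumptions, since the embodiments name no toolkit.

```python
# Sketch: fused eye descriptor (rotation-invariant uniform LBP histogram +
# Hu moments + gray-level histogram) and an SVM open/closed classifier.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def eye_descriptor(eye_gray: np.ndarray) -> np.ndarray:
    """eye_gray: uint8 grayscale crop of the eye region."""
    lbp = local_binary_pattern(eye_gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hu = cv2.HuMoments(cv2.moments(eye_gray)).flatten()         # 7 Hu moments
    gray_hist = cv2.calcHist([eye_gray], [0], None, [16], [0, 256]).flatten()
    gray_hist /= max(gray_hist.sum(), 1e-6)
    return np.concatenate([lbp_hist, hu, gray_hist])

def train_eye_state_svm(eye_crops, labels):
    """labels: 1 = open, 0 = closed; returns a fitted classifier."""
    X = np.stack([eye_descriptor(c) for c in eye_crops])
    return SVC(kernel="rbf").fit(X, labels)
```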
  • step S104 can be performed as follows:
  • The predetermined condition is based on one of, or a combination of, the driver's driving state, the current system state signal of the automatic driving system, and the accuracy of the driving-state judgment (for example, the system confidence).
  • For example, the predetermined condition may be that the current system status signal of the automatic driving system indicates a system error or a vehicle failure. That is, as long as the system status signal contains a signal related to a system error or a vehicle failure, the wake-up strategy control module 50 shown in FIG. 1 determines the reminding mode according to the driving state of the driver and sends the corresponding reminding information through the human-computer interaction interface module 60.
  • As another example, the predetermined condition may be that the driving state level is 1 or 2 (for example, the driver is not awake or less awake) and the current system state signal of the automatic driving system indicates a system error or a vehicle failure. That is, it must simultaneously hold that the driver is not in a fit driving state and that the system status signal shows a system error or vehicle failure requiring the driver's intervention. When the predetermined condition is met, the wake-up strategy control module 50 determines the reminding method according to the driving state of the driver and sends the corresponding reminding information through the human-computer interaction interface module 60.
  • The system confidence, that is, the accuracy of the system's judgment, may be determined, for example, as follows: the system confidence is determined according to the number of consecutive detections of the facial image information, and it includes two or more confidence levels.
  • The automatic driving system may use the number of consecutive frames in which a face is detected to grade the confidence of the result, using three levels (0/1/2), where a higher level means higher confidence. If the number of consecutively detected face frames is greater than the maximum threshold, the confidence level is 2; if it lies between the maximum and minimum thresholds, the confidence level is 1; otherwise, the confidence level is 0.
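  • This mirrors the driving-state counter above; a sketch with assumed thresholds:

```python
# Sketch: system confidence from consecutive face-detection frames.
MAX_FACE, MIN_FACE = 30, 10   # frame thresholds, assumed values

def system_confidence(consecutive_face_frames: int) -> int:
    """0 = low, 1 = medium, 2 = high (higher means more confident)."""
    if consecutive_face_frames > MAX_FACE:
        return 2
    if consecutive_face_frames >= MIN_FACE:
        return 1
    return 0
```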
  • As yet another example, the predetermined condition may be that the driving state level is 1 or 2 (for example, the driver is relatively unconscious or very unconscious), the current system state signal of the automatic driving system indicates a system error or a vehicle failure, and the aforementioned system confidence is 1 or 2 (that is, the confidence level is medium or high). That is, it must simultaneously hold that the driver is not in a fit driving state, that the system status signal shows that driver intervention is required, and that the system confidence is sufficiently high. When the predetermined condition is met, the wake-up strategy control module 50 of the on-board control system determines the reminding method according to the driving state of the driver and sends the corresponding reminding information through the human-computer interaction interface module 60.
  • The operation of issuing corresponding reminder information based on the driving state of the driver may, for example, set reminding methods of different intensities for different driving state levels, for example three intensities: high, medium, and low.
  • Reminding methods of different intensities can be realized through one or more means such as volume, light flashing, steering wheel vibration, and seat vibration.
  • The difference between high-, medium-, and low-intensity reminders lies in the strength or frequency of the means used.
  • For example, a high-intensity reminder can use a high-decibel volume and high-frequency light flashing; a medium-intensity reminder can use a medium-decibel volume, medium-frequency light flashing or medium-frequency flashing of a yellow light, and medium-frequency steering wheel or seat vibration; a low-intensity reminder can use a low-decibel volume, low-frequency light flashing or flashing of a green light, and low-frequency steering wheel or seat vibration.
  • When issuing the corresponding reminder information based on the driving state, the system confidence can also be used as a reference factor; that is, the reminding method can be determined based on both the driving state and the system confidence, and reminder information of the corresponding level can be issued.
  • Table 1 shows an example of multiple reminding methods set for the driving state when the predetermined conditions are met, as follows:
    Driving status level   System confidence   Reminder intensity
    1 (not sober)          1 (high)            High
    2 (less sober)         1 (high)            Medium
    3 (awake)              2 (medium)          Low
    2 (less sober)         3 (low)             Low
    3 (awake)              3 (low)             Low
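  • In code, Table 1 can be represented as a simple lookup; encoding it as a dict keyed by (driving state level, system confidence) is our implementation assumption.

```python
# Sketch: reminder intensity lookup in the spirit of Table 1.
REMINDER_TABLE = {
    (1, 1): "high",    # not sober,  high confidence
    (2, 1): "medium",  # less sober, high confidence
    (3, 2): "low",     # awake,      medium confidence
    (2, 3): "low",     # less sober, low confidence
    (3, 3): "low",     # awake,      low confidence
}

def reminder_intensity(state_level: int, confidence: int) -> str:
    # Fall back to a low-intensity reminder for combinations Table 1 omits.
    return REMINDER_TABLE.get((state_level, confidence), "low")
```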
  • steps S101 to S104 may respectively include the following sub-steps.
  • the step S101 that is, the step of acquiring face image information of the detected object, may include the following sub-steps:
  • S1011 Use a face classifier to detect whether there is face image information in the collected image
  • S1012 When it is determined that there is facial image information in the collected image, extract facial feature information from the facial image information;
  • An algorithm can be used to obtain the rectangular area of the human face. After the rectangular area of the face is determined, at least one piece of facial feature information can be obtained from it.
  • a face classifier may be used to detect whether there is a face image.
  • The face classifier can be obtained by extracting MBLBP features from a training set containing face and non-face samples and training with a cascaded AdaBoost algorithm.
  • The AdaBoost algorithm is used to select the rectangular features (weak classifiers) that best represent a face; the weak classifiers are combined into a strong classifier by weighted voting, and several strong classifiers obtained by training are then connected in series to form a cascaded classifier.
  • The aforementioned MBLBP feature refers to the Multiscale Block LBP feature. Compared with the plain LBP feature, the MBLBP feature used in the embodiment of the present application is more robust and characterizes the image more completely.
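  • For illustration, the following sketch computes a single MBLBP code over an image patch; varying the block size gives the multiscale family of features.

```python
# Sketch: one Multiscale Block LBP (MBLBP) code. The patch is divided into a
# 3x3 grid of blocks, each block's mean intensity replaces the single pixel
# of plain LBP, and the 8 neighbour blocks are compared with the centre.
import numpy as np

def mblbp_code(patch: np.ndarray) -> int:
    """patch: grayscale array whose height and width are multiples of 3."""
    h, w = patch.shape
    means = patch.reshape(3, h // 3, 3, w // 3).mean(axis=(1, 3))  # 3x3 means
    centre = means[1, 1]
    # Neighbour blocks in clockwise order starting from the top-left.
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = ["1" if means[r, c] >= centre else "0" for r, c in order]
    return int("".join(bits), 2)
```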
  • the facial feature information of the face image information may be further determined.
  • the rectangular area of the face can be obtained.
  • The facial feature points can then be obtained, and their position information can be determined by a positioning method and used as the facial feature information. That is, obtaining the facial feature information may include automatically locating the positions of the facial feature points within the rectangular area of the human face.
  • the facial feature information may be the positions of various parts that make up the human face, such as the eyes, the corners of the mouth, the tip of the nose, and the contour of the human face. These feature positions can be obtained through algorithm positioning.
  • When the position information of the facial feature points is determined, an initial shape is given for the rectangular face region, image features of the key feature points are extracted, and the initial shape is regressed, through continuous iteration, to a position close to or even equal to the true shape.
  • the facial feature information may include eye information of the detected object; the eye feature information may include, for example, a ratio of eye height to eye width.
  • step S102 may include the following sub-steps:
  • S1021 Determine facial feature points according to the facial image information
  • S1022 Determine location information of the facial feature point, and use the location information as facial feature information.
  • Sub-step S1021 that is, the step of determining facial feature points according to the facial image information, may include:
  • S1021a Obtain a rectangular area of the face according to the face image information
  • S1021b Extract the initial shape of at least one facial feature point from the rectangular area of the face;
  • Sub-step S1022 determining the location information of the facial feature points, and using the location information as the facial feature information may include:
  • The image features include the Histogram of Oriented Gradients (HOG) feature.
  • The facial feature points, such as the eyes, mouth corners, nose tip, and face contour, can first be obtained from the aforementioned rectangular area of the face; the position information of these facial feature points is then used as the facial feature information for subsequently determining the driving state.
  • Specifically, an initial shape is given, image features of the key feature points are extracted, and the initial shape is regressed, through continuous iteration, to a position close to or even equal to the true shape.
  • Determining the position information of the facial feature points may be solved using the Supervised Descent Method (SDM), with the Histogram of Oriented Gradients (HOG) as the image feature.
  • The supervised descent method is a method for minimizing a non-linear least squares objective function. By learning a series of descent directions and the scales of those directions, it makes the objective function converge to its minimum very quickly, avoiding the need to compute Jacobian and Hessian matrices.
  • The HOG feature is a descriptor formed by computing and accumulating histograms of gradient orientations over local regions of an image. In essence it is statistical information about the image gradient, and it maintains good invariance to geometric and photometric deformations of the image.
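  • A compact statement of the SDM formulation just described (the notation below is ours, not the application's):

```latex
% SDM minimizes a non-linear least-squares objective over landmark positions
% x, where h(d(x)) denotes the HOG features sampled at x and \phi_* denotes
% the features at the true landmark positions:
\[
  f(x) = \bigl\lVert h\bigl(d(x)\bigr) - \phi_* \bigr\rVert_2^2
\]
% Instead of Newton steps (which require Jacobians and Hessians), SDM learns
% a sequence of linear descent maps (R_k, b_k) from training data and iterates
\[
  x_{k+1} = x_k - R_k\, h\bigl(d(x_k)\bigr) - b_k
\]
% until the shape converges to (or near) the true shape.
```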
  • the aforementioned step S103 that is, the step of judging the driving state of the detected object according to the facial feature information, may include the following sub-steps:
  • S1031 Extract eye feature information according to the eye information, and determine the open and closed state of the eyes;
  • S1032 Determine the driving state of the detected object by using the eye open and closed states of the multiple continuous images, where the driving state includes more than two driving state levels.
  • Specifically, feature information can be extracted from the eye region, and the open and closed states of the eyes can then be obtained with the SVM classifier.
  • For example, the fusion of the LBP feature, the Hu moment feature, and the histogram feature of the gray-scale rotation-invariant uniform pattern can be used to describe the eye region.
  • The driver's state is judged from the open and closed states of the eyes over consecutive frames. For example, three levels (0/1/2) may be set, where a lower level means the driver is more awake. Specifically, if either eye is closed at a given moment, the count of consecutive closed-eye frames is incremented. If the number of consecutive closed-eye frames is greater than the maximum threshold, the driver's state level is 2; if it lies between the maximum and minimum thresholds, the level is 1; otherwise, the level is 0.
  • The facial feature information may also include mouth information of the detected object; the mouth information includes the ratio of the height of the detected object's mouth to its width, and/or the area of the mouth region.
  • the facial feature information may further include head posture information of the detected object; the head posture information may include, for example, the angle between the current head axis direction and the preset head axis direction.
  • the embodiment of the present application proposes a driving reminder method, which has at least the following advantages compared with the prior art:
  • the driving reminding method proposed in the embodiments of the present application can solve the problem of the high false alarm rate of the existing driver state detection system, and ensure that the driver has the ability to safely take over the vehicle within a specified time range.
  • With this method, the driver does not need to stay awake at all times or pay attention to the road conditions ahead, and may even sleep, but needs to be awakened in the event of a system failure.
  • the system can adopt different intensities of reminding methods according to the detected driver's status, so that the driver can complete the switching of the driving task subject in a short time.
  • The driving reminder method proposed in this application can be applied to automatic driving systems of level L3 and above; it can solve the problem of the high false alarm rate of existing driver state detection systems and ensure that the driver is able to safely take over the vehicle within a specified time range.
  • the driving reminder method proposed in the optional embodiment of the present application at least includes the following advantages:
  • The driving reminder method proposed in some embodiments of this application uses the MBLBP feature as the feature descriptor of the face detection process. This feature characterizes image information more completely, is more robust than the LBP feature, and is more efficient than Haar-like features.
  • The embodiments of this application use the supervised descent method to minimize a non-linear least squares objective function. This method is fast and accurate, and it overcomes shortcomings of many second-order optimization schemes, such as requiring differentiability or a computationally expensive Hessian matrix.
  • the embodiment of the present application also proposes a driving state detection method, which is used to detect the state of the driver of an automatic driving vehicle, including the aforementioned steps S101 to S103.
  • an embodiment of the present application also proposes a driving reminder device, including:
  • a memory in which a computer readable program is stored
  • the processor is connected to the memory and is used to execute the computer-readable program to perform the following operations:
  • an embodiment of the present application also provides a driving state detection device, including:
  • a memory in which a computer readable program is stored
  • the processor is connected to the memory and is used to execute the computer-readable program to perform the following operations:
  • the driving state of the detected object is acquired.
  • the embodiment of the present application also proposes an automatic driving system, including:
  • the vehicle sensor module is used to detect whether the detected object is in the driving position
  • Vehicle-mounted camera module used to obtain images of detected objects
  • a memory in which a computer readable program is stored
  • the processor is connected to the vehicle sensor module, the vehicle camera module, and the memory, acquires sensor signals and the image, and is used to execute the computer-readable program to perform the following operations:
  • the embodiment of the present application also proposes an automatic driving system for detecting the driving state of the detected object, including:
  • the vehicle sensor module is used to detect whether the detected object is in the driving position
  • Vehicle-mounted camera module used to obtain images of detected objects
  • a memory in which a computer readable program is stored
  • the processor is connected to the vehicle sensor module, the vehicle camera module, and the memory, acquires sensor signals and the image, and is used to execute the computer-readable program to perform the following operations:
  • the driving state of the detected object is acquired.
  • Each component embodiment of the present application may be implemented by hardware, or by software modules running on one or more processors, or by a combination of them.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all components in the computing device according to the embodiments of the present application.
  • This application can also be implemented as a device or device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
  • Such a program for implementing the present application may be stored on a computer-readable medium, or may have the form of one or more signals. Such signals can be downloaded from Internet websites, or provided on carrier signals, or provided in any other form.
  • FIG. 5 shows a computing device that can implement the method according to the present application.
  • The computing device typically includes a processor 1010 and a computer program product or computer readable medium in the form of a memory 1020.
  • the memory 1020 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the memory 1020 has a storage space 1030 for executing the program code 1031 of any method step in the above method.
  • the storage space 1030 for program codes may include various program codes 1031 for implementing various steps in the above method. These program codes can be read out from or written into one or more computer program products.
  • These computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks.
  • Such a computer program product is usually a portable or fixed storage unit as described with reference to FIG. 6.
  • the storage unit may have storage segments, storage spaces, etc., arranged similarly to the memory 1020 in the computing device of FIG. 5.
  • the program code can be compressed in an appropriate form, for example.
  • The storage unit includes computer-readable code 1031', that is, code that can be read by a processor such as the processor 1010; when run by a computing device, the code causes the computing device to execute each step of the methods described above.
  • the embodiment of the present application provides a computing device, including: one or more processors; and one or more machine-readable media on which instructions are stored. When executed by the one or more processors, The computing device executes the method described in one or more of the embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Transportation (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present application relates to a driving reminder method, a driving state detection method, and a computing device. The driving reminder method comprises: acquiring facial image information of a detected object; acquiring at least one piece of facial feature information of the detected object from the facial image information; determining the driving state of the detected object according to the facial feature information; and sending corresponding reminder information on the basis of the driving state when a predetermined condition is met. The method proposed in the embodiments of the present application can solve the problem of the high false alarm rates of existing driver state detection systems, and ensure that a driver is able to take over a vehicle within a specified time range.
PCT/CN2019/089639 2019-05-31 2019-05-31 Procédé fournissant des notifications de conduite, procédé de détection d'état de conduite et dispositif informatique WO2020237664A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980000877.1A CN110582437A (zh) 2019-05-31 2019-05-31 驾驶提醒方法、驾驶状态检测方法和计算设备
PCT/CN2019/089639 WO2020237664A1 (fr) 2019-05-31 2019-05-31 Procédé fournissant des notifications de conduite, procédé de détection d'état de conduite et dispositif informatique

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/089639 WO2020237664A1 (fr) 2019-05-31 2019-05-31 Procédé fournissant des notifications de conduite, procédé de détection d'état de conduite et dispositif informatique

Publications (1)

Publication Number Publication Date
WO2020237664A1 (fr) 2020-12-03

Family

ID=68815615

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/089639 WO2020237664A1 (fr) 2019-05-31 2019-05-31 Procédé fournissant des notifications de conduite, procédé de détection d'état de conduite et dispositif informatique

Country Status (2)

Country Link
CN (1) CN110582437A (fr)
WO (1) WO2020237664A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591533A (zh) * 2021-04-27 2021-11-02 浙江工业大学之江学院 基于道路监控的防疲劳驾驶方法、装置、设备及存储介质
CN115284976A (zh) * 2022-08-10 2022-11-04 东风柳州汽车有限公司 车辆座椅自动调节方法、装置、设备及存储介质
CN115796494A (zh) * 2022-11-16 2023-03-14 北京百度网讯科技有限公司 用于无人驾驶车辆的工单处理方法、工单信息展示方法
CN116901975A (zh) * 2023-09-12 2023-10-20 深圳市九洲卓能电气有限公司 一种车载ai安防监控***及其方法
CN117622177A (zh) * 2024-01-23 2024-03-01 青岛创新奇智科技集团股份有限公司 一种基于工业大模型的车辆数据处理方法及装置

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110979340A (zh) * 2019-12-20 2020-04-10 北京海纳川汽车部件股份有限公司 车辆及其控制方法和装置
CN111645694B (zh) * 2020-04-15 2021-08-06 南京航空航天大学 一种基于姿态估计的驾驶员驾驶状态监测***及方法
CN112053224B (zh) * 2020-09-02 2023-08-18 中国银行股份有限公司 业务处理监控实现方法、装置及***
CN112693469A (zh) * 2021-01-05 2021-04-23 中国汽车技术研究中心有限公司 驾驶员接管车辆的测试方法、装置、电子设备及介质
CN112977476A (zh) * 2021-02-20 2021-06-18 纳瓦电子(上海)有限公司 基于雷达探测的车辆驾驶方法和自动驾驶车辆
CN113076801A (zh) * 2021-03-04 2021-07-06 广州铁路职业技术学院(广州铁路机械学校) 一种列车在途状态智能联动检测***及方法
CN113191214A (zh) * 2021-04-12 2021-07-30 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) 一种驾驶人员失误操作风险预警方法及***
CN113715766B (zh) * 2021-08-17 2022-05-24 厦门星图安达科技有限公司 一种车内人员检测方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103714660A (zh) * 2013-12-26 2014-04-09 苏州清研微视电子科技有限公司 基于图像处理融合心率特征与表情特征实现疲劳驾驶判别的***
CN104688251A (zh) * 2015-03-02 2015-06-10 西安邦威电子科技有限公司 一种多姿态下的疲劳及非正常姿态驾驶检测方法
US9460601B2 (en) * 2009-09-20 2016-10-04 Tibet MIMAR Driver distraction and drowsiness warning and sleepiness reduction for accident avoidance
CN106485191A (zh) * 2015-09-02 2017-03-08 腾讯科技(深圳)有限公司 一种驾驶员疲劳状态检测方法及***

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019028798A1 (fr) * 2017-08-10 2019-02-14 北京市商汤科技开发有限公司 Procédé et dispositif de surveillance d'une condition de conduite, et dispositif électronique associé
CN107657236A (zh) * 2017-09-29 2018-02-02 厦门知晓物联技术服务有限公司 汽车安全驾驶预警方法及车载预警***
CN109435959B (zh) * 2018-10-24 2020-10-09 斑马网络技术有限公司 疲劳驾驶处理方法、车辆、存储介质及电子设备

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9460601B2 (en) * 2009-09-20 2016-10-04 Tibet MIMAR Driver distraction and drowsiness warning and sleepiness reduction for accident avoidance
CN103714660A (zh) * 2013-12-26 2014-04-09 苏州清研微视电子科技有限公司 基于图像处理融合心率特征与表情特征实现疲劳驾驶判别的***
CN104688251A (zh) * 2015-03-02 2015-06-10 西安邦威电子科技有限公司 一种多姿态下的疲劳及非正常姿态驾驶检测方法
CN106485191A (zh) * 2015-09-02 2017-03-08 腾讯科技(深圳)有限公司 一种驾驶员疲劳状态检测方法及***

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591533A (zh) * 2021-04-27 2021-11-02 浙江工业大学之江学院 基于道路监控的防疲劳驾驶方法、装置、设备及存储介质
CN115284976A (zh) * 2022-08-10 2022-11-04 东风柳州汽车有限公司 车辆座椅自动调节方法、装置、设备及存储介质
CN115284976B (zh) * 2022-08-10 2023-09-12 东风柳州汽车有限公司 车辆座椅自动调节方法、装置、设备及存储介质
CN115796494A (zh) * 2022-11-16 2023-03-14 北京百度网讯科技有限公司 用于无人驾驶车辆的工单处理方法、工单信息展示方法
CN115796494B (zh) * 2022-11-16 2024-03-29 北京百度网讯科技有限公司 用于无人驾驶车辆的工单处理方法、工单信息展示方法
CN116901975A (zh) * 2023-09-12 2023-10-20 深圳市九洲卓能电气有限公司 一种车载ai安防监控***及其方法
CN116901975B (zh) * 2023-09-12 2023-11-21 深圳市九洲卓能电气有限公司 一种车载ai安防监控***及其方法
CN117622177A (zh) * 2024-01-23 2024-03-01 青岛创新奇智科技集团股份有限公司 一种基于工业大模型的车辆数据处理方法及装置
CN117622177B (zh) * 2024-01-23 2024-05-14 青岛创新奇智科技集团股份有限公司 一种基于工业大模型的车辆数据处理方法及装置

Also Published As

Publication number Publication date
CN110582437A (zh) 2019-12-17

Similar Documents

Publication Publication Date Title
WO2020237664A1 (fr) Procédé fournissant des notifications de conduite, procédé de détection d'état de conduite et dispositif informatique
US11783601B2 (en) Driver fatigue detection method and system based on combining a pseudo-3D convolutional neural network and an attention mechanism
CN111741884B (zh) 交通遇险和路怒症检测方法
CN102263937B (zh) 基于视频检测的驾驶员驾驶行为监控装置及监控方法
CN110765807B (zh) 驾驶行为分析、处理方法、装置、设备和存储介质
CN104021370B (zh) 一种基于视觉信息融合的驾驶员状态监测方法及***
JP4702100B2 (ja) 居眠り判定装置および居眠り運転警告装置
US20160159217A1 (en) System and method for determining drowsy state of driver
JP5666383B2 (ja) 眠気推定装置及び眠気推定方法
CN103824420A (zh) 基于心率变异性非接触式测量的疲劳驾驶识别***
CN105956548A (zh) 驾驶员疲劳状况检测方法和装置
CN107953827A (zh) 一种车辆盲区预警方法及装置
JP4182131B2 (ja) 覚醒度判定装置及び覚醒度判定方法
CN101950355A (zh) 基于数字视频的驾驶员疲劳状态检测方法
CN101599207A (zh) 一种疲劳驾驶检测装置及汽车
Chen et al. Driver behavior monitoring and warning with dangerous driving detection based on the internet of vehicles
WO2022110737A1 (fr) Procédé et appareil d'alerte précoce anti-collision de véhicule, dispositif terminal embarqué et support de stockage
Yan et al. Recognizing driver inattention by convolutional neural networks
CN110281944A (zh) 基于多信息融合的驾驶员状态监测***
CN114771545A (zh) 一种智能安全驾驶***
CN115937830A (zh) 一种面向特种车辆的驾驶员疲劳检测方法
CN103569084B (zh) 驾驶侦测装置及其方法
CN113561983A (zh) 一种车辆内人员智能安保检测***及其检测方法
KR20150066308A (ko) 운전자 운행 상태 판단 장치 및 그 방법
CN112926364A (zh) 头部姿态的识别方法及***、行车记录仪和智能座舱

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19930279

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19930279

Country of ref document: EP

Kind code of ref document: A1