WO2019232972A1 - Driving management method and system, vehicle-mounted intelligent system, electronic device, and medium

Driving management method and system, vehicle-mounted intelligent system, electronic device, and medium

Info

Publication number
WO2019232972A1
WO2019232972A1 · PCT/CN2018/105790 · CN2018105790W
Authority
WO
WIPO (PCT)
Prior art keywords
driver
vehicle
information
image
face
Prior art date
Application number
PCT/CN2018/105790
Other languages
English (en)
French (fr)
Inventor
孟德
李轲
于晨笛
秦仁波
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司 (Shanghai SenseTime Intelligent Technology Co., Ltd.)
Priority to SG11201911404QA
Priority to MYPI2019007079A
Priority to JP2019565001A
Priority to EP18919400.4A
Priority to KR1020207012402A
Priority to US16/224,389
Publication of WO2019232972A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60K ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K 28/00 Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, e.g. preventing fuel supply or ignition in the event of potentially dangerous conditions
    • B60K 28/02 Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, e.g. preventing fuel supply or ignition in the event of potentially dangerous conditions responsive to conditions relating to the driver
    • B60K 28/06 Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, e.g. preventing fuel supply or ignition in the event of potentially dangerous conditions responsive to conditions relating to the driver responsive to incapacity of driver
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W 40/08 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W 50/08 Interaction between the driver and the control system
    • B60W 50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W 60/005 Handover processes
    • B60W 60/0051 Handover processes from occupants to vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 Spoof detection, e.g. liveness detection
    • G06V 40/45 Detection of the body part being alive
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W 40/08 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W 2040/0818 Inactivity or incapacity of driver
    • B60W 2040/0863 Inactivity or incapacity of driver due to erroneous selection or response of the driver
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W 40/08 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W 2040/0872 Driver physiology
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2420/00 Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W 2420/40 Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W 2420/403 Image sensing, e.g. optical camera

Definitions

  • the present application relates to artificial intelligence technology, in particular to a driving management method and system, a vehicle-mounted intelligent system, electronic equipment, and a medium.
  • Intelligent vehicle is a comprehensive system that integrates functions such as environmental perception, planning and decision-making, and multi-level assisted driving. It integrates the technologies of computer, modern sensing, information fusion, communication, artificial intelligence and automatic control. Technology complex. At present, research on smart vehicles is mainly focused on improving the safety and comfort of automobiles, and providing excellent human-vehicle interaction interfaces. In recent years, intelligent vehicles have become a research hotspot in the world's vehicle engineering field and a new driving force for the growth of the automotive industry. Many developed countries have incorporated them into intelligent transportation systems that they have focused on.
  • the embodiments of the present application provide a driving management method and system, a vehicle-mounted intelligent system, an electronic device, and a medium.
  • a driving management method includes:
  • the vehicle is controlled to execute the operation instruction received by the vehicle.
  • the method further includes: if the feature matching result indicates that the feature matching is successful, obtaining the identity information of the vehicle driver according to the pre-stored face image with which the features are successfully matched;
  • the method further includes: acquiring a living body detection result of the acquired image;
  • the controlling the vehicle to execute the operation instruction received by the vehicle according to the result of the feature matching includes:
  • the pre-stored face images in the data set are correspondingly provided with driving authority
  • the method further includes: if the feature matching result indicates that the feature matching is successful, obtaining a driving authority corresponding to a pre-stored face image with successful feature matching;
  • the controlling the vehicle to execute an operation instruction received by the vehicle includes: controlling the vehicle to execute an operation instruction received by the vehicle and within the authority range.
  • the method further includes:
  • an early-warning prompt for an abnormal driving state and/or intelligent driving control is performed.
  • the driver state detection includes any one or more of the following: driver fatigue state detection, driver distraction state detection, driver predetermined distraction action detection, and driver gesture detection.
  • the performing driver fatigue state detection based on the video stream includes:
  • the state information of at least a partial region of the face includes any one or more of the following: eye open/closed state information and mouth open/closed state information;
  • the result of the driver fatigue state detection is determined according to a parameter value of an index for characterizing the driver fatigue state.
  • the indicator used to characterize the fatigue state of the driver includes any one or more of the following: the degree of eyes closed and the degree of yawning.
  • the parameter value of the degree of eye closure includes any one or more of the following: number of eye closures, eye-closure frequency, eye-closure duration, eye-closure amplitude, number of half-closures, and half-closure frequency; and/or,
  • the parameter value of the degree of yawning includes any one or more of the following: yawning state, number of yawns, yawn duration, and yawn frequency.
  • the detecting the distraction state of the driver based on the video stream includes:
  • the index used to characterize the driver's distraction state includes any one or more of the following: the degree of face-orientation deviation and the degree of line-of-sight deviation;
  • a result of detecting the driver's distraction state is determined according to a parameter value of an index for characterizing the driver's distraction state.
  • the parameter value of the face orientation deviation degree includes any one or more of the following: the number of turns, the duration of the turn, and the frequency of the turn; and / or,
  • the parameter value of the degree of sight line deviation includes any one or more of the following: the sight line direction deviation angle, the sight line direction deviation duration, and the sight line direction deviation frequency.
  • the detecting the face orientation and / or the line of sight direction of the driver image in the video stream includes:
  • face orientation detection and/or line-of-sight detection is performed according to the key points of the face.
  • performing face orientation detection according to the key points of the face to obtain the face orientation information includes:
  • the predetermined distraction action includes any one or more of the following: smoking action, drinking action, eating action, calling action, and entertaining action.
  • detecting the driver's predetermined distraction based on the video stream includes:
  • the method further includes:
  • a result of detecting a driver's predetermined distraction action is determined according to a parameter value of the index for characterizing a driver's distraction degree.
  • the parameter value of the driver's degree of distraction includes any one or more of the following: the number of predetermined distraction actions, the duration of the predetermined distraction action, and the frequency of the predetermined distraction action.
  • the method further includes:
  • if the result of the predetermined distraction action detection is that a predetermined distraction action is detected, the detected distraction action is prompted.
  • the method further includes:
  • a control operation corresponding to a result of the driver state detection is performed.
  • the performing a control operation corresponding to a result of the driver state detection includes at least one of the following:
  • the driving mode is switched to an automatic driving mode.
  • the method further includes:
  • the at least part of the results include: abnormal driving state information determined according to driver state detection.
  • the method further includes:
  • the method further includes:
  • the data set is acquired by the mobile terminal device from a cloud server and sent to the vehicle when receiving the data set download request.
  • the method further includes:
  • execution of the received operation instruction is refused.
  • the method further includes:
  • a data set is established according to the registered face image.
  • the obtaining a feature matching result of a face portion of at least one image in the video stream with at least one pre-stored face image in a data set includes:
  • a face portion of at least one image in the video stream is uploaded to the cloud server, and a feature matching result sent by the cloud server is received.
  • a vehicle-mounted intelligent system including:
  • a video acquisition unit for controlling a camera component provided on the vehicle to collect a video stream of the driver of the vehicle;
  • a result obtaining unit configured to obtain a feature matching result between a face portion of at least one image in the video stream and at least one pre-stored face image in a data set, wherein the data set stores a pre-stored face image of at least one registered driver;
  • An operation unit is configured to control the vehicle to execute an operation instruction received by the vehicle if the feature matching result indicates that the feature matching is successful.
  • a driving management method includes:
  • the method further includes:
  • the method further includes:
  • a data set is established according to the registered face image.
  • obtaining the feature matching result between the face image and at least one pre-stored face image in the data set includes:
  • Feature matching is performed on the face image and at least one pre-stored face image in the data set to obtain the feature matching result.
  • obtaining the feature matching result between the face image and at least one pre-stored face image in the data set includes:
  • a feature matching result of the face image and at least one pre-stored face image in a data set is obtained from the vehicle.
  • the method further includes:
  • the at least part of the results include: abnormal driving state information determined according to driver state detection.
  • the method further includes: performing a control operation corresponding to a result of the driver state detection.
  • the performing a control operation corresponding to a result of the driver state detection includes:
  • the driving mode is switched to an automatic driving mode.
  • the method further includes:
  • the method further includes:
  • the performing data statistics based on the abnormal driving state information includes:
  • the performing vehicle management based on the abnormal driving state information includes:
  • the performing driver management based on the abnormal driving state information includes:
  • an electronic device including:
  • An image receiving unit configured to receive a face image to be identified sent by a vehicle
  • a matching result obtaining unit configured to obtain a feature matching result between the face image and at least one pre-stored face image in a data set, wherein the data set stores at least one pre-stored face image of a registered driver;
  • An instruction sending unit is configured to: if the feature matching result indicates that the feature matching is successful, send an instruction to the vehicle to allow control of the vehicle.
  • a driving management system including: a vehicle and / or a cloud server;
  • the vehicle is used to execute the driving management method according to any one of the above;
  • the cloud server is configured to execute the driving management method according to any one of the foregoing.
  • the system further includes: a mobile terminal device, configured to:
  • an electronic device including: a memory for storing executable instructions;
  • a processor configured to communicate with the memory to execute the executable instructions to complete the driving management method according to any one of the above.
  • a computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions for implementing the driving management method according to any one of the foregoing.
  • a computer storage medium for storing computer-readable instructions, and when the instructions are executed, the driving management method according to any one of the foregoing is implemented.
  • a video stream of a driver of a vehicle is collected by controlling a camera component provided on the vehicle; a feature matching result between a face portion of at least one image in the video stream and at least one pre-stored face image in a data set is acquired; and if the feature matching result indicates that the feature matching is successful, the vehicle is controlled to execute the operation instruction received by the vehicle. This reduces the dependence of driver recognition on the network, enables feature matching even without a network, and further improves the safety of the vehicle.
  • FIG. 1 is a flowchart of a driving management method according to some embodiments of the present application.
  • FIG. 2 is a flowchart of driver fatigue state detection based on a video stream in some embodiments of the present application
  • FIG. 3 is a flowchart of detecting a driver's distraction state based on a video stream in some embodiments of the present application
  • FIG. 4 is a flowchart of detecting a predetermined distracted motion of a driver based on a video stream in some embodiments of the present application
  • FIG. 5 is a flowchart of a driver state detection method according to some embodiments of the present application.
  • FIG. 6 is a flowchart of an application example of a driving management method according to some embodiments of the present application.
  • FIG. 7 is a schematic structural diagram of a vehicle-mounted intelligent system according to some embodiments of the present application.
  • FIG. 8 is a flowchart of a driving management method according to another embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
  • FIG. 10 is a flowchart of using a driving management system according to some embodiments of the present application.
  • FIG. 11 is a flowchart of using a driving management system according to another embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of an application example of an electronic device according to some embodiments of the present application.
  • the embodiments of the present application can be applied to electronic devices such as a terminal device, a computer system, and a server, and can be operated with many other general or special-purpose computing system environments or configurations.
  • Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, small computer systems, mainframe computer systems, and distributed cloud computing environments including any of the above systems.
  • Electronic devices such as a terminal device, a computer system, and a server can be described in the general context of computer system executable instructions (such as program modules) executed by a computer system.
  • program modules may include routines, programs, target programs, components, logic, data structures, and so on, which perform specific tasks or implement specific abstract data types.
  • the computer system / server can be implemented in a distributed cloud computing environment. In a distributed cloud computing environment, tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules may be located on a local or remote computing system storage medium including a storage device.
  • FIG. 1 is a flowchart of a driving management method according to some embodiments of the present application.
  • the execution subject of the driving management method in this embodiment may be a vehicle-end device.
  • the execution subject may be an in-vehicle intelligent system or other devices with similar functions.
  • the method in this embodiment includes:
  • the camera component is set at a position inside the vehicle from which the driving position can be photographed. The position of the camera component can be fixed or not fixed: when not fixed, the position of the camera component can be adjusted for different drivers; when fixed, the lens direction of the camera component can be adjusted for different drivers.
  • the operation 110 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a video acquisition unit 71 executed by a processor.
  • a pre-stored face image of at least one registered driver is stored in the data set, that is, a face image corresponding to the registered driver is stored in the data set as the pre-stored face image.
  • the face part in the image can be obtained through face detection (for example, neural-network-based face detection); feature matching can then be performed between the face part and the pre-stored face images in the data set, for example using a convolutional neural network: the features of the face part and the features of each pre-stored face image are extracted separately and then matched, so as to identify the pre-stored face image corresponding to the face and thereby determine the identity of the driver in the collected image, as the sketch below illustrates.
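A minimal sketch of this matching step, assuming a hypothetical face detector `detect_face` and CNN feature extractor `extract_features` (neither is specified by the patent), with cosine similarity over normalized feature vectors as the matching criterion:

```python
import numpy as np

# Hypothetical models standing in for the detection and feature networks the
# text describes; any face detector / CNN feature extractor could be used.
def detect_face(image: np.ndarray) -> np.ndarray:
    """Return the cropped face region of `image` (assumed single driver face)."""
    ...

def extract_features(face: np.ndarray) -> np.ndarray:
    """Return an L2-normalized feature vector for a face crop."""
    ...

def match_driver(image, dataset, threshold=0.6):
    """Match the face in `image` against pre-stored face features.

    `dataset` maps driver identity -> pre-stored feature vector.
    Returns (identity, score) of the best match, or (None, score) on failure.
    """
    face = detect_face(image)
    query = extract_features(face)
    best_id, best_score = None, -1.0
    for identity, stored in dataset.items():
        # Cosine similarity reduces to a dot product for normalized vectors.
        score = float(np.dot(query, stored))
        if score > best_score:
            best_id, best_score = identity, score
    if best_score >= threshold:
        return best_id, best_score   # feature matching successful
    return None, best_score          # feature matching failed
```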
  • the operation 120 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a result acquisition unit 72 executed by the processor.
  • the feature matching result includes two cases: successful feature matching and unsuccessful feature matching.
  • if the feature matching is successful, it indicates that the driver of the vehicle is a registered driver and can control the vehicle; at this time, the vehicle is controlled to execute the received operation instructions (operation instructions issued by the driver).
  • the operation 130 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by an operation unit 73 executed by the processor.
  • In the embodiments of the present application, a video stream of a driver of the vehicle is collected by controlling a camera component provided on the vehicle; a feature matching result between a face portion of at least one image in the video stream and at least one pre-stored face image in a data set is acquired; and if the feature matching result indicates that the feature matching is successful, the vehicle is controlled to execute the operation instructions received by the vehicle. This reduces the reliance of driver recognition on the network, enables feature matching without the network, and further improves vehicle security.
  • the driving management method further includes:
  • the data set is usually stored in a cloud server.
  • face matching, however, needs to be implemented on the vehicle side. To be able to match faces even when there is no network, the data set can be downloaded from the cloud server while the network is available and saved on the vehicle side. Then, even without a network and unable to communicate with the cloud server, face matching can still be achieved on the vehicle side, and it is also convenient for the vehicle side to manage the data set.
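A download-and-cache sketch of this offline-capable flow; the cache path and the `fetch_from_cloud` callable are assumptions for illustration, not details from the patent:

```python
import json
from pathlib import Path

CACHE = Path("/var/vehicle/face_dataset.json")  # hypothetical on-vehicle path

def load_dataset(fetch_from_cloud):
    """Prefer a fresh data set when the network allows; fall back to the cache.

    `fetch_from_cloud` is a callable returning the data set as a dict and
    raising OSError on network failure (endpoint and format are assumptions).
    """
    try:
        dataset = fetch_from_cloud()
        CACHE.write_text(json.dumps(dataset))   # cache for offline matching
        return dataset
    except OSError:
        if CACHE.exists():                      # no network: use cached copy
            return json.loads(CACHE.read_text())
        raise RuntimeError("no network and no cached data set")
```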
  • the driving management method further includes:
  • if the feature matching result indicates that the feature matching is successful, the identity information of the driver of the vehicle is obtained according to the pre-stored face image with which the features are successfully matched;
  • when the feature matching is successful, it means that the driver is a registered driver; the corresponding identity information can be obtained from the data set, and the image and the identity information can be sent to the cloud server, so that real-time tracking of the driver can be established (for example, when and where a certain driver drives a certain vehicle). Since the image is obtained from the video stream, in the presence of a network the image can be uploaded to the cloud server in real time, so as to realize analysis, statistics, and/or management of the driver's driving state.
  • the driving management method may further include:
  • if the feature matching result indicates that the feature matching is successful, the identity information of the driver of the vehicle is obtained according to the pre-stored face image with which the features are successfully matched;
  • since the face matching process is based on the face part in the image, when sending the image to the cloud server, only the face part obtained by image segmentation may be sent, which is beneficial to reducing the transmission burden on the vehicle side. After the cloud server receives the intercepted face part and the identity information, it can store the face part in the data set as a new face image of the driver, either in addition to or replacing the existing face image, to serve as the basis for the next face recognition.
  • the driving management method may further include: acquiring a living body detection result of the acquired image;
  • Operation 130 may include:
  • the vehicle is controlled to execute the operation instruction received by the vehicle.
  • the living body detection is used to determine whether the image is from a real person (or a living person), and the identity verification of the driver can be made more accurate through the living body detection.
  • This embodiment does not limit the specific method of living body detection; for example, it may be implemented by three-dimensional depth analysis of the image, facial optical-flow analysis, Fourier spectrum analysis, edge or reflection security-clue analysis, comprehensive analysis of multiple image frames in the video stream, or other methods, which are not repeated here.
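As one hedged example of a multi-frame cue, a blink check over the per-frame eye-opening degree (thresholds are illustrative assumptions; real liveness detection would combine several of the analyses listed above):

```python
def liveness_by_blink(eye_open_series, open_thr=0.3, min_blinks=1):
    """Crude multi-frame liveness cue: a printed photo never blinks.

    `eye_open_series` is the per-frame normalized eye-opening degree over a
    short observation window (threshold values here are illustrative only).
    """
    blinks, was_open = 0, eye_open_series[0] > open_thr
    for v in eye_open_series[1:]:
        is_open = v > open_thr
        if was_open and not is_open:   # open -> closed transition = one blink
            blinks += 1
        was_open = is_open
    return blinks >= min_blinks
```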
  • the pre-stored face image in the data set is correspondingly provided with driving authority
  • the driving management method may further include: if the feature matching result indicates that the feature matching is successful, obtaining the driving authority corresponding to the pre-stored face image of the successful feature matching;
  • Operation 130 may include controlling the vehicle to execute an operation instruction received by the vehicle within the authority range.
  • In this way, the safety of the vehicle can be improved, and a driver with higher authority is guaranteed greater control rights, which can improve the user experience.
  • the setting of different permissions can be distinguished by limiting the operating time and / or the operating range. For example, some drivers can drive only during the day or at a specific time, while other drivers can drive all day. Or, some drivers may use the in-car entertainment equipment while driving the vehicle, while other drivers may only drive.
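A sketch of how such per-driver permissions might be represented and checked, assuming illustrative field names (the patent only requires that permissions constrain operating time and/or operating range):

```python
from dataclasses import dataclass, field
from datetime import time

@dataclass
class DrivingAuthority:
    """Per-driver permissions stored alongside a pre-stored face image."""
    start: time = time(0, 0)               # earliest permitted driving time
    end: time = time(23, 59)               # latest permitted driving time
    allowed_ops: set = field(default_factory=lambda: {"drive"})

    def permits(self, op: str, now: time) -> bool:
        # An operation is allowed only within the permitted time window.
        return op in self.allowed_ops and self.start <= now <= self.end

# e.g. a daytime-only driver who may not use in-car entertainment:
daytime = DrivingAuthority(time(7, 0), time(19, 0), {"drive"})
assert daytime.permits("drive", time(12, 0))
assert not daytime.permits("entertainment", time(12, 0))
```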
  • the driving management method further includes:
  • an early-warning prompt for an abnormal driving state and/or intelligent driving control is performed.
  • the results of driver state detection may be output.
  • intelligent driving control of the vehicle may be performed according to the result of the driver state detection.
  • the result of the driver state detection may be output, and at the same time, intelligent driving control of the vehicle may be performed according to the result of the driver state detection.
  • the results of driver state detection may be output locally and / or the results of driver state detection may be output remotely.
  • the result of the driver state detection is output locally, that is, the driver state detection result is output through the driver state detection device or the driver monitoring system, or the driver state detection result is output to the central control system in the vehicle, so that the vehicle is based on the As a result of the driver state detection, intelligent driving control is performed on the vehicle.
  • Remotely outputting the result of driver state detection means, for example, that the result may be sent to a cloud server or a management node, so that the cloud server or management node collects, analyzes, and/or manages the results of driver state detection, or remotely controls the vehicle based on the result of the driver state detection.
  • the operations of early-warning prompting for an abnormal driving state and/or intelligent driving control may be performed by a processor calling corresponding instructions stored in a memory, or may be performed by an output module and/or an intelligent driving control module run by the processor.
  • the foregoing operations may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by a driver state detection unit operated by the processor.
  • the driver state detection may include, but is not limited to, any one or more of the following: driver fatigue state detection, driver distraction state detection, driver predetermined distraction motion detection, and driver gesture detection.
  • the driver state detection result accordingly includes, but is not limited to, any one or more of the following: the driver fatigue state detection result, the driver distraction state detection result, the driver predetermined distraction action detection result, and the driver gesture detection result.
  • the predetermined distraction action may be any distraction action that may distract the driver ’s attention, such as: smoking action, drinking action, eating action, phone call action, entertainment action, and the like.
  • eating actions include actions such as eating fruits and snacks
  • entertaining actions include actions such as sending messages, playing games, and singing songs by any electronic device.
  • electronic devices include, for example, mobile phones, handheld computers, game machines, and so on.
  • driver state detection can be performed on the driver image and the result of the driver state detection output, thereby facilitating real-time detection of the driver's driving state while driving; when the driver's driving state is poor, corresponding measures can be taken in time to ensure safe driving and avoid road traffic accidents.
  • FIG. 2 is a flowchart of detecting driver fatigue state based on a video stream in some embodiments of the present application.
  • the embodiment shown in FIG. 2 may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by a state detection unit run by the processor.
  • a method for detecting driver fatigue status based on a video stream may include:
  • the at least partial region of the human face may include at least one of a driver's face eye region, a driver's face mouth region, and an entire region of the driver's face.
  • the state information of at least a part of the face may include any one or more of the following: eye opening and closing state information, and mouth opening and closing state information.
  • the above eye open/closed state information may be used to perform closed-eye detection of the driver, for example, detecting whether the driver's eyes are half-closed ("half" indicates a state of incompletely closed eyes, such as squinting in a drowsy state), whether the eyes are closed, the number of eye closures, the amplitude of eye closure, and so on.
  • the eye opening and closing state information may be information obtained by normalizing the height of the eyes opened.
  • the mouth opening and closing state information can be used to perform yawn detection of the driver, for example, detecting whether the driver yawns, the number of yawns, and the like.
  • the mouth opening and closing state information may be information obtained by normalizing the height of the mouth opening.
  • face keypoint detection may be performed on the driver image, and eye keypoints in the detected face keypoints may be directly used for calculation, so as to obtain eye opening and closing state information according to the calculation result.
  • an eye key point (for example, coordinate information of the eye key point in the driver image) in the face key point may be first used to locate the eyes in the driver image to obtain an eye image, and Use this eye image to obtain the upper eyelid line and the lower eyelid line, and by calculating the interval between the upper eyelid line and the lower eyelid line, obtain the eye opening and closing state information.
  • the mouth key points in the face key points can be directly used for calculation, so as to obtain the mouth opening and closing state information according to the calculation results.
  • the mouth key points among the face key points (for example, the coordinate information of the mouth key points in the driver image) may first be used to locate the mouth in the driver image to obtain a mouth image; an upper lip line and a lower lip line are obtained from the mouth image, and the mouth opening and closing state information is obtained by calculating the interval between them.
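Both computations reduce to measuring the gap between an upper and a lower contour line and normalizing it; a sketch, with contour lines represented as arrays of (x, y) key points and threshold values that are illustrative assumptions:

```python
import numpy as np

def opening_degree(upper_line, lower_line, left_corner, right_corner):
    """Normalized opening degree of an eye or mouth from its key points.

    The mean vertical gap between the upper and lower contour lines is
    normalized by the corner-to-corner width, making the value roughly
    scale-invariant (an aspect-ratio approximation of the eyelid/lip-line
    interval described in the text).
    """
    gap = float(np.mean(np.abs(np.asarray(upper_line)[:, 1]
                               - np.asarray(lower_line)[:, 1])))
    width = float(np.linalg.norm(np.asarray(right_corner, dtype=float)
                                 - np.asarray(left_corner, dtype=float)))
    return gap / width if width > 0 else 0.0

# Per frame: eyes counted as closed / mouth as yawning below/above thresholds
# (threshold values are illustrative assumptions).
def is_eye_closed(degree):
    return degree < 0.15

def is_yawning(degree):
    return degree > 0.6
```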
  • the indicators used to characterize the fatigue state of the driver may include, but are not limited to, any one or more of the following: the degree of eyes closed, the degree of yawning.
  • the parameter value of the degree of eye closure may include, but is not limited to, any one or more of the following: number of eye closures, eye-closure frequency, eye-closure duration, eye-closure amplitude, number of half-closures, and half-closure frequency; and/or, the parameter value of the degree of yawning may include, but is not limited to, any one or more of the following: yawn state, number of yawns, yawn duration, and yawn frequency.
  • the result of the driver fatigue state detection may include: no fatigue state detected, or fatigue driving state detected.
  • the result of the driver fatigue state detection may also be a degree of fatigue driving, where the degree of fatigue driving may include a normal driving level (also referred to as a non-fatigue driving level) and a fatigue driving level.
  • the fatigue driving level may be one level, or may be divided into multiple different levels.
  • the above fatigue driving level may be divided into: a prompt fatigue driving level (also referred to as a mild fatigue driving level) and a warning fatigue driving level (also referred to as a severe fatigue driving level);
  • the degree of fatigue driving can be divided into more levels, such as: mild fatigue driving level, moderate fatigue driving level, and severe fatigue driving level. This embodiment does not limit the different levels included in the degree of fatigue driving.
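A sketch of mapping such indicator values to a fatigue level; the PERCLOS-style closure ratio and all thresholds are assumptions, since the patent leaves the indicator-to-level rules open:

```python
def fatigue_level(closed_frames, total_frames, yawns,
                  ratio_warn=0.25, ratio_prompt=0.12, yawn_warn=3):
    """Map eye-closure and yawning statistics to a fatigue driving level.

    `closed_frames / total_frames` is a PERCLOS-style closure ratio over an
    observation window; threshold values are illustrative only.
    """
    closure_ratio = closed_frames / max(total_frames, 1)
    if closure_ratio >= ratio_warn or yawns >= yawn_warn:
        return "warning fatigue driving level"   # severe fatigue
    if closure_ratio >= ratio_prompt or yawns >= 1:
        return "prompt fatigue driving level"    # mild fatigue
    return "normal driving level"                # non-fatigue driving level
```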
  • FIG. 3 is a flowchart of detecting a distracted state of a driver based on a video stream in some embodiments of the present application.
  • the embodiment shown in FIG. 3 may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by a state detection unit run by the processor.
  • a method for detecting driver distraction based on a video stream may include:
  • the above-mentioned face orientation information may be used to determine whether the driver's face direction is normal, for example, determining whether the driver's side face is facing forward or whether he is turning back.
  • the face orientation information may be an angle between the front of the driver's face and the front of the vehicle being driven by the driver.
  • the above-mentioned line-of-sight direction information may be used to determine whether the line-of-sight direction of the driver is normal, for example, determining whether the driver is looking ahead, etc., and the line-of-sight direction information may be used to determine whether the line of sight of the driver has deviated.
  • the line of sight direction information may be an angle between the line of sight of the driver and the front of the vehicle being driven by the driver.
  • the index used to characterize the driver's distracted state may include, but is not limited to, any one or more of the following: the degree of deviation of the face orientation, and the degree of deviation of the line of sight.
  • the parameter value of the degree of face-orientation deviation may include, but is not limited to, any one or more of the following: the number of turns, the duration of turning, and the frequency of turning; and/or, the parameter value of the degree of line-of-sight deviation may include, but is not limited to, any one or more of the following: the line-of-sight deviation angle, the line-of-sight deviation duration, and the line-of-sight deviation frequency.
  • the above degree of line-of-sight deviation may include, for example, at least one of whether the line of sight is deviated and whether the line of sight is severely deviated; and the above degree of face-orientation deviation (also referred to as the degree of turning the face or turning the head) may include, for example, at least one of whether the head is turned, whether the turn is short-lived, and whether the turn is long-lasting.
  • when it is determined that the face orientation information is greater than the first orientation, and the phenomenon of being greater than the first orientation persists for N1 frames, it is determined that the driver has had a long-time large-angle head turn; a long-time large-angle turn can be recorded, and the duration of the current turn can also be recorded. When it is determined that the face orientation information is not greater than the first orientation but greater than the second orientation, and this phenomenon persists for N1 frames (for example, 9 frames or 10 frames), it is determined that the driver has had a long-time small-angle head turn; a small-angle turn can be recorded, and the duration of this turn can also be recorded.
  • when it is determined that the included angle between the line-of-sight direction information and the front of the car is greater than the first included angle, and the phenomenon of being greater than the first included angle persists for N2 frames (for example, 8 frames or 9 frames), it is determined that the driver has had a severe line-of-sight deviation; a severe line-of-sight deviation can be recorded, and its duration can also be recorded. When it is determined that the included angle between the line-of-sight direction information and the front of the car is not greater than the first included angle but greater than the second included angle, and this phenomenon persists for N2 frames, it is determined that the driver has had a line-of-sight deviation; a line-of-sight deviation can be recorded once, and its duration can also be recorded.
  • the values of the first orientation, the second orientation, the first included angle, the second included angle, N1, and N2 may be set according to actual conditions, and the value of the values is not limited in this embodiment.
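A sketch of this frame-persistence rule: an event counts only when the deviation angle exceeds its threshold for at least N consecutive frames (threshold and series values below are illustrative):

```python
def deviation_events(angles, angle_thr, min_frames):
    """Count deviation events that persist for at least `min_frames` frames.

    `angles` holds the per-frame face-orientation (or line-of-sight) deviation
    angle in degrees; an event is recorded only when the threshold is exceeded
    for N consecutive frames, mirroring the N1/N2 persistence rule above.
    Returns (event_count, list_of_event_durations_in_frames).
    """
    events, durations, run = 0, [], 0
    for angle in angles:
        if angle > angle_thr:
            run += 1
        else:
            if run >= min_frames:
                events += 1
                durations.append(run)
            run = 0
    if run >= min_frames:  # close an event still running at sequence end
        events += 1
        durations.append(run)
    return events, durations

# e.g. long-time large-angle turns: threshold = first orientation (assumed 45
# degrees), persistence N1 = 9 frames; this series yields one 9-frame event.
face_yaw_series = [50, 52, 55, 48, 51, 49, 50, 53, 47, 10]
turns, lengths = deviation_events(face_yaw_series, angle_thr=45, min_frames=9)
```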
  • the result of the driver distraction state detection may include, for example: attention focused (the driver's attention is not distracted) and attention distracted; or the result of the driver distraction state detection may be a driver distraction level, which may include, for example: attention focused (the driver's attention is not distracted), attention slightly distracted, attention moderately distracted, and attention severely distracted.
  • the level of driver distraction can be determined by a preset condition that is satisfied by a parameter value of an index used to characterize a driver's distracted state.
  • for example: if neither the line-of-sight deviation angle nor the face-orientation deviation angle reaches the first preset angle, the driver's distraction level is attention focused; if the line-of-sight deviation angle and/or the face-orientation deviation angle is greater than or equal to the first preset angle and the duration is greater than the first preset duration and not greater than the second preset duration, the level is attention slightly distracted; if the line-of-sight deviation angle or the face-orientation deviation angle is greater than or equal to the first preset angle and the duration is greater than the second preset duration and not greater than the third preset duration, the level is attention moderately distracted; if the line-of-sight deviation angle or the face-orientation deviation angle is greater than or equal to the first preset angle and the duration is greater than the third preset duration, the level is attention severely distracted.
  • the first preset duration is shorter than the second preset duration and the second preset duration is shorter than the third preset duration.
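A sketch of this preset-condition classification; the concrete angle threshold and the durations d1 < d2 < d3 (in seconds) are illustrative stand-ins for the preset values:

```python
def distraction_level(deviation_angle, duration_s, angle_thr=15.0,
                      d1=2.0, d2=5.0, d3=10.0):
    """Classify attention by how long a large deviation lasts.

    Follows the preset-condition pattern described above; angle_thr stands in
    for the first preset angle, d1/d2/d3 for the three preset durations.
    """
    if deviation_angle < angle_thr or duration_s <= d1:
        return "attention focused"
    if duration_s <= d2:
        return "attention slightly distracted"
    if duration_s <= d3:
        return "attention moderately distracted"
    return "attention severely distracted"
```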
  • In the embodiments, a parameter value of an index for characterizing the driver's distraction state is determined by detecting the face orientation and/or line-of-sight direction of the driver image, and the result of the driver distraction state detection is determined accordingly, so as to judge whether the driver is driving attentively; by quantifying the driver's distraction state with indices such as the degree of line-of-sight deviation and the degree of head turning, the driver's attentive driving state can be measured objectively and in a timely manner.
  • operation 302 of detecting the face orientation and / or the line of sight direction of the driver image in the video stream may include:
  • Face orientation and / or line of sight detection is performed based on key points of the face.
  • facial keypoints usually include head pose feature information
  • face orientation detection is performed based on the facial key points to obtain face orientation information, including: obtaining feature information of the head pose according to the facial key points, and determining face orientation (also called head pose) information according to the feature information of the head pose, where the face orientation information can indicate the direction and angle of the face's rotation, and the rotation direction can be turning left, turning right, lowering the head, and/or raising the head. The face orientation (head pose) may be expressed as (yaw, pitch), where yaw represents the horizontal deflection angle (yaw angle) and pitch represents the vertical deflection angle (pitch angle) of the head in normalized spherical coordinates (the camera coordinate system where the camera is located).
  • the horizontal deflection angle and / or the vertical deflection angle is greater than a preset angle threshold and the duration is greater than a preset time threshold, it may be determined that the driver's distracted state detection result is inattention.
  • a corresponding neural network may be utilized to obtain face orientation information of at least one driver image.
  • the detected key points of the face may be input to a first neural network, and the first neural network may extract the characteristic information of the head pose based on the received key points of the face and input the second neural network; Head posture estimation is performed based on the feature information of the head posture, and face orientation information is obtained.
  • existing neural networks for extracting head-pose feature information and for estimating face orientation, which are mature and have good real-time performance, may be used to obtain the face orientation information; for the video captured by the camera, the face orientation information corresponding to at least one image frame (that is, at least one driver image) in the video can thus be detected accurately and in a timely manner, which helps improve the accuracy of determining the driver's degree of attention.
  • the line-of-sight direction detection is performed according to the key points of the face to obtain the line-of-sight direction information, including: determining the pupil edge position according to the eye image positioned by the eye key points among the face key points, and calculating the pupil center position according to the pupil edge position; and calculating the line-of-sight direction information according to the pupil center position and the eye center position. For example, a vector from the eye center position to the pupil center position in the eye image can be calculated, and this vector can be used as the line-of-sight direction information.
  • the direction of the line of sight can be used to determine whether the driver is focusing on driving.
  • the line-of-sight direction can be expressed as (yaw, pitch), where yaw represents the horizontal deflection angle (yaw angle) and pitch represents the vertical deflection angle (pitch angle) of the line of sight in normalized spherical coordinates (the camera coordinate system where the camera is located).
  • the horizontal deflection angle and / or the vertical deflection angle is greater than a preset angle threshold and the duration is greater than a preset time threshold, it may be determined that the driver's distracted state detection result is inattention.
  • determining the pupil edge position according to the eye image positioned by the eye key points may be achieved as follows: pupil edge detection is performed on the eye area image based on a third neural network, and the pupil edge position is obtained according to the information output by the third neural network.
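A 2D image-plane sketch of the pupil-center and gaze-vector calculation described above; taking the pupil center as the mean of the detected edge points is an illustrative choice:

```python
import numpy as np

def gaze_direction(pupil_edge_points, eye_center):
    """Line-of-sight vector from eye center to pupil center.

    The pupil center is taken as the mean of the detected pupil edge points;
    the sight-direction information is the vector from the eye center to the
    pupil center, normalized to unit length in the image plane.
    """
    pupil_center = np.mean(np.asarray(pupil_edge_points, dtype=float), axis=0)
    gaze = pupil_center - np.asarray(eye_center, dtype=float)
    norm = np.linalg.norm(gaze)
    return gaze / norm if norm > 0 else gaze

def deviation_deg(gaze, ref=(0.0, -1.0)):
    """Deviation angle of a unit gaze vector against a reference direction."""
    cos = float(np.clip(np.dot(gaze, np.asarray(ref)), -1.0, 1.0))
    return float(np.degrees(np.arccos(cos)))
```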
  • FIG. 4 is a flowchart of detecting a predetermined distracted motion of a driver based on a video stream in some embodiments of the present application.
  • the embodiment shown in FIG. 4 may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by a state detection unit run by the processor.
  • a method for detecting a driver's predetermined distraction based on a video stream may include:
  • predetermined distraction action detection is performed by detecting a target object corresponding to the predetermined distraction action and determining, according to the detection frame of the detected target object, whether the predetermined distraction action occurs, thereby determining whether the driver is distracted.
  • the operations 402 to 404 may include: performing face detection on the driver image via a fourth neural network to obtain a face detection frame, and extracting feature information of the face detection frame; and determining, via the fourth neural network, whether a smoking action occurs based on the feature information of the face detection frame.
  • the above operations 402 to 404 may include: detecting, via a fifth neural network, preset target objects corresponding to the eating, drinking, calling, and entertainment actions on the driver image to obtain detection frames of the preset target objects, where the preset target objects may include: hands, mouth, eyes, and target objects, and the target objects may include, but are not limited to, any one or more of the following: containers, food, and electronic devices; and determining the detection result of the predetermined distraction action according to the detection frames of the preset target objects.
  • the detection result of the predetermined distraction action may include one of the following: no eating action / drinking action / calling action / entertainment action, eating action, drinking action, calling action, or entertainment action.
  • determining the detection result of the predetermined distraction action according to the detection frames of the preset target objects may include: detecting whether the detection frame of the hand, the detection frame of the mouth, the detection frame of the eye, and the detection frame of a target object are present, and determining the detection result of the predetermined distraction action according to whether the detection frame of the hand overlaps the detection frame of the target object, the type of the target object, and whether the distance between the detection frame of the target object and the detection frame of the mouth or the detection frame of the eye satisfies preset conditions.
  • if the detection frame of the hand overlaps the detection frame of the target object, the type of the target object is a container or food, and the detection frame of the target object overlaps the detection frame of the mouth, it is determined that an eating or drinking action occurs; and/or, if the detection frame of the hand overlaps the detection frame of the target object, the type of the target object is an electronic device, and the minimum distance between the detection frame of the target object and the detection frame of the mouth is less than a first preset distance, or the minimum distance between the detection frame of the target object and the detection frame of the eye is less than a second preset distance, it is determined that an entertainment action or a calling action occurs.
  • if the detection frame of the hand, the detection frame of the mouth, and the detection frame of any target object are not detected simultaneously, and the detection frame of the hand, the detection frame of the eye, and the detection frame of any target object are not detected simultaneously, the detection result of the distraction action is determined as no eating action, drinking action, calling action, or entertainment action detected; and/or, if the detection frame of the hand does not overlap the detection frame of the target object, the detection result is determined as no eating, drinking, calling, or entertainment action detected; and/or, if the type of the target object is a container or food and there is no overlap between the detection frame of the target object and the detection frame of the mouth, and/or the type of the target object is an electronic device and the minimum distance between the detection frame of the target object and the detection frame of the mouth is not less than the first preset distance, or the minimum distance between the detection frame of the target object and the detection frame of the eye is not less than the second preset distance, the detection result is determined as no eating, drinking, calling, or entertainment action detected.
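A sketch of these overlap-and-distance rules over axis-aligned detection frames; the (x1, y1, x2, y2) box format and the two preset distances are illustrative assumptions:

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test; boxes are (x1, y1, x2, y2)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def box_distance(a, b):
    """Minimum gap between two boxes (0 if they touch or overlap)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx * dx + dy * dy) ** 0.5

def classify_action(hand, mouth, eye, obj, obj_type, d_mouth=30, d_eye=30):
    """Apply the overlap/distance rules above to name the distraction action.

    Boxes come from the (assumed) detector, or None if not detected; d_mouth
    and d_eye stand in for the first and second preset distances (pixels).
    """
    if hand is None or obj is None or not boxes_overlap(hand, obj):
        return "no action detected"
    if obj_type in ("container", "food"):
        if mouth is not None and boxes_overlap(obj, mouth):
            return "eating/drinking action"
    elif obj_type == "electronic device":
        if mouth is not None and box_distance(obj, mouth) < d_mouth:
            return "calling action"
        if eye is not None and box_distance(obj, eye) < d_eye:
            return "entertainment action"
    return "no action detected"
```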
  • the method may further include: if the result of the driver's predetermined distraction action detection is that a predetermined distraction action is detected, prompting the detected predetermined distraction action, for example: when a smoking action is detected, prompting that smoking is detected; when a drinking action is detected, prompting that drinking is detected; when a calling action is detected, prompting that a call is detected.
  • the operation of the predetermined distracted action detected by the prompt may be executed by the processor by calling a corresponding instruction stored in the memory, or may be performed by a prompt unit executed by the processor.
  • the index used to characterize the degree of driver distraction may include, but is not limited to, any one or more of the following: the number of predetermined distraction actions, the duration of the predetermined distraction action, and the frequency of the predetermined distraction action. For example: the number of smoking actions, duration, frequency; the number of drinking actions, duration, frequency; the number of phone calls, duration, frequency; and so on.
  • the result of the driver's predetermined distracted motion detection may include: no predetermined distracted motion is detected, and the detected predetermined distracted motion is included.
  • the result of the driver's predetermined distraction detection may also be a distraction level, for example, the distraction level may be divided into: an undistracted level (also referred to as a focused driving level), and a distracted driving level (also May be referred to as a mildly distracted driving level) and a warning distracted driving level (also may be referred to as a severely distracted driving level).
  • the level of distraction can also be divided into more levels, such as: undistracted driving level, mildly distracted driving level, moderately distracted driving level, and severely distracted driving level.
  • the distraction level of at least one of the embodiments may also be divided according to other situations, and is not limited to the above-mentioned level division.
  • the distraction level may be determined by the preset conditions satisfied by the parameter values of the index used to characterize the driver's degree of distraction. For example: if no predetermined distraction action is detected, the distraction level is the undistracted level (also referred to as the focused driving level); if the duration of the detected predetermined distraction action is less than a first preset duration and its frequency is less than a first preset frequency, the distraction level is the mildly distracted driving level; if the duration of the detected predetermined distraction action is greater than the first preset duration and/or its frequency is greater than the first preset frequency, the distraction level is the severely distracted driving level (an illustrative thresholding sketch follows below).
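  • The level division above is a thresholding of the duration/frequency parameter values. A minimal sketch, assuming duration is measured in seconds and frequency in events per minute; the concrete preset values are hypothetical.

```python
# Illustrative thresholding of distraction-action statistics into a level;
# the first preset duration and frequency are hypothetical values.

def distraction_level(action_detected, duration_s, freq_per_min,
                      first_preset_duration_s=5.0, first_preset_freq=2.0):
    if not action_detected:
        return "focused_driving_level"          # undistracted level
    if duration_s > first_preset_duration_s or freq_per_min > first_preset_freq:
        return "warning_distracted_driving"     # severely distracted driving level
    return "prompt_distracted_driving"          # mildly distracted driving level
```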
  • the driver state detection method may further include: outputting distraction prompt information according to a result of the driver's distracted state detection and / or a result of the driver's predetermined distracted motion detection.
  • the output distraction prompt information is used to remind the driver to concentrate on driving.
  • the foregoing operation of outputting the distraction prompt information according to the result of the driver's distracted state detection and / or the result of the driver's predetermined distracted motion detection may be executed by the processor calling a corresponding instruction stored in the memory, or It may be executed by a prompt unit executed by the processor.
  • FIG. 5 is a flowchart of a driver state detection method according to some embodiments of the present application.
  • the embodiment shown in FIG. 5 may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by a state detection unit run by the processor.
  • the driver state detection method in this embodiment includes:
  • each driver state level corresponds to a preset condition, and the result of driver fatigue state detection, the result of driver distraction state detection, and the result of driver predetermined distraction-action detection can be judged against these preset conditions in real time.
  • the driver status level corresponding to the satisfied preset conditions may be determined as a result of the driver status detection of the driver.
  • the driver state level may include, for example, a normal driving state (also referred to as a focused driving level), a prompt driving state (the driving state is poor), and a warning driving state (the driving state is very poor).
  • the foregoing embodiment shown in FIG. 5 may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by an output module run by the processor.
  • the preset conditions corresponding to the normal driving state may include:
  • condition 1: the result of the driver fatigue state detection is that no fatigue state is detected, or the level is the non-fatigued driving level;
  • condition 2: the result of the driver distraction state detection is that the driver's attention is focused;
  • condition 3: the result of the driver's predetermined distraction-action detection is that no predetermined distraction action is detected, or the level is the undistracted level.
  • when all of the above preset conditions are satisfied, the driving state level is the normal driving state (also referred to as the focused driving level).
  • the preset conditions corresponding to the prompt driving state may include:
  • the result of the driver fatigue state detection is: the prompt fatigue driving level (also referred to as the mild fatigue driving level); and/or,
  • the result of the driver's predetermined distraction-action detection is: the prompt distracted driving level (also referred to as the mild distracted driving level).
  • when any of the above preset conditions is satisfied, the driving state level is the prompt driving state (the driving state is poor).
  • the preset conditions corresponding to the warning driving state may include:
  • the result of the driver fatigue state detection is: the warning fatigue driving level (also referred to as the severe fatigue driving level); and/or,
  • the result of the driver's predetermined distraction-action detection is: the warning distracted driving level (also referred to as the severe distracted driving level).
  • when any of the above preset conditions is satisfied, the driving state level is the warning driving state (the driving state is very poor; a combined sketch follows below).
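  • Taken together, the preset conditions above amount to taking the worst of the three detection results. A sketch of that combination, with illustrative level strings that mirror the naming used here but are not defined by the application:

```python
# Illustrative combination of the three detection results into one
# driver state level.

SEVERE = {"warning_fatigue_driving", "warning_distracted_driving"}
MILD = {"prompt_fatigue_driving", "prompt_distracted_driving", "driver_distracted"}

def driver_state_level(fatigue_result, distraction_result, action_result):
    results = {fatigue_result, distraction_result, action_result}
    if results & SEVERE:
        return "warning_driving_state"   # the driving state is very poor
    if results & MILD:
        return "prompt_driving_state"    # the driving state is poor
    return "normal_driving_state"        # focused driving level
```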
  • the driver state detection method may further include:
  • a control operation corresponding to the result of the driver state detection is performed.
  • the execution of the control operation corresponding to the result of the driver state detection may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by a control unit executed by the processor.
  • performing a control operation corresponding to the result of the driver state detection may include at least one of the following:
  • if the determined result of the driver state detection satisfies a predetermined prompt/alarm condition, outputting prompt/alarm information corresponding to that condition, for example: alerting the driver by sound (e.g., voice or ringing), light (e.g., a lamp turning on or flashing), or vibration, so as to remind the driver, prompt the driver to return attention to driving, or encourage the driver to rest, thereby achieving safe driving and avoiding road traffic accidents; and/or,
  • if the determined result of the driver state detection satisfies a predetermined driving-mode switching condition, for example the preset condition corresponding to the warning driving state (the driving state is very poor) is satisfied, or the driving state level is the warning distracted driving level (also referred to as the severe distracted driving level), switching the driving mode to the automatic driving mode to achieve safe driving and avoid road traffic accidents; at the same time, the driver may also be reminded by sound (e.g., voice or ringing), light (e.g., a lamp turning on or flashing), or vibration, so as to prompt the driver to return attention to driving or encourage the driver to rest; and/or,
  • if the determined result of the driver state detection satisfies a predetermined information-sending condition, sending predetermined information to a predetermined contact or establishing a communication connection with the predetermined contact; for example, when the driver makes a certain agreed action, it indicates that the driver is in a dangerous state or needs assistance.
  • the driver state detection method may further include: sending at least part of a result of the driver state detection to a cloud server.
  • At least part of the results include: abnormal driving state information determined according to driver state detection.
  • sending part or all of the results of driver state detection to the cloud server allows abnormal driving state information to be backed up. Since a normal driving state does not need to be recorded, this embodiment sends only abnormal driving state information to the cloud server: when the obtained driver state detection results include both normal and abnormal driving state information, part of the results is transmitted, that is, only the abnormal driving state information is sent to the cloud server; when all the results are abnormal driving state information, all the abnormal driving state information is transmitted to the cloud server.
  • the driver state detection method may further include: storing an image or a video segment corresponding to the abnormal driving state information in the video stream on the vehicle side; and / or,
  • the image or video segment corresponding to the abnormal driving state information is saved locally on the vehicle side to realize evidence preservation.
  • Images or video segments corresponding to abnormal driving state information can be uploaded to the cloud server for backup; when the information is needed, it can be downloaded from the cloud server to the vehicle for viewing, or downloaded from the cloud server to other clients for viewing.
  • the driving management method further includes: when the vehicle and the mobile terminal device are in a communication connection state, sending a data set download request to the mobile terminal device;
  • the data set is obtained by the mobile device from the cloud server and sent to the vehicle when the data set download request is received.
  • the mobile terminal device may be a mobile phone, a tablet (PAD), or a terminal device on another vehicle.
  • when the mobile terminal device receives the data set download request, it sends the request to the cloud server, obtains the data set, and sends it to the vehicle.
  • by applying the network of the mobile terminal device (e.g., 2G, 3G, or 4G), the problem that the vehicle is restricted by network availability and cannot download the data set from the cloud server for face matching can be avoided (a relay sketch follows below).
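  • A hypothetical sketch of the relay described above: the vehicle asks the paired mobile terminal for the data set, and the mobile terminal fetches it from the cloud server over its own network. The class and method names model the message flow only; the actual transport is not specified in the application.

```python
# Hypothetical message flow for the relay download.

class MobileTerminal:
    def __init__(self, cloud_server):
        self.cloud = cloud_server          # reachable over e.g. 2G/3G/4G

    def handle_dataset_request(self, vehicle_id):
        # Forward the vehicle's request to the cloud server and relay
        # the returned data set back to the vehicle.
        return self.cloud.download_dataset(vehicle_id)

class VehicleClient:
    def __init__(self, vehicle_id):
        self.vehicle_id = vehicle_id
        self.local_dataset = None

    def request_dataset(self, terminal):
        # Only possible while in a communication connection state with
        # the mobile terminal device.
        self.local_dataset = terminal.handle_dataset_request(self.vehicle_id)
```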
  • the driving management method further includes: if the feature matching result indicates that the feature matching is unsuccessful, refusing to execute the received operation instruction.
  • the unsuccessful feature matching indicates that the driver has not been registered. At this time, in order to protect the rights of the registered driver, the vehicle will refuse to execute the driver's operation instruction.
  • the driving management method further includes:
  • issuing prompt registration information;
  • receiving, according to the prompt registration information, a driver registration request that includes the driver's registered face image, and saving the registered face image;
  • establishing a data set on the vehicle side based on the registered face image, so that face matching can be performed on the vehicle side without downloading the data set from the cloud server.
  • FIG. 6 is a flowchart of an application example of a driving management method according to some embodiments of the present application.
  • the execution subject of the driving management method in this embodiment may be a vehicle-end device.
  • the execution subject may be an in-vehicle intelligent system or another device with similar functions; during registration, the screened face image and the driver ID information are assigned the corresponding driver authority information and stored in a data set;
  • the vehicle client obtains the driver image, and the driver image is subjected to face detection, quality screening, and living body recognition in order.
  • the screened face image to be recognized is matched against all face images in the data set; the face features can be extracted by a neural network.
  • the authority information corresponding to the face image to be identified is determined, and the vehicle action is controlled based on the authority information.
  • the vehicle client performs feature extraction on the image to be recognized and on the face images in the data set to obtain the corresponding face features, performs matching based on the face features, and performs corresponding operations based on the matching result (see the matching sketch below).
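  • A minimal sketch of the vehicle-side matching step, assuming the feature-extraction network is available as an `embed` callable and using cosine similarity with an illustrative 0.6 threshold; neither the metric nor the threshold is fixed by the application.

```python
# Minimal vehicle-side matching sketch; `embed` stands in for the
# feature-extraction neural network.

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(query_image, dataset, embed, threshold=0.6):
    """dataset: iterable of (driver_id, pre_stored_face_image) pairs."""
    query_feat = embed(query_image)
    best_id, best_score = None, -1.0
    for driver_id, face_image in dataset:
        score = cosine_similarity(query_feat, embed(face_image))
        if score > best_score:
            best_id, best_score = driver_id, score
    if best_score >= threshold:
        return {"matched": True, "driver_id": best_id, "score": best_score}
    return {"matched": False, "score": best_score}
```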
  • operation 120 may include: when the vehicle and the cloud server are in a communication connection state, uploading a face portion of at least one image in the video stream to the cloud server, and receiving the feature matching result sent by the cloud server.
  • in this alternative, feature matching is implemented in the cloud server.
  • the vehicle uploads the face portion of at least one image in the video stream to the cloud server, and the cloud server performs feature matching between that face portion and the pre-stored face images in the data set to obtain the feature matching result.
  • the vehicle obtains the feature matching result from the cloud server, which reduces the amount of data transmission between the vehicle and the cloud server, and reduces network overhead.
  • the foregoing program may be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed.
  • the foregoing storage medium includes: a ROM, a RAM, a magnetic disk, or an optical disk, and other media that can store program codes.
  • FIG. 7 is a schematic structural diagram of a vehicle-mounted intelligent system according to some embodiments of the present application.
  • the in-vehicle intelligent system of this embodiment can be used to implement the foregoing driving management method embodiments of the present application.
  • the vehicle-mounted intelligent system of this embodiment includes:
  • the video acquisition unit 71 is configured to control a camera component provided on the vehicle to collect a video stream of a driver of the vehicle.
  • the result obtaining unit 72 is configured to obtain a feature matching result of a face part of at least one image in the video stream and at least one pre-stored face image in the data set.
  • a pre-stored face image of at least one registered driver is stored in the data set.
  • An operation unit 73 is configured to control the vehicle to execute an operation instruction received by the vehicle if the feature matching result indicates that the feature matching is successful.
  • the video stream of the driver is collected by controlling the camera component provided on the vehicle; a feature matching result between the face portion of at least one image in the video stream and at least one pre-stored face image in the data set is acquired;
  • if the feature matching result indicates that feature matching succeeded, the vehicle is controlled to execute the operation instruction received by the vehicle. This reduces the dependence of driver recognition on the network, allows feature matching without a network, and further improves the security of the vehicle.
  • the vehicle-mounted intelligent system further includes:
  • a first data downloading unit configured to send a data set download request to the cloud server when the vehicle and the cloud server are in a communication connection state
  • the data storage unit is used for receiving and storing the data set sent by the cloud server.
  • the vehicle-mounted intelligent system further includes:
  • the first cloud storage unit is configured to: if the feature matching result indicates that feature matching succeeded, obtain the identity information of the driver of the vehicle according to the successfully matched pre-stored face image, and send the image and the identity information to the cloud server.
  • the vehicle-mounted intelligent system may further include:
  • the second cloud storage unit is configured to: if the feature matching result indicates that feature matching succeeded, obtain the identity information of the driver of the vehicle according to the successfully matched pre-stored face image; intercept the face portion in the image; and send the intercepted face portion and the identity information to the cloud server.
  • the in-vehicle intelligent system may further include: a living body detection unit for obtaining a living body detection result of the acquired image;
  • the operation unit 73 is configured to control the vehicle to execute the operation instruction received by the vehicle according to the feature matching result and the living body detection result.
  • the pre-stored face image in the data set is correspondingly provided with driving authority
  • An authority obtaining unit is configured to obtain the driving authority corresponding to the pre-stored face image of the successful feature matching if the feature matching result indicates that the feature matching is successful;
  • the operation unit 73 is further configured to control the vehicle to execute an operation instruction received by the vehicle within the authority range.
  • the vehicle-mounted intelligent system further includes:
  • Status detection unit for detecting driver status based on a video stream
  • An output unit for providing an early warning prompt for an abnormal driving state according to the result of the driver state detection and / or,
  • the intelligent driving control unit is configured to perform intelligent driving control according to the result of the driver state detection.
  • in some of these embodiments, the result of the driver state detection may be output.
  • intelligent driving control of the vehicle may be performed according to the result of the driver state detection.
  • the result of the driver state detection may be output, and at the same time, intelligent driving control of the vehicle may be performed according to the result of the driver state detection.
  • the driver state detection includes any one or more of the following: driver fatigue state detection, driver distraction state detection, driver predetermined distraction motion detection, and driver gesture detection.
  • when the state detection unit performs driver fatigue state detection based on the video stream, it is configured to:
  • detect at least a partial region of the face in at least one image of the video stream to obtain state information of the at least partial face region, the state information including any one or more of the following: eye opening/closing state information and mouth opening/closing state information; and obtain, according to that state information over a period of time, a parameter value of an index used to characterize the driver fatigue state;
  • the result of the driver fatigue state detection is determined according to the parameter value of the index used to characterize the driver fatigue state.
  • the indicator used to characterize the fatigue state of the driver includes any one or more of the following: the degree of eyes closed and the degree of yawning.
  • the parameter value of the degree of eye closure includes any one or more of the following: the number of eye closures, the eye-closure frequency, the eye-closure duration, the eye-closure amplitude, the number of half-closures, and the half-closure frequency; and/or,
  • the parameter value of the degree of yawning includes any one or more of the following: yawning state, number of yawns, yawn duration, and yawn frequency (an illustrative statistics sketch follows below).
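  • As an illustration of how such parameter values can be accumulated, the following sketch computes eye-closure statistics from per-frame closed/open flags over a window; the frame rate and the event definitions are assumptions.

```python
# Illustrative accumulation of eye-closure statistics over a window of
# per-frame flags (True = eyes closed); the 25 fps frame rate is assumed.

def eye_closure_stats(closed_flags, fps=25.0):
    if not closed_flags:
        return {}
    closures, longest_run, run = 0, 0, 0
    for closed in closed_flags:
        if closed:
            run += 1
            if run == 1:
                closures += 1            # a new closure event begins
            longest_run = max(longest_run, run)
        else:
            run = 0
    window_s = len(closed_flags) / fps
    return {
        "closure_count": closures,                  # number of eye closures
        "closure_frequency": closures / window_s,   # closures per second
        "longest_closure_s": longest_run / fps,     # eye-closure duration
        "closed_fraction": sum(closed_flags) / len(closed_flags),
    }
```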
  • when the state detection unit performs driver distraction state detection based on the video stream, it is configured to:
  • perform face orientation and/or gaze direction detection on the driver image in the video stream, and determine, according to the face orientation information and/or gaze direction information over a period of time, a parameter value of an index used to characterize the driver's distraction state, the index including any one or more of the following: the degree of face-orientation deviation and the degree of gaze deviation;
  • the result of detecting the driver's distraction state is determined according to a parameter value of an index for characterizing the driver's distraction state.
  • the parameter value of the degree of face-orientation deviation includes any one or more of the following: the number of head turns, the head-turn duration, and the head-turn frequency; and/or,
  • the parameter value of the degree of gaze deviation includes any one or more of the following: the gaze deviation angle, the gaze deviation duration, and the gaze deviation frequency.
  • when the state detection unit detects the face orientation and/or gaze direction of the driver image in the video stream, it is configured to:
  • detect the face key points of the driver image in the video stream, and perform face orientation and/or gaze direction detection based on the face key points.
  • when the state detection unit performs face orientation detection according to the face key points, it is configured to:
  • obtain feature information of the head posture according to the face key points, and determine the face orientation information based on that feature information (see the head-pose sketch below).
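  • One common way to realize the head-posture step is to fit a generic 3D face model to the detected 2D key points, for example with OpenCV's solvePnP, and read the face orientation off the recovered rotation. This is a sketch under that assumption; the model points, camera intrinsics, and Euler-angle convention are rough illustrative choices, and the application does not prescribe this particular method.

```python
# Sketch: recover head posture from 2D face key points with a generic
# 3D face model via OpenCV solvePnP.

import cv2
import numpy as np

MODEL_POINTS = np.array([          # generic 3D landmark positions (mm)
    (0.0, 0.0, 0.0),               # nose tip
    (0.0, -63.6, -12.5),           # chin
    (-43.3, 32.7, -26.0),          # left eye outer corner
    (43.3, 32.7, -26.0),           # right eye outer corner
    (-28.9, -28.9, -24.1),         # left mouth corner
    (28.9, -28.9, -24.1),          # right mouth corner
], dtype=np.float64)

def face_orientation(image_points, frame_w, frame_h):
    """image_points: 6x2 array of key points ordered like MODEL_POINTS."""
    camera = np.array([[frame_w, 0, frame_w / 2],
                       [0, frame_w, frame_h / 2],
                       [0, 0, 1]], dtype=np.float64)
    ok, rvec, _ = cv2.solvePnP(MODEL_POINTS,
                               np.asarray(image_points, dtype=np.float64),
                               camera, np.zeros(4))
    if not ok:
        return None
    rot, _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix
    yaw = np.degrees(np.arctan2(-rot[2, 0], np.hypot(rot[0, 0], rot[1, 0])))
    return float(yaw)              # deviation of the face from straight ahead
```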
  • the predetermined distraction action includes any one or more of the following: smoking action, drinking action, eating action, calling action, and entertaining action.
  • when the state detection unit performs driver predetermined distraction-action detection based on the video stream, it is configured to: perform, on at least one image in the video stream, target object detection corresponding to the predetermined distraction action to obtain a detection frame of the target object, and determine, according to the detection frame of the target object, whether the predetermined distraction action occurs.
  • the state detection unit is further configured to:
  • if a predetermined distraction action occurs, obtain, according to the determination result of whether the predetermined distraction action occurs within a period of time, a parameter value of an index used to characterize the degree of distraction;
  • the result of the driver's predetermined distracted motion detection is determined according to a parameter value of an index used to characterize the degree of distraction.
  • the parameter value of the index of the degree of distraction includes any one or more of the following: the number of predetermined distraction actions, the duration of the predetermined distraction action, and the frequency of the predetermined distraction action.
  • the vehicle-mounted intelligent system further includes:
  • the prompting unit is configured to prompt the detected distracted motion if the result of the driver's predetermined distracted motion detection is that a predetermined distracted motion is detected.
  • the vehicle-mounted intelligent system further includes:
  • the control unit is configured to perform a control operation corresponding to a result of the driver state detection.
  • the control unit is configured to: if the determined result of the driver state detection satisfies a predetermined driving-mode switching condition, switch the driving mode to the automatic driving mode.
  • the vehicle-mounted intelligent system further includes:
  • the result sending unit is configured to send at least a part of the result of the driver state detection to the cloud server.
  • At least part of the results include: abnormal driving state information determined according to driver state detection.
  • the vehicle-mounted intelligent system further includes: a video storage unit, configured to store the image or video segment in the video stream corresponding to the abnormal driving state information, and/or send the image or video segment corresponding to the abnormal driving state information to the cloud server.
  • the vehicle-mounted intelligent system further includes:
  • the second data downloading unit is configured to send a data set download request to the mobile terminal device when the vehicle and the mobile terminal device are in a communication connection state; receive and store the data set sent by the mobile terminal device.
  • the data set is acquired by the mobile terminal device from the cloud server and sent to the vehicle when the data set download request is received.
  • the operation unit 73 is further configured to refuse to execute the received operation instruction if the feature matching result indicates that the feature matching is unsuccessful.
  • the operation unit 73 is further configured to issue prompt registration information, receive, according to the prompt registration information, a driver registration request that includes the driver's registered face image, and establish a data set according to the registered face image.
  • the result obtaining unit 72 is configured to upload a face portion of at least one image in the video stream to the cloud server when the vehicle-end device is in a communication connection state with the cloud server, and receive the feature matching result sent by the cloud server.
  • FIG. 8 is a flowchart of a driving management method according to another embodiment of the present application.
  • the execution subject of the driving management method in this embodiment may be a cloud server.
  • the execution subject may be an electronic device or other device with similar functions.
  • the method in this embodiment includes:
  • the face image to be identified is collected by a vehicle, and a face image is obtained from an image in the captured video through face detection.
  • the process of obtaining the face image from the images in the video may include: face detection, face quality screening, and living body recognition. These steps ensure that the obtained face image to be recognized is a good-quality face image of a real driver in the vehicle, which guarantees the effect of subsequent feature matching.
  • the operation 810 may be performed by a processor calling a corresponding instruction stored in a memory, or may be performed by an image receiving unit 91 executed by a processor.
  • a pre-stored face image of at least one registered driver is stored in the data set; optionally, the cloud server may directly obtain a feature matching result from the vehicle. At this time, the feature matching process is implemented on the vehicle side.
  • a feature matching result between the face image and at least one pre-stored face image in the data set is obtained from the vehicle.
  • the operation 820 may be executed by the processor calling a corresponding instruction stored in the memory, or may be executed by the matching result obtaining unit 92 executed by the processor.
  • the operation 830 may be performed by a processor calling a corresponding instruction stored in the memory, or may be performed by an instruction sending unit 93 executed by the processor.
  • this reduces the dependence of driver recognition on the network: feature matching can be achieved without a network, which further improves the security of the vehicle.
  • the driving management method further includes:
  • the data set is usually stored in a cloud server.
  • face matching needs to be implemented on the vehicle side; so that faces can be matched even without a network, the data set can be downloaded from the cloud server while a network is available and saved on the vehicle side. Then, even when there is no network and no communication with the cloud server is possible, face matching can still be performed on the vehicle side, and the vehicle side can conveniently manage the data set.
  • the driving management method further includes:
  • receiving a driver registration request sent by the vehicle or a mobile terminal device, the driver registration request including the driver's registered face image; and establishing a data set according to the registered face image.
  • to identify whether a driver is registered, the registered face image corresponding to the registered driver must first be stored; a data set is established from the registered face images, and the registered face images of multiple drivers are saved by the cloud server, ensuring data security.
  • operation 820 may include:
  • Feature matching is performed on the face image and at least one pre-stored face image in the data set to obtain a feature matching result.
  • feature matching is implemented in a cloud server.
  • the vehicle uploads the face portion of at least one image in the video stream to the cloud server, and the cloud server performs feature matching between that face portion and the pre-stored face images in the data set to obtain the feature matching result.
  • the vehicle obtains the feature matching result from the cloud server, which reduces the amount of data transmission between the vehicle and the cloud server, and reduces network overhead.
  • the driving management method further includes:
  • At least part of the results include: abnormal driving state information determined according to driver state detection.
  • sending part or all of the results of driver state detection to the cloud server allows abnormal driving state information to be backed up. Since a normal driving state does not need to be recorded, this embodiment sends only abnormal driving state information to the cloud server: when the obtained driver state detection results include both normal and abnormal driving state information, part of the results is transmitted, that is, only the abnormal driving state information is sent to the cloud server; when all the results of driver state detection are abnormal driving state information, all the abnormal driving state information is transmitted to the cloud server.
  • the driving management method further includes: performing a control operation corresponding to a result of the driver state detection.
  • if the determined result of the driver state detection satisfies a predetermined prompt/alarm condition, for example the preset condition corresponding to the prompt driving state (the driving state is poor) is satisfied or the driving state level is the prompt driving state,
  • outputting prompt/alarm information corresponding to the prompt/alarm condition, for example by sound (e.g., voice or ringing), light (e.g., a lamp turning on or flashing), or vibration; and/or,
  • if the result of the driver state detection satisfies a predetermined driving-mode switching condition, for example the preset condition corresponding to the warning driving state (the driving state is very poor) is satisfied or the driving state level is the warning distracted driving level (also referred to as the severe distracted driving level), switching the driving mode to the automatic driving mode to achieve safe driving and avoid road traffic accidents; at the same time, the driver may also be reminded by sound, light, or vibration, so as to prompt the driver to return attention to driving or encourage the driver to rest; and/or,
  • if the result satisfies a predetermined information-sending condition, sending predetermined information (such as alarm information, reminder information, or dialing a number) to a predetermined contact,
  • for example: an alarm number, the nearest contact number, or a preset emergency contact number.
  • a communication connection (such as a video call, voice call, or telephone call) may also be established with the predetermined contact directly through the in-vehicle device, to protect the driver's personal and/or property safety (a dispatch sketch follows below).
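  • A hypothetical dispatch of these control operations, where `vehicle.alert`, `vehicle.switch_driving_mode`, and `vehicle.contact` stand in for real vehicle interfaces that the application does not name:

```python
# Hypothetical control dispatch keyed on the determined driver state level.

def apply_control_operation(state_level, vehicle):
    if state_level == "prompt_driving_state":
        # prompt/alarm predetermined condition satisfied
        vehicle.alert(sound=True, light=True, vibration=True)
    elif state_level == "warning_driving_state":
        # predetermined driving-mode switching condition satisfied
        vehicle.switch_driving_mode("automatic")
        vehicle.alert(sound=True, light=True, vibration=True)
        # predetermined information-sending condition, e.g. emergency contact
        vehicle.contact("emergency_contact", message="driver needs assistance")
```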
  • the driving management method further includes:
  • an image or video segment corresponding to the abnormal driving state information may be uploaded to the cloud server for backup; when the information is needed, it may be downloaded from the cloud server for viewing on the vehicle side, or downloaded from the cloud server to other clients for viewing.
  • the driving management method further includes:
  • based on the abnormal driving state information, at least one of the following operations can be performed: data statistics, vehicle management, and driver management.
  • the cloud server can receive abnormal driving status information of multiple vehicles, and can implement data statistics based on big data, management of vehicles and drivers, and better services for vehicles and drivers.
  • performing data statistics based on abnormal driving state information includes:
  • the received image or video segment corresponding to the abnormal driving state information is counted, and the image or video segment is classified according to different abnormal driving states to determine the statistical situation of each abnormal driving state.
  • through classification and statistics of the different abnormal driving states, the abnormal driving states that drivers frequently encounter can be obtained from big data; this can provide vehicle developers with more reference data so that vehicles can be given settings or devices that respond more appropriately to abnormal driving states, providing the driver with a more comfortable driving environment.
  • vehicle management based on abnormal driving state information includes:
  • the received image or video segment corresponding to the abnormal driving state information is counted, and the image or video segment is classified according to different vehicles to determine the abnormal driving statistics of each vehicle.
  • the abnormal driving state information of all drivers corresponding to a vehicle can be processed; for example, when a problem occurs with a certain vehicle, liability determination can be achieved by viewing all the abnormal driving state information corresponding to that vehicle.
  • performing driver management based on abnormal driving state information includes:
  • the received image or video segment corresponding to the abnormal driving state information is processed based on the abnormal driving state information, so that the image or video segment is classified according to different drivers, and the abnormal driving statistics of each driver are determined.
  • each driver's driving habits and frequently occurring problems can be obtained.
  • each driver can thus be provided with personalized services; while achieving safe driving, this avoids disturbing drivers with good driving habits. For example, if the abnormal driving statistics show that a certain driver often yawns while driving, prompt information at a higher volume can be provided for that driver (an illustrative grouping follows below).
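  • Cloud-side, the three managements above are essentially three groupings of the same uploaded records. An illustrative sketch, assuming each record carries the abnormal state type, a vehicle id, and a driver id (the field names are assumptions):

```python
# Illustrative cloud-side grouping of uploaded abnormal-driving records.

from collections import Counter, defaultdict

def abnormal_driving_statistics(records):
    by_state = Counter()                 # data statistics per abnormal state
    by_vehicle = defaultdict(list)       # vehicle management view
    by_driver = defaultdict(list)        # driver management view
    for rec in records:
        by_state[rec["state"]] += 1
        by_vehicle[rec["vehicle_id"]].append(rec)
        by_driver[rec["driver_id"]].append(rec)
    return by_state, dict(by_vehicle), dict(by_driver)
```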
  • the foregoing program may be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed.
  • the foregoing storage medium includes: a ROM, a RAM, a magnetic disk, or an optical disk, and other media that can store program codes.
  • FIG. 9 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
  • the electronic device in this embodiment may be used to implement the foregoing driving management method embodiments of the present application.
  • the electronic device of this embodiment includes:
  • the image receiving unit 91 is configured to receive a face image to be identified sent by a vehicle.
  • the matching result obtaining unit 92 is configured to obtain a feature matching result between the face image and at least one pre-stored face image in the data set.
  • a pre-stored face image of at least one registered driver is stored in the data set.
  • a feature matching result between the face image and at least one pre-stored face image in the data set is obtained from the vehicle.
  • the instruction sending unit 93 is configured to send an instruction for controlling the vehicle to the vehicle if the feature matching result indicates that the feature matching is successful.
  • the electronic device further includes:
  • the first data sending unit is configured to receive a data set download request sent by a vehicle, where the data set stores a pre-stored face image of at least one registered driver, and to send the data set to the vehicle.
  • the electronic device further includes:
  • a registration request receiving unit configured to receive a driver registration request sent by a vehicle or a mobile terminal device, where the driver registration request includes a registered face image of the driver;
  • the matching result obtaining unit 92 is configured to perform feature matching on a face image and at least one pre-stored face image in a data set to obtain a feature matching result.
  • the electronic device further includes:
  • the detection result receiving unit is configured to receive at least part of the results of the driver state detection sent by the vehicle, perform an early warning prompt for an abnormal driving state, and / or send an instruction to the vehicle to perform intelligent driving control.
  • At least part of the results include: abnormal driving state information determined according to driver state detection.
  • the electronic device further includes:
  • the execution control unit is configured to execute a control operation corresponding to a result of the driver state detection.
  • an execution control unit is configured to:
  • the driving mode is switched to an automatic driving mode.
  • the electronic device further includes:
  • the video receiving unit is configured to receive an image or a video segment corresponding to the abnormal driving state information.
  • the electronic device further includes:
  • the abnormality processing unit is configured to perform at least one of the following operations based on abnormal driving state information: data statistics, vehicle management, and driver management.
  • when the abnormality processing unit performs data statistics based on the abnormal driving state information, it is configured to collect statistics on the received images or video segments corresponding to the abnormal driving state information, so that the images or video segments are classified by different abnormal driving states and the statistics of each abnormal driving state are determined.
  • when the abnormality processing unit performs vehicle management based on the abnormal driving state information, it is configured to collect statistics on the received images or video segments corresponding to the abnormal driving state information, so that the images or video segments are classified by vehicle and the abnormal driving statistics of each vehicle are determined.
  • when the abnormality processing unit performs driver management based on the abnormal driving state information, it is configured to process the received images or video segments corresponding to the abnormal driving state information, so that the images or video segments are classified by driver and the abnormal driving statistics of each driver are determined.
  • a driving management system including: a vehicle and / or a cloud server;
  • the vehicle is used to execute any one of the driving management methods in the embodiments shown in Figs. 1-6;
  • the cloud server is configured to execute any driving management method in the embodiment shown in FIG. 8.
  • the driving management system further includes: a mobile terminal device, configured to:
  • receive a driver registration request, the driver registration request including the driver's registered face image, and send the driver registration request to the cloud server.
  • FIG. 10 is a flowchart of using a driving management system according to some embodiments of the present application.
  • the registration process of the above embodiment is implemented on a mobile phone (mobile terminal device): the screened face image and driver ID information (identity information) are uploaded to the cloud server, and the cloud server stores the face image, the driver ID information, and the user permission information corresponding to the face image in the data set.
  • the vehicle client downloads the data set to the vehicle client for matching; the vehicle client obtains the driver image, the driver image is subjected to face detection, quality screening, and living body recognition in order, and the screened face image to be recognized is matched against all face images in the data set.
  • matching is based on face features, which can be extracted by a neural network; the authority information corresponding to the face image to be recognized is determined based on the comparison result, and the vehicle action is controlled based on the authority information.
  • FIG. 11 is a flowchart of using a driving management system according to another embodiment of the present application.
  • the registration process of the foregoing embodiment is implemented on a mobile phone (mobile terminal device): the screened face image and driver ID information (identity information) are uploaded to the cloud server, and the cloud server stores the face image, the driver ID information, and the user permission information corresponding to the face image in the data set.
  • when permission matching is required, the face image to be recognized uploaded by the vehicle client is received and matched against the face images in the data set; matching is based on face features, which can be extracted by a neural network.
  • the authority information corresponding to the face image to be identified is determined, and vehicle actions are controlled based on the authority information.
  • the vehicle client obtains the driver image, and then the driver image is subjected to face detection, quality screening, and living body recognition in order to obtain the face image to be identified.
  • an electronic device including: a memory for storing executable instructions;
  • a processor for communicating with the memory to execute executable instructions to complete the driving management method of any one of the above embodiments.
  • FIG. 12 is a schematic structural diagram of an application example of an electronic device according to some embodiments of the present application.
  • the electronic device includes one or more processors, a communication unit, and the like.
  • the one or more processors are, for example, one or more central processing units (CPUs) 1201 and/or one or more acceleration units 1213; the acceleration unit may include, but is not limited to, a GPU, an FPGA, or another type of special-purpose processor.
  • the processor may perform various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) 1202 or executable instructions loaded from a storage portion 1208 into a random access memory (RAM) 1203.
  • the communication unit 1212 may include, but is not limited to, a network card.
  • the network card may include, but is not limited to, an IB (Infiniband) network card.
  • the processor may communicate with the read-only memory 1202 and / or the random access memory 1203 to execute executable instructions.
  • the processor is connected to the communication unit 1212 through a bus 1204 and communicates with other target devices via the communication unit 1212, thereby completing operations corresponding to any method provided in the embodiments of the present application, for example: controlling a camera component provided on a vehicle to collect a video stream of the driver of the vehicle; obtaining a feature matching result between a face portion of at least one image in the video stream and at least one pre-stored face image in a data set; and, if the feature matching result indicates that feature matching succeeded, controlling the vehicle to execute the operation instruction received by the vehicle.
  • the RAM 1203 can store various programs and data required for the operation of the device.
  • the CPU 1201, the ROM 1202, and the RAM 1203 are connected to each other through a bus 1204.
  • ROM 1202 is an optional module; the RAM 1203 stores executable instructions, or executable instructions are written into the ROM 1202 at run time, and the executable instructions cause the central processing unit 1201 to perform the operations corresponding to any of the foregoing methods of the present application.
  • An input / output (I / O) interface 1205 is also connected to the bus 1204.
  • the communication unit 1212 may be provided in an integrated manner, or may be provided with multiple sub-modules (for example, multiple IB network cards) and connected on a bus link.
  • the following components are connected to the I/O interface 1205: an input portion 1206 including a keyboard, a mouse, and the like; an output portion 1207 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 1208 including a hard disk and the like; and a communication portion 1209 including a network interface card such as a LAN card or a modem.
  • the communication section 1209 performs communication processing via a network such as the Internet.
  • a drive 1210 is also connected to the I/O interface 1205 as needed.
  • a removable medium 1211 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is installed on the drive 1210 as needed, so that a computer program read therefrom is installed into the storage section 1208 as needed.
  • FIG. 12 is only an optional implementation manner.
  • the number and types of components in FIG. 12 may be selected, deleted, added or replaced according to actual needs.
  • different functional components may be provided separately or in an integrated manner; for example, the acceleration unit 1213 and the CPU 1201 may be provided separately, or the acceleration unit 1213 may be integrated on the CPU 1201; the communication portion may be provided separately, or may be integrated on the CPU 1201 or on the acceleration unit 1213; and so on.
  • the process described above with reference to the flowchart may be implemented as a computer software program.
  • embodiments of the present application include a computer program product, which includes a computer program tangibly embodied on a machine-readable medium; the computer program includes program code for performing the method shown in the flowchart, and the program code may include instructions corresponding to the steps of the driving management method provided by any embodiment of the present application.
  • the computer program may be downloaded and installed from a network through a communication section, and / or installed from a removable medium.
  • the computer program is executed by the CPU 1201, the functions defined in the method of the present application are executed.
  • a computer storage medium for storing a computer-readable instruction, and when the instruction is executed, the operation of the driving management method of any one of the foregoing embodiments is performed.
  • the methods, apparatuses, systems, and devices of the present application may be implemented in many ways, for example by software, hardware, firmware, or any combination of software, hardware, and firmware.
  • the above order of the steps of the method is for illustration only, and the steps of the method of the present application are not limited to the order described above, unless otherwise specifically stated.
  • the present application can also be implemented as programs recorded in a recording medium, and these programs include machine-readable instructions for implementing the method according to the present application.
  • the present application also covers a recording medium storing a program for executing the method according to the present application.


Abstract

A driving management method and system, an in-vehicle intelligent system, an electronic device, and a medium, the method including: controlling a camera component provided on a vehicle to collect a video stream of the driver of the vehicle (110); obtaining a feature matching result between a face portion of at least one image in the video stream and at least one pre-stored face image in a data set (120), where the data set stores a pre-stored face image of at least one registered driver; and, if the feature matching result indicates that feature matching succeeded, controlling the vehicle to execute the operation instruction received by the vehicle (130). The method reduces the dependence of driver recognition on the network, allows feature matching without a network, and further improves the security of the vehicle.

Description

Driving management method and system, in-vehicle intelligent system, electronic device, and medium
This application claims priority to the Chinese patent application with application number CN 201810565711.1, entitled "Driving management method and system, in-vehicle intelligent system, electronic device, and medium", filed with the Chinese Patent Office on June 4, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to artificial intelligence technology, and in particular to a driving management method and system, an in-vehicle intelligent system, an electronic device, and a medium.
Background
An intelligent vehicle is a comprehensive system integrating environment perception, planning and decision-making, and multi-level assisted driving; it brings together computer, modern sensing, information fusion, communication, artificial intelligence, and automatic control technologies, and is a typical high-tech complex. Current research on intelligent vehicles is mainly devoted to improving the safety and comfort of automobiles and providing an excellent human-vehicle interface. In recent years, intelligent vehicles have become a research focus in vehicle engineering worldwide and a new growth engine for the automotive industry, and many developed countries have incorporated them into their key intelligent transportation systems.
Summary
Embodiments of the present application provide a driving management method and system, an in-vehicle intelligent system, an electronic device, and a medium.
According to one aspect of the embodiments of the present application, a driving management method is provided, including:
controlling a camera component provided on a vehicle to collect a video stream of the driver of the vehicle;
obtaining a feature matching result between a face portion of at least one image in the video stream and at least one pre-stored face image in a data set, where the data set stores a pre-stored face image of at least one registered driver;
if the feature matching result indicates that feature matching succeeded, controlling the vehicle to execute the operation instruction received by the vehicle.
Optionally, the method further includes:
sending a data set download request to the cloud server when the vehicle is in a communication connection state with the cloud server;
receiving and storing the data set sent by the cloud server.
Optionally, the method further includes:
if the feature matching result indicates that feature matching succeeded, obtaining the identity information of the driver according to the successfully matched pre-stored face image;
sending the image and the identity information to the cloud server.
Optionally, the method further includes:
if the feature matching result indicates that feature matching succeeded, obtaining the identity information of the driver according to the successfully matched pre-stored face image;
intercepting the face portion in the image;
sending the intercepted face portion and the identity information to the cloud server.
Optionally, the method further includes: obtaining a living body detection result of the collected image;
controlling, according to the feature matching result, the vehicle to execute the operation instruction received by the vehicle includes:
controlling, according to the feature matching result and the living body detection result, the vehicle to execute the operation instruction received by the vehicle.
Optionally, the pre-stored face images in the data set are further provided with corresponding driving authority;
the method further includes: if the feature matching result indicates that feature matching succeeded, obtaining the driving authority corresponding to the successfully matched pre-stored face image;
controlling the vehicle to execute the operation instruction received by the vehicle includes: controlling the vehicle to execute the operation instruction received by the vehicle that is within the authority range.
Optionally, the method further includes:
performing driver state detection based on the video stream;
performing an early-warning prompt for an abnormal driving state and/or performing intelligent driving control according to the result of the driver state detection.
Optionally, the driver state detection includes any one or more of the following: driver fatigue state detection, driver distraction state detection, driver predetermined distraction-action detection, and driver gesture detection.
Optionally, performing driver fatigue state detection based on the video stream includes:
detecting at least a partial region of the face in at least one image of the video stream to obtain state information of the at least partial face region, the state information including any one or more of the following: eye opening/closing state information and mouth opening/closing state information;
obtaining, according to the state information of the at least partial face region over a period of time, a parameter value of an index used to characterize the driver fatigue state;
determining the result of the driver fatigue state detection according to the parameter value of the index used to characterize the driver fatigue state.
Optionally, the index used to characterize the driver fatigue state includes any one or more of the following: the degree of eye closure and the degree of yawning.
Optionally, the parameter value of the degree of eye closure includes any one or more of the following: number of eye closures, eye-closure frequency, eye-closure duration, eye-closure amplitude, number of half-closures, and half-closure frequency; and/or,
the parameter value of the degree of yawning includes any one or more of the following: yawning state, number of yawns, yawn duration, and yawn frequency.
Optionally, performing driver distraction state detection based on the video stream includes:
performing face orientation and/or gaze direction detection on the driver image in the video stream to obtain face orientation information and/or gaze direction information;
determining, according to the face orientation information and/or gaze direction information over a period of time, a parameter value of an index used to characterize the driver distraction state, the index including any one or more of the following: the degree of face-orientation deviation and the degree of gaze deviation;
determining the result of the driver distraction state detection according to the parameter value of the index used to characterize the driver distraction state.
Optionally, the parameter value of the degree of face-orientation deviation includes any one or more of the following: number of head turns, head-turn duration, and head-turn frequency; and/or,
the parameter value of the degree of gaze deviation includes any one or more of the following: gaze deviation angle, gaze deviation duration, and gaze deviation frequency.
Optionally, performing face orientation and/or gaze direction detection on the driver image in the video stream includes:
detecting face key points of the driver image in the video stream;
performing face orientation and/or gaze direction detection according to the face key points.
Optionally, performing face orientation detection according to the face key points to obtain the face orientation information includes:
obtaining feature information of the head posture according to the face key points;
determining the face orientation information according to the feature information of the head posture.
Optionally, the predetermined distraction action includes any one or more of the following: smoking action, drinking action, eating action, calling action, and entertainment action.
Optionally, performing driver predetermined distraction-action detection based on the video stream includes:
performing, on at least one image in the video stream, target object detection corresponding to the predetermined distraction action to obtain a detection frame of the target object;
determining, according to the detection frame of the target object, whether the predetermined distraction action occurs.
Optionally, the method further includes:
if a predetermined distraction action occurs, obtaining, according to the determination result of whether the predetermined distraction action occurs within a period of time, a parameter value of an index used to characterize the driver's degree of distraction;
determining the result of the driver predetermined distraction-action detection according to the parameter value of the index used to characterize the driver's degree of distraction.
Optionally, the parameter value of the index of the degree of distraction includes any one or more of the following: number of predetermined distraction actions, duration of the predetermined distraction action, and frequency of the predetermined distraction action.
Optionally, the method further includes:
if the result of the driver predetermined distraction-action detection is that a predetermined distraction action is detected, prompting the detected distraction action.
Optionally, the method further includes:
performing a control operation corresponding to the result of the driver state detection.
Optionally, performing the control operation corresponding to the result of the driver state detection includes at least one of the following:
if the determined result of the driver state detection satisfies a predetermined prompt/alarm condition, outputting prompt/alarm information corresponding to the predetermined prompt/alarm condition; and/or,
if the determined result of the driver state detection satisfies a predetermined information-sending condition, sending predetermined information to a predetermined contact or establishing a communication connection with the predetermined contact; and/or,
if the determined result of the driver state detection satisfies a predetermined driving-mode switching condition, switching the driving mode to the automatic driving mode.
Optionally, the method further includes:
sending at least part of the result of the driver state detection to the cloud server.
Optionally, the at least part of the result includes: abnormal driving state information determined according to the driver state detection.
Optionally, the method further includes:
storing the image or video segment in the video stream corresponding to the abnormal driving state information; and/or,
sending the image or video segment in the video stream corresponding to the abnormal driving state information to the cloud server.
Optionally, the method further includes:
sending a data set download request to the mobile terminal device when the vehicle is in a communication connection state with the mobile terminal device;
receiving and storing the data set sent by the mobile terminal device.
Optionally, the data set is obtained from the cloud server by the mobile terminal device upon receiving the data set download request and is sent to the vehicle.
Optionally, the method further includes:
if the feature matching result indicates that feature matching was unsuccessful, refusing to execute the received operation instruction.
Optionally, the method further includes:
issuing prompt registration information;
receiving, according to the prompt registration information, a driver registration request, the driver registration request including the driver's registered face image;
establishing a data set according to the registered face image.
Optionally, obtaining the feature matching result between the face portion of at least one image in the video stream and at least one pre-stored face image in the data set includes:
uploading the face portion of at least one image in the video stream to the cloud server when the vehicle is in a communication connection state with the cloud server, and receiving the feature matching result sent by the cloud server.
According to another aspect of the embodiments of the present application, an in-vehicle intelligent system is provided, including:
a video acquisition unit, configured to control a camera component provided on a vehicle to collect a video stream of the driver of the vehicle;
a result obtaining unit, configured to obtain a feature matching result between a face portion of at least one image in the video stream and at least one pre-stored face image in a data set, where the data set stores a pre-stored face image of at least one registered driver;
an operation unit, configured to control the vehicle to execute the operation instruction received by the vehicle if the feature matching result indicates that feature matching succeeded.
According to yet another aspect of the embodiments of the present application, a driving management method is provided, including:
receiving a face image to be recognized sent by a vehicle;
obtaining a feature matching result between the face image and at least one pre-stored face image in a data set, where the data set stores a pre-stored face image of at least one registered driver;
if the feature matching result indicates that feature matching succeeded, sending to the vehicle an instruction allowing control of the vehicle.
Optionally, the method further includes:
receiving a data set download request sent by the vehicle, where the data set stores a pre-stored face image of at least one registered driver;
sending the data set to the vehicle.
Optionally, the method further includes:
receiving a driver registration request sent by the vehicle or a mobile terminal device, the driver registration request including the driver's registered face image;
establishing a data set according to the registered face image.
Optionally, obtaining the feature matching result between the face image and at least one pre-stored face image in the data set includes:
performing feature matching between the face image and at least one pre-stored face image in the data set to obtain the feature matching result.
Optionally, obtaining the feature matching result between the face image and at least one pre-stored face image in the data set includes:
obtaining, from the vehicle, the feature matching result between the face image and at least one pre-stored face image in the data set.
Optionally, the method further includes:
receiving at least part of the result of driver state detection sent by the vehicle, performing an early-warning prompt for an abnormal driving state, and/or sending to the vehicle an instruction to perform intelligent driving control.
Optionally, the at least part of the result includes: abnormal driving state information determined according to the driver state detection.
Optionally, the method further includes: performing a control operation corresponding to the result of the driver state detection.
Optionally, performing the control operation corresponding to the result of the driver state detection includes:
if the determined result of the driver state detection satisfies a predetermined prompt/alarm condition, outputting prompt/alarm information corresponding to the predetermined prompt/alarm condition; and/or,
if the determined result of the driver state detection satisfies a predetermined information-sending condition, sending predetermined information to a predetermined contact or establishing a communication connection with the predetermined contact; and/or,
if the determined result of the driver state detection satisfies a predetermined driving-mode switching condition, switching the driving mode to the automatic driving mode.
Optionally, the method further includes:
receiving the image or video segment corresponding to the abnormal driving state information.
Optionally, the method further includes:
performing at least one of the following operations based on the abnormal driving state information:
data statistics, vehicle management, and driver management.
Optionally, performing data statistics based on the abnormal driving state information includes:
collecting statistics, based on the abnormal driving state information, on the received images or video segments corresponding to the abnormal driving state information, so that the images or video segments are classified by different abnormal driving states, and determining the statistics of each abnormal driving state.
Optionally, performing vehicle management based on the abnormal driving state information includes:
collecting statistics, based on the abnormal driving state information, on the received images or video segments corresponding to the abnormal driving state information, so that the images or video segments are classified by different vehicles, and determining the abnormal driving statistics of each vehicle.
Optionally, performing driver management based on the abnormal driving state information includes:
processing, based on the abnormal driving state information, the received images or video segments corresponding to the abnormal driving state information, so that the images or video segments are classified by different drivers, and determining the abnormal driving statistics of each driver.
According to a further aspect of the embodiments of the present application, an electronic device is provided, including:
an image receiving unit, configured to receive a face image to be recognized sent by a vehicle;
a matching result obtaining unit, configured to obtain a feature matching result between the face image and at least one pre-stored face image in a data set, where the data set stores a pre-stored face image of at least one registered driver;
an instruction sending unit, configured to send to the vehicle an instruction allowing control of the vehicle if the feature matching result indicates that feature matching succeeded.
According to a still further aspect of the embodiments of the present application, a driving management system is provided, including: a vehicle and/or a cloud server;
the vehicle is configured to execute any one of the driving management methods described above;
the cloud server is configured to execute any one of the driving management methods described above.
Optionally, the system further includes: a mobile terminal device, configured to:
receive a driver registration request, the driver registration request including the driver's registered face image;
send the driver registration request to the cloud server.
According to a still further aspect of the embodiments of the present application, an electronic device is provided, including: a memory, configured to store executable instructions;
and a processor, configured to communicate with the memory to execute the executable instructions so as to complete any one of the driving management methods described above.
According to a still further aspect of the embodiments of the present application, a computer program is provided, including computer-readable code; when the computer-readable code runs on an electronic device, a processor in the electronic device executes instructions for implementing any one of the driving management methods described above.
According to a still further aspect of the embodiments of the present application, a computer storage medium is provided, configured to store computer-readable instructions that, when executed, implement any one of the driving management methods described above.
Based on the driving management method and system, the in-vehicle intelligent system, the electronic device, and the medium provided by the above embodiments of the present application, a video stream of the driver is collected by controlling a camera component provided on the vehicle; a feature matching result between a face portion of at least one image in the video stream and at least one pre-stored face image in a data set is obtained; if the feature matching result indicates that feature matching succeeded, the vehicle is controlled to execute the operation instruction received by the vehicle. This reduces the dependence of driver recognition on the network, allows feature matching without a network, and further improves the security of the vehicle.
The technical solutions of the present application are described in further detail below with reference to the accompanying drawings and embodiments.
Brief Description of the Drawings
The accompanying drawings, which form a part of the specification, describe the embodiments of the present application and, together with the description, serve to explain the principles of the present application.
With reference to the accompanying drawings, the present application can be understood more clearly from the following detailed description, in which:
FIG. 1 is a flowchart of a driving management method according to some embodiments of the present application;
FIG. 2 is a flowchart of driver fatigue state detection based on a video stream in some embodiments of the present application;
FIG. 3 is a flowchart of driver distraction state detection based on a video stream in some embodiments of the present application;
FIG. 4 is a flowchart of driver predetermined distraction-action detection based on a video stream in some embodiments of the present application;
FIG. 5 is a flowchart of a driver state detection method according to some embodiments of the present application;
FIG. 6 is a flowchart of an application example of a driving management method according to some embodiments of the present application;
FIG. 7 is a schematic structural diagram of an in-vehicle intelligent system according to some embodiments of the present application;
FIG. 8 is a flowchart of a driving management method according to other embodiments of the present application;
FIG. 9 is a schematic structural diagram of an electronic device according to some embodiments of the present application;
FIG. 10 is a flowchart of the use of a driving management system according to some embodiments of the present application;
FIG. 11 is a flowchart of the use of a driving management system according to other embodiments of the present application;
FIG. 12 is a schematic structural diagram of an application example of an electronic device according to some embodiments of the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present application.
It should also be understood that, for ease of description, the dimensions of the parts shown in the drawings are not drawn according to actual proportional relationships.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the present application or its application or use.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
The embodiments of the present application may be applied to electronic devices such as terminal devices, computer systems, and servers, which may operate together with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments including any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, and servers may be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules may include routines, programs, object programs, components, logic, data structures, and the like, which perform specific tasks or implement specific abstract data types. The computer system/server may be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules may be located on local or remote computing system storage media including storage devices.
FIG. 1 is a flowchart of a driving management method according to some embodiments of the present application. As shown in FIG. 1, the execution subject of the driving management method of this embodiment may be a vehicle-end device; for example, the execution subject may be an in-vehicle intelligent system or another device with similar functions. The method of this embodiment includes:
110: controlling a camera component provided on the vehicle to collect a video stream of the driver of the vehicle.
Optionally, in order to collect images of the driver, the camera component is provided at a position inside the vehicle from which the driver's seat can be photographed; the position of the camera component may be fixed or not fixed. When it is not fixed, the position of the camera component may be adjusted for different drivers; when it is fixed, the lens direction of the camera component may be adjusted for different drivers.
In an optional example, operation 110 may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by a video acquisition unit 71 run by the processor.
120: obtaining a feature matching result between a face portion of at least one image in the video stream and at least one pre-stored face image in a data set.
Optionally, the data set stores a pre-stored face image of at least one registered driver; that is, the face image corresponding to a driver who has already registered is saved in the data set as a pre-stored face image.
Optionally, the face portion in the image may be obtained by face detection (for example, face detection based on a neural network); for feature matching between the face portion and the pre-stored face images in the data set, the features of the face portion and the features of the pre-stored face images may be respectively extracted by a convolutional neural network, after which feature matching is performed to identify the pre-stored face image that corresponds to the same face as the face portion, thereby recognizing the identity of the driver in the collected image.
In an optional example, operation 120 may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by a result obtaining unit 72 run by the processor.
130: if the feature matching result indicates that feature matching succeeded, controlling the vehicle to execute the operation instruction received by the vehicle.
Optionally, the feature matching result covers two cases: feature matching succeeded and feature matching was unsuccessful. When feature matching succeeds, it indicates that the driver of the vehicle is a registered driver who may control the vehicle; in this case, the vehicle is controlled to execute the received operation instruction (the operation instruction issued by the driver).
In an optional example, operation 130 may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by an operation unit 73 run by the processor.
Based on the driving management method provided by the above embodiments of the present application, a video stream of the driver is collected by controlling a camera component provided on the vehicle; a feature matching result between a face portion of at least one image in the video stream and at least one pre-stored face image in a data set is obtained; if the feature matching result indicates that feature matching succeeded, the vehicle is controlled to execute the operation instruction received by the vehicle. This reduces the dependence of driver recognition on the network, allows feature matching without a network, and further improves the security of the vehicle.
In one or more optional embodiments, the driving management method further includes:
sending a data set download request to the cloud server when the vehicle is in a communication connection state with the cloud server;
receiving and storing the data set sent by the cloud server.
Optionally, the data set is usually stored in the cloud server, while this embodiment requires face matching to be performed on the vehicle side. So that faces can be matched even without a network, the data set may be downloaded from the cloud server while a network is available and saved on the vehicle side; then, even when there is no network and no communication with the cloud server is possible, face matching can still be performed on the vehicle side, and the vehicle side can conveniently manage the data set.
In one or more optional embodiments, the driving management method further includes:
if the feature matching result indicates that feature matching succeeded, obtaining the identity information of the driver according to the successfully matched pre-stored face image;
sending the image and the identity information to the cloud server.
In this embodiment, when feature matching succeeds, the driver is a registered driver and the corresponding identity information can be obtained from the data set; sending the image and the identity information to the cloud server allows the cloud server to establish real-time tracking of the driver (for example, at what time and place a certain driver drives a certain vehicle). Since the image is obtained from the video stream, uploading the image to the cloud server in real time when a network is available enables analysis, statistics, and/or management of the driver's driving state.
In one or more optional embodiments, the driving management method may further include:
if the feature matching result indicates that feature matching succeeded, obtaining the identity information of the driver according to the successfully matched pre-stored face image;
intercepting the face portion in the image;
sending the intercepted face portion and the identity information to the cloud server.
Since the face matching process is based on the face portion of the image, in this embodiment only the face portion obtained by image segmentation may be sent when sending the image to the cloud server, which helps reduce the communication network overhead between the vehicle-end device and the cloud server. After receiving the intercepted face portion and the identity information, the cloud server may store the face portion in the data set as a new face image of the driver, either adding to or replacing the existing face image, to serve as the basis for the next face recognition.
In one or more optional embodiments, the driving management method may further include: obtaining a living body detection result of the collected image;
operation 130 may include:
controlling, according to the feature matching result and the living body detection result, the vehicle to execute the operation instruction received by the vehicle.
In this embodiment, living body detection is used to judge whether an image comes from a real (live) person; living body detection makes the driver's identity verification more accurate. This embodiment does not limit the specific manner of living body detection; for example, it may be implemented by three-dimensional depth analysis of the image, facial optical-flow analysis, Fourier spectrum analysis, analysis of anti-counterfeiting cues such as edges or reflections, comprehensive analysis of multiple video image frames in the video stream, and so on, which will not be described in detail here.
Optionally, the pre-stored face images in the data set are further provided with corresponding driving authority;
the driving management method may further include: if the feature matching result indicates that feature matching succeeded, obtaining the driving authority corresponding to the successfully matched pre-stored face image;
operation 130 may include: controlling the vehicle to execute the operation instruction received by the vehicle that is within the authority range.
In this embodiment, classified management is achieved by setting different driving authorities for different drivers, which can improve the security of the vehicle and guarantee that drivers with higher authority have greater control, improving the user experience. Different authorities may be distinguished by restricting the operation time and/or operation scope; for example, the driving authority of some drivers may only allow driving during the day or in specific periods, while that of other drivers allows driving at any time; or, the driving authority of some drivers may allow using the in-vehicle entertainment devices while driving, while that of other drivers only allows driving.
In one or more optional embodiments, the driving management method further includes:
performing driver state detection based on the video stream;
performing an early-warning prompt for an abnormal driving state and/or performing intelligent driving control according to the result of the driver state detection.
In some of these embodiments, the result of the driver state detection may be output.
In others of these embodiments, intelligent driving control may be performed on the vehicle according to the result of the driver state detection.
In still others of these embodiments, the result of the driver state detection may be output while, at the same time, intelligent driving control is performed on the vehicle according to that result.
Optionally, the result of the driver state detection may be output locally and/or remotely. Outputting the result locally means outputting it through the driver state detection apparatus or the driver monitoring system, or outputting it to the central control system in the vehicle so that the vehicle performs intelligent driving control based on the result. Outputting the result remotely means, for example, sending the result to a cloud server or management node so that the cloud server or management node collects, analyzes, and/or manages the results of driver state detection, or remotely controls the vehicle based on the result.
In an optional example, performing an early-warning prompt for an abnormal driving state and/or performing intelligent driving control according to the result of the driver state detection may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by an output module and/or an intelligent driving control module run by the processor.
In an optional example, the above operations may be executed by a processor calling a corresponding instruction stored in a memory, or may be executed by a driver state detection unit run by the processor.
In some embodiments, the driver state detection may include, but is not limited to, any one or more of the following: driver fatigue state detection, driver distraction state detection, driver predetermined distraction-action detection, and driver gesture detection. The result of the driver state detection correspondingly includes, but is not limited to, any one or more of the following: the result of driver fatigue state detection, the result of driver distraction state detection, the result of driver predetermined distraction-action detection, and the result of driver gesture detection.
In this embodiment, the predetermined distraction action may be any distraction action that may distract the driver's attention, for example: a smoking action, a drinking action, an eating action, a calling action, an entertainment action, and the like. The eating action is, for example, eating fruit, snacks, or other food; the entertainment action is, for example, any action performed with the aid of an electronic device, such as sending messages, playing games, or karaoke, where the electronic device is, for example, a mobile phone terminal, a handheld computer, or a game console.
Based on the driver state detection method provided by the above embodiments of the present application, driver state detection can be performed on the driver image and the result output, thereby realizing real-time detection of the driver's driving state, so that corresponding measures can be taken in time when the driver's driving state is poor, ensuring safe driving and avoiding road traffic accidents.
图2为本申请一些实施例中基于视频流进行驾驶员疲劳状态检测的流程图。在一个可选示例中,图2所示的实施例可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的状态检测单元执行。如图2所示,基于视频流进行驾驶员疲劳状态检测的方法,可以包括:
202,对视频流中的至少一个图像的人脸至少部分区域进行检测,得到人脸至少部分区域的状态信息。
在一个可选示例中,上述人脸至少部分区域可以包括:驾驶员人脸眼部区域、驾驶员人脸嘴部区域以及驾驶员面部整个区域等中的至少一个。其中,该人脸至少部分区域的状态信息可以包括以下任意一项或多项:眼睛睁合状态信息、嘴巴开合状态信息。
可选地,上述眼睛睁合状态信息可以用于进行驾驶员的闭眼检测,例如:检测驾驶员是否半闭眼(“半”表示非完全闭眼的状态,如瞌睡状态下的眯眼等)、是否闭眼、闭眼次数、闭眼幅度等。可选地,眼睛睁合状态信息可以为对眼睛睁开的高度进行归一化处理后的信息。可选地,上述嘴巴开合状态信息可以用于进行驾驶员的打哈欠检测,例如:检测驾驶员是否打哈欠、打哈欠次数等。可选地,嘴巴开合状态信息可以为对嘴巴张开的高度进行归一化处理后的信息。
在一个可选示例中,可以对驾驶员图像进行人脸关键点检测,直接利用所检测出的人脸关键点中的眼睛关键点进行计算,从而根据计算结果获得眼睛睁合状态信息。
在一个可选示例中,可以先利用人脸关键点中的眼睛关键点(例如:眼睛关键点在驾驶员图像中的坐标信息)对驾驶员图像中的眼睛进行定位,以获得眼睛图像,并利用该眼睛图像获得上眼睑线和下眼睑线,通过计算上眼睑线和下眼睑线之间的间隔,获得眼睛睁合状态信息。
在一个可选示例中,可以直接利用人脸关键点中的嘴巴关键点进行计算,从而根据计算结果获得嘴巴开合状态信息。
在一个可选示例中,可以先利用人脸关键点中的嘴巴关键点(例如:嘴巴关键点在驾驶员图像中的坐标信息)对驾驶员图像中的嘴巴进行定位,通过剪切等方式可以获得嘴巴图像,并利用该嘴巴图像获得上唇线和下唇线,通过计算上唇线和下唇线之间的间隔,获得嘴巴开合状态信息。
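以上基于上下眼睑线(或上下唇线)间隔计算状态信息的过程,可以用如下示意性的Python草图表示;其中以左右眼角(或嘴角)距离做归一化仅为一种示例性的归一化方式:

```python
import numpy as np

def openness_ratio(upper_pts, lower_pts, corner_left, corner_right):
    """由上/下眼睑线(或上/下唇线)关键点计算归一化的睁开/张开程度。

    以上下轮廓线之间的平均垂直间隔除以左右眼角(或嘴角)距离,
    得到与图像尺度无关的眼睛睁合/嘴巴开合状态信息(示例性计算方式)。
    """
    upper = np.asarray(upper_pts, dtype=float)
    lower = np.asarray(lower_pts, dtype=float)
    gap = np.mean(np.abs(upper[:, 1] - lower[:, 1]))   # 上下轮廓的平均垂直间隔
    width = np.linalg.norm(np.asarray(corner_right, dtype=float)
                           - np.asarray(corner_left, dtype=float))
    return gap / max(width, 1e-6)                      # 归一化处理

# 例如:比值低于某个经验阈值(如0.15,示例性假设)可记为一次闭眼/半闭眼
```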
204,根据一段时间内的人脸至少部分区域的状态信息,获取用于表征驾驶员疲劳状态的指标的参数值。
在一些可选示例中,用于表征驾驶员疲劳状态的指标例如可以包括但不限于以下任意一项或多项:闭眼程度、打哈欠程度。
在一个可选示例中,闭眼程度的参数值例如可以包括但不限于以下任意一项或多项:闭眼次数、闭眼频率、闭眼持续时长、闭眼幅度、半闭眼次数、半闭眼频率;和/或,打哈欠程度的参数值例如可以包括但不限于以下任意一项或多项:打哈欠状态、打哈欠次数、打哈欠持续时长、打哈欠频率。
206,根据用于表征驾驶员疲劳状态的指标的参数值确定驾驶员疲劳状态检测的结果。
可选地,上述驾驶员疲劳状态检测的结果可以包括:未检测到疲劳状态和疲劳驾驶状态。或者,上述驾驶员疲劳状态检测的结果也可以是疲劳驾驶程度,其中,疲劳驾驶程度可以包括:正常驾驶级别(也可以称为非疲劳驾驶级别)以及疲劳驾驶级别。其中,疲劳驾驶级别可以为一个级别,也可以被划分为多个不同的级别,例如:上述疲劳驾驶级别可以被划分为:提示疲劳驾驶级别(也可以称为轻度疲劳驾驶级别)和警告疲劳驾驶级别(也可以称为重度疲劳驾驶级别)。当然,疲劳驾驶程度可以被划分为更多级别,例如:轻度疲劳驾驶级别、中度疲劳驾驶级别以及重度疲劳驾驶级别等。本实施例不限制疲劳驾驶程度所包括的不同级别。
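结合操作204与206,下面给出一个根据一段时间内逐帧状态信息确定疲劳驾驶程度的示意性Python草图;其中PERCLOS式的闭眼帧占比统计以及各阈值取值均为示例性假设,级别名称沿用上文的划分:

```python
def fatigue_level(frames, eye_thresh=0.15, yawn_thresh=0.5,
                  perclos_prompt=0.2, perclos_warn=0.4, yawn_warn=3):
    """根据一段时间内的眼睛睁合/嘴巴开合状态信息确定疲劳驾驶程度。

    frames: [(eye_ratio, mouth_ratio), ...] 为逐帧的归一化状态信息;
    各阈值均为示例性假设值,实际可按需标定。
    """
    closed = sum(1 for e, _ in frames if e < eye_thresh)
    perclos = closed / max(len(frames), 1)   # 闭眼帧占比,近似表征闭眼程度
    yawns, yawning = 0, False
    for _, m in frames:                      # 统计打哈欠次数(张嘴超过阈值记一次)
        if m > yawn_thresh and not yawning:
            yawns, yawning = yawns + 1, True
        elif m <= yawn_thresh:
            yawning = False
    if perclos >= perclos_warn or yawns >= yawn_warn:
        return "警告疲劳驾驶级别"
    if perclos >= perclos_prompt or yawns >= 1:
        return "提示疲劳驾驶级别"
    return "正常驾驶级别"
```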
图3为本申请一些实施例中基于视频流进行驾驶员分心状态检测的流程图。在一个可选示例中,图3所示的实施例可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的状态检测单元执行。如图3所示,基于视频流进行驾驶员分心状态检测的方法,可以包括:
302,对视频流中驾驶员图像进行人脸朝向和/或视线方向检测,得到人脸朝向信息和/或视线方向信息。
可选地,上述人脸朝向信息可以用于确定驾驶员的人脸方向是否正常,例如:确定驾驶员是否侧脸朝向前方或者是否回头等。可选地,人脸朝向信息可以为驾驶员人脸正前方与驾驶员所驾驶的车辆正前方之间的夹角。可选地,上述视线方向信息可以用于确定驾驶员的视线方向是否正常,例如:确定驾驶员是否目视前方等,视线方向信息可以用于判断驾驶员的视线是否发生了偏离现象等。可选地,视线方向信息可以为驾驶员的视线与驾驶员所驾驶的车辆正前方之间的夹角。
304,根据一段时间内的人脸朝向信息和/或视线方向信息,确定用于表征驾驶员分心状态的指标的参数值。
在一些可选示例中,用于表征驾驶员分心状态的指标例如可以包括但不限于以下任意一项或多项:人脸朝向偏离程度,视线偏离程度。在一个可选示例中,人脸朝向偏离程度的参数值例如可以包括但不限于以下任意一项或多项:转头次数、转头持续时长、转头频率;和/或,视线偏离程度的参数值例如可以包括但不限于以下任意一项或多项:视线方向偏离角度、视线方向偏离时长、视线方向偏离频率。
在一个可选示例中,上述视线偏离程度例如可以包括:视线是否偏离以及视线是否严重偏离等中的至少一个;上述人脸朝向偏离程度(也可以称为转脸程度或者回头程度)例如可以包括:是否转头、是否短时间转头以及是否长时间转头中的至少一个。
在一个可选示例中,在判断出人脸朝向信息大于第一朝向、且该现象持续了N1帧(例如:持续了9帧或者10帧等)时,确定驾驶员出现了一次长时间大角度转头现象,可以记录一次长时间大角度转头,也可以记录本次转头时长;在判断出人脸朝向信息不大于第一朝向、但大于第二朝向,且该现象持续了N1帧(例如:持续了9帧或者10帧等)时,确定驾驶员出现了一次长时间小角度转头现象,可以记录一次小角度转头,也可以记录本次转头时长。
在一个可选示例中,在判断出视线方向信息和汽车正前方之间的夹角大于第一夹角、且该现象持续了N2帧(例如:持续了8帧或者9帧等)时,确定驾驶员出现了一次视线严重偏离现象,可以记录一次视线严重偏离,也可以记录本次视线严重偏离时长;在判断出该夹角不大于第一夹角、但大于第二夹角,且该现象持续了N2帧(例如:持续了8帧或者9帧等)时,确定驾驶员出现了一次视线偏离现象,可以记录一次视线偏离,也可以记录本次视线偏离时长。
在一个可选示例中,上述第一朝向、第二朝向、第一夹角、第二夹角、N1以及N2的取值可以根据实际情况设置,本实施例不限制其取值的大小。
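上述按连续帧数判定长时间大角度/小角度转头的逻辑,可用如下示意性Python草图实现(视线偏离的判定与之类似,将阈值换成第一/第二夹角与N2即可);其中各阈值取值均为示例性假设:

```python
def count_turn_events(yaw_angles, first_thresh=45.0, second_thresh=15.0, n1=9):
    """统计一段时间内的长时间大角度/小角度转头次数。

    yaw_angles: 逐帧人脸朝向与车辆正前方之间的夹角序列(度);
    first_thresh/second_thresh/n1 对应第一朝向、第二朝向与帧数N1(示例值)。
    """
    big_turns, small_turns = 0, 0
    run_big = run_small = 0
    for a in yaw_angles:
        run_big = run_big + 1 if a > first_thresh else 0
        run_small = run_small + 1 if second_thresh < a <= first_thresh else 0
        if run_big == n1:
            big_turns += 1      # 记录一次长时间大角度转头(连续N1帧后只记一次)
        if run_small == n1:
            small_turns += 1    # 记录一次长时间小角度转头
    return big_turns, small_turns
```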
306,根据用于表征驾驶员分心状态的指标的参数值确定驾驶员分心状态检测的结果。
可选地,上述驾驶员分心状态检测的结果例如可以包括:驾驶员注意力集中(驾驶员注意力未分散),驾驶员注意力分散;或者,驾驶员分心状态检测的结果可以为驾驶员注意力分散级别,例如可以包括:驾驶员注意力集中(驾驶员注意力未分散),驾驶员注意力轻度分散,驾驶员注意力中度分散,驾驶员注意力严重分散等。其中,驾驶员注意力分散级别可以通过用于表征驾驶员分心状态的指标的参数值所满足的预设条件确定。例如:若视线方向偏离角度和人脸朝向偏离角度均小于第一预设角度,为驾驶员注意力集中;若视线方向偏离角度和人脸朝向偏离角度中的任一大于或等于第一预设角度,且持续时间大于第一预设时长、且小于或等于第二预设时长,为驾驶员注意力轻度分散;若任一大于或等于第一预设角度,且持续时间大于第二预设时长、且小于或等于第三预设时长,为驾驶员注意力中度分散;若任一大于或等于第一预设角度,且持续时间大于第三预设时长,为驾驶员注意力严重分散。其中,第一预设时长小于第二预设时长,第二预设时长小于第三预设时长。
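上述由偏离角度与持续时间确定注意力分散级别的预设条件,可用如下示意性Python草图表达;其中angle_thresh、t1、t2、t3分别对应第一预设角度与第一/第二/第三预设时长,取值均为示例性假设:

```python
def distraction_level(max_angle, duration,
                      angle_thresh=10.0, t1=3.0, t2=6.0, t3=10.0):
    """根据视线/人脸朝向偏离角度及其持续时间确定注意力分散级别。

    max_angle: 视线方向偏离角度与人脸朝向偏离角度中的较大者(度);
    duration: 偏离现象的持续时间(秒);各阈值为示例性假设值。
    """
    if max_angle < angle_thresh or duration <= t1:
        return "驾驶员注意力集中"
    if duration <= t2:
        return "驾驶员注意力轻度分散"
    if duration <= t3:
        return "驾驶员注意力中度分散"
    return "驾驶员注意力严重分散"
```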
本实施例通过检测驾驶员图像的人脸朝向和/或视线方向来确定用于表征驾驶员分心状态的指标的参数值,并据此确定驾驶员分心状态检测的结果,可以判断驾驶员是否集中注意力驾驶;通过对驾驶员分心状态的指标进行量化,将驾驶专注程度量化为视线偏离程度和转头程度的指标中的至少一个,有利于及时、客观地衡量驾驶员的专注驾驶状态。
在一些实施例中,操作302对视频流中驾驶员图像进行人脸朝向和/或视线方向检测,可以包括:
检测视频流中驾驶员图像的人脸关键点;
根据人脸关键点进行人脸朝向和/或视线方向检测。
由于人脸关键点中通常会包含有头部姿态特征信息,在一些可选示例中,根据人脸关键点进行人脸朝向检测,得到人脸朝向信息,包括:根据人脸关键点获取头部姿态的特征信息;根据头部姿态的特征信息确定人脸朝向(也称为头部姿态)信息,此处的人脸朝向信息例如可以表现出人脸转动的方向以及角度,这里的转动的方向可以为向左转动、向右转动、向下转动和/或者向上转动等。
在一个可选示例中,可以通过人脸朝向判断驾驶员是否集中注意力驾驶。人脸朝向(头部姿态)可以表示为(yaw,pitch),其中,yaw表示头部在归一化球坐标(摄像头所在的相机坐标系)中的水平偏转角度(偏航角),pitch表示垂直偏转角度(俯仰角)。当水平偏转角和/或垂直偏转角大于一个预设角度阈值、且持续时间大于一个预设时间阈值时,可以确定驾驶员分心状态检测的结果为注意力不集中。
在一个可选示例中,可以利用相应的神经网络来获得至少一个驾驶员图像的人脸朝向信息。例如:可以将上述检测到的人脸关键点输入第一神经网络,经第一神经网络基于接收到的人脸关键点提取头部姿态的特征信息并输入第二神经网络;由第二神经网络基于该头部姿态的特征信息进行头部姿态估计,获得人脸朝向信息。
在采用现有的、发展较为成熟且实时性较好的神经网络来提取头部姿态的特征信息并估测人脸朝向的情况下,针对摄像头摄取到的视频,可以准确、及时地检测出视频中的至少一个图像帧(即至少一帧驾驶员图像)所对应的人脸朝向信息,从而有利于提高确定驾驶员注意力程度的准确性。
在一些可选示例中,根据人脸关键点进行视线方向检测,得到视线方向信息,包括:根据人脸关键点中的眼睛关键点所定位的眼睛图像确定瞳孔边沿位置,并根据瞳孔边沿位置计算瞳孔中心位置;根据瞳孔中心位置与眼睛中心位置计算视线方向信息。例如:计算瞳孔中心位置与眼睛图像中的眼睛中心位置的向量,该向量即可作为视线方向信息。
在一个可选示例中,可以通过视线方向判断驾驶员是否集中注意力驾驶。视线方向可以表示为(yaw,pitch),其中,yaw表示视线在归一化球坐标(摄像头所在的相机坐标系)中的水平偏转角度(偏航角),pitch表示垂直偏转角度(俯仰角)。当水平偏转角和/或垂直偏转角大于一个预设角度阈值、且持续时间大于一个预设时间阈值时,可以确定驾驶员分心状态检测的结果为注意力不集中。
在一个可选示例中,根据人脸关键点中的眼睛关键点所定位的眼睛图像确定瞳孔边沿位置,可以通过如下方式实现:基于第三神经网络对根据人脸关键点分割出的图像中的眼睛区域图像进行瞳孔边沿位置的检测,并根据第三神经网络输出的信息获取到瞳孔边沿位置。
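上述由瞳孔边沿位置计算瞳孔中心位置、再与眼睛中心位置求向量得到视线方向信息的计算,可用如下示意性Python草图表示;其中以关键点坐标的均值近似中心位置仅为示例性假设:

```python
import numpy as np

def gaze_vector(pupil_edge_pts, eye_pts):
    """由瞳孔边沿关键点与眼睛关键点计算归一化的视线方向向量。"""
    pupil_center = np.mean(np.asarray(pupil_edge_pts, dtype=float), axis=0)  # 瞳孔中心位置
    eye_center = np.mean(np.asarray(eye_pts, dtype=float), axis=0)           # 眼睛中心位置
    v = pupil_center - eye_center      # 瞳孔中心与眼睛中心位置之间的向量即视线方向信息
    norm = np.linalg.norm(v)
    return v / norm if norm > 1e-6 else v
```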
图4为本申请一些实施例中基于视频流进行驾驶员预定分心动作检测的流程图。在一个可选示例中,图4所示的实施例可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的状态检测单元执行。如图4所示,基于视频流进行驾驶员预定分心动作检测的方法,可以包括:
402,对视频流中的至少一个图像进行预定分心动作相应的目标对象检测,得到目标对象的检测框。
404,根据目标对象的检测框,确定是否出现预定分心动作。
在本实施例中,通过检测预定分心动作相应的目标对象,并根据检测到的目标对象的检测框确定是否出现预定分心动作,从而判断驾驶员是否分心,有助于获取准确的驾驶员预定分心动作检测的结果,从而有助于提高驾驶员状态检测的结果的准确性。
例如:预定分心动作为抽烟动作时,上述操作402~404可以包括:经第四神经网络对驾驶员图像进行人脸检测,得到人脸检测框,并提取人脸检测框的特征信息;经第四神经网络根据人脸检测框的特征信息确定是否出现抽烟动作。
又例如:预定分心动作为饮食动作/喝水动作/打电话动作/娱乐动作(即,饮食动作和/或喝水动作和/或打电话动作和/或娱乐动作)时,上述操作402~404可以包括:经第五神经网络对驾驶员图像进行饮食动作/喝水动作/打电话动作/娱乐动作相应的预设目标对象检测,得到预设目标对象的检测框,其中,预设目标对象可以包括:手部、嘴部、眼部、目标物体;目标物体例如可以包括但不限于以下任意一类或多类:容器、食物、电子设备;根据预设目标对象的检测框确定预定分心动作的检测结果,该预定分心动作的检测结果可以包括以下之一:未出现饮食动作/喝水动作/打电话动作/娱乐动作,出现饮食动作,出现喝水动作,出现打电话动作,出现娱乐动作。
在一些可选示例中,预定分心动作为饮食动作/喝水动作/打电话动作/娱乐动作(即,饮食动作和/或喝水动作和/或打电话动作和/或娱乐动作)时,根据预设目标对象的检测框确定预定分心动作的检测结果,可以包括:根据是否检测到手部的检测框、嘴部的检测框、眼部的检测框和目标物体的检测框,以及根据手部的检测框与目标物体的检测框是否重叠、目标物体的类型以及目标物体的检测框与嘴部的检测框或眼部的检测框之间的距离是否满足预设条件,确定预定分心动作的检测结果。
可选地,若手部的检测框与目标物体的检测框重叠,目标物体的类型为容器或食物、且目标物体的检测框与嘴部的检测框之间重叠,确定出现饮食动作或喝水动作;和/或,若手部的检测框与目标物体的检测框重叠,目标物体的类型为电子设备,且目标物体的检测框与嘴部的检测框之间的最小距离小于第一预设距离、或者目标物体的检测框与眼部的检测框之间的最小距离小于第二预设距离,确定出现娱乐动作或打电话动作。
另外,若未同时检测到手部的检测框、嘴部的检测框和任一目标物体的检测框,且未同时检测到手部的检测框、眼部的检测框和任一目标物体的检测框,确定分心动作的检测结果为未检测到饮食动作、喝水动作、打电话动作和娱乐动作;和/或,若手部的检测框与目标物体的检测框未重叠,确定分心动作的检测结果为未检测到饮食动作、喝水动作、打电话动作和娱乐动作;和/或,若目标物体的类型为容器或食物、且目标物体的检测框与嘴部的检测框之间未重叠,和/或,目标物体的类型为电子设备、且目标物体的检测框与嘴部的检测框之间的最小距离不小于第一预设距离、或者目标物体的检测框与眼部的检测框之间的最小距离不小于第二预设距离,确定分心动作的检测结果为未检测到饮食动作、喝水动作、打电话动作和娱乐动作。
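上述基于检测框重叠与距离的判定规则,可用如下示意性Python草图表达;其中检测框采用(x1, y1, x2, y2)表示,d1、d2对应第一/第二预设距离(像素),目标物体类型的取值与各阈值均为示例性假设:

```python
def boxes_overlap(a, b):
    """判断两个检测框 (x1, y1, x2, y2) 是否重叠。"""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def box_distance(a, b):
    """两个检测框之间的最小距离(重叠时为0)。"""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx ** 2 + dy ** 2) ** 0.5

def classify_action(hand, mouth, eye, obj, obj_type, d1=30, d2=30):
    """根据预设目标对象的检测框确定预定分心动作的检测结果。

    hand/mouth/eye/obj 为对应检测框或None;obj_type 为目标物体类型。
    """
    if hand is None or obj is None or not boxes_overlap(hand, obj):
        return "未出现饮食动作/喝水动作/打电话动作/娱乐动作"
    if obj_type in ("container", "food") and mouth is not None \
            and boxes_overlap(obj, mouth):
        return "出现饮食动作或喝水动作"
    if obj_type == "electronic_device" and (
            (mouth is not None and box_distance(obj, mouth) < d1)
            or (eye is not None and box_distance(obj, eye) < d2)):
        return "出现娱乐动作或打电话动作"
    return "未出现饮食动作/喝水动作/打电话动作/娱乐动作"
```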
另外,在上述对驾驶员图像进行预定分心动作检测的实施例中,还可以包括:若驾驶员分心状态检测的结果为检测到预定分心动作,提示检测到的预定分心动作,例如:检测到抽烟动作时,提示检测到抽烟;检测到喝水动作时,提示检测到喝水;检测到打电话动作时,提示检测到打电话。
在一个可选示例中,上述提示检测到的预定分心动作的操作可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的提示单元执行。
另外,请再参见图4所示,在对驾驶员图像进行驾驶员预定分心动作检测的另一个实施例中,还可以选择性地包括:
406,若出现预定分心动作,根据一段时间内是否出现预定分心动作的确定结果,获取用于表征驾驶员分心程度的指标的参数值。
可选地,用于表征驾驶员分心程度的指标例如可以包括但不限于以下任意一项或多项:预定分心动作的次数、预定分心动作的持续时长、预定分心动作的频率。例如:抽烟动作的次数、持续时长、频率;喝水动作的次数、持续时长、频率;打电话动作的次数、持续时长、频率;等等。
408,根据用于表征驾驶员分心程度的指标的参数值确定驾驶员预定分心动作检测的结果。
可选地,上述驾驶员预定分心动作检测的结果可以包括:未检测到预定分心动作,检测到的预定分心动作。另外,上述驾驶员预定分心动作检测的结果也可以为分心级别,例如:分心级别可以被划分为:未分心级别(也可以称为专注驾驶级别),提示分心驾驶级别(也可以称为轻度分心驾驶级别)和警告分心驾驶级别(也可以称为重度分心驾驶级别)。当然,分心级别也可以被划分为更多级别,例如:未分心驾驶级别,轻度分心驾驶级别、中度分心驾驶级别以及重度分心驾驶级别等。当然,本申请至少一个实施例的分心级别也可以按照其他情况划分,不限制为上述级别划分情况。
在一个可选示例中,分心级别可以通过用于表征驾驶员分心程度的指标的参数值所满足的预设条件确定。例如:若未检测到预定分心动作,分心级别为未分心级别(也可以称为专注驾驶级别);若检测到预定分心动作的持续时间小于第一预设时长、且频率小于第一预设频率,分心级别为轻度分心驾驶级别;若检测到预定分心动作的持续时间大于第一预设时长,和/或频率大于第一预设频率,分心级别为重度分心驾驶级别。
在一些实施例中,驾驶员状态检测方法还可以包括:根据驾驶员分心状态检测的结果和/或驾驶员预定分心动作检测的结果,输出分心提示信息。
可选地,若驾驶员分心状态检测的结果为驾驶员注意力分散或者驾驶员注意力分散级别,和/或驾驶员预定分心动作检测的结果为检测到预定分心动作,则可以输出分心提示信息,以提醒驾驶员集中注意力驾驶。
在一个可选示例中,上述根据驾驶员分心状态检测的结果和/或驾驶员预定分心动作检测的结果,输出分心提示信息的操作可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的提示单元执行。
图5为本申请一些实施例的驾驶员状态检测方法的流程图。在一个可选示例中,图5所示的实施例可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的状态检测单元执行。如图5所示,该实施例的驾驶员状态检测方法包括:
502,基于视频流进行驾驶员疲劳状态检测、驾驶员分心状态检测和驾驶员预定分心动作检测,得到驾驶员疲劳状态检测的结果、驾驶员分心状态检测的结果和驾驶员预定分心动作检测的结果。
504,根据驾驶员疲劳状态检测的结果、驾驶员分心状态检测的结果和驾驶员预定分心动作检测的结果所满足的预设条件确定驾驶状态等级。
506,将确定的驾驶状态等级作为驾驶员状态检测的结果。
在一个可选示例中,每一个驾驶员状态等级均对应有预设条件,可以实时的判断驾驶员疲劳状态检测的结果、驾驶员分心状态检测的结果和驾驶员预定分心动作检测的结果所满足的预设条件,可以将被满足的预设条件所对应的驾驶员状态等级确定为驾驶员的驾驶员状态检测的结果。其中,驾驶员状态等级例如可以包括:正常驾驶状态(也可以称为专注驾驶级别),提示驾驶状态(驾驶状态较差),警告驾驶状态(驾驶状态非常差)。
在一个可选示例中,上述图5所示的实施例可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的输出模块执行。
例如:正常驾驶状态(也可以称为专注驾驶级别)对应的预设条件可以包括:
条件1、驾驶员疲劳状态检测的结果为:未检测到疲劳状态或者非疲劳驾驶级别;
条件2,驾驶员分心状态检测的结果为:驾驶员注意力集中;
条件3,驾驶员预定分心动作检测的结果为:未检测到预定分心动作或者未分心级别。
在上述条件1、条件2、条件3均满足的情况下,驾驶状态等级为正常驾驶状态(也可以称为专注驾驶级别)。
例如:提示驾驶状态(驾驶状态较差)对应的预设条件可以包括:
条件11、驾驶员疲劳状态检测的结果为:提示疲劳驾驶级别(也可以称为轻度疲劳驾驶级别);
条件22,驾驶员分心状态检测的结果为:驾驶员注意力轻度分散;
条件33,驾驶员预定分心动作检测的结果为:提示分心驾驶级别(也可以称为轻度分心驾驶级别)。
在上述条件11、条件22、条件33中的任一条件满足,且其他条件中的结果未达到更严重的疲劳驾驶级别、注意力分散级别、分心级别对应的预设条件的情况下,驾驶状态等级为提示驾驶状态(驾驶状态较差)。
例如:警告驾驶状态(驾驶状态非常差)对应的预设条件可以包括:
条件111、驾驶员疲劳状态检测的结果为:警告疲劳驾驶级别(也可以称为重度疲劳驾驶级别);
条件222,驾驶员分心状态检测的结果为:驾驶员注意力严重分散;
条件333,驾驶员预定分心动作检测的结果为:警告分心驾驶级别(也可以称为重度分心驾驶级别)。
在上述条件111、条件222、条件333中的任一条件满足时,驾驶状态等级为警告驾驶状态(驾驶状态非常差)。
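上述由三类检测结果所满足的预设条件确定驾驶状态等级的逻辑,可用如下示意性Python草图表示;其中级别名称沿用上文,条件组合为对上述条件1/11/111等的直接翻译:

```python
def driving_state_level(fatigue, distraction, action):
    """根据三类检测结果所满足的预设条件确定驾驶状态等级(条件划分为示例性假设)。"""
    if (fatigue == "警告疲劳驾驶级别"
            or distraction == "驾驶员注意力严重分散"
            or action == "警告分心驾驶级别"):
        return "警告驾驶状态"          # 条件111/222/333任一满足
    if (fatigue == "提示疲劳驾驶级别"
            or distraction == "驾驶员注意力轻度分散"
            or action == "提示分心驾驶级别"):
        return "提示驾驶状态"          # 条件11/22/33任一满足且未达更严重级别
    return "正常驾驶状态"              # 条件1、2、3均满足
```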
在一些实施例中,驾驶员状态检测方法还可以包括:
执行与驾驶员状态检测的结果对应的控制操作。
在一个可选示例中,执行与驾驶员状态检测的结果对应的控制操作可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的控制单元执行。
可选地,执行与驾驶员状态检测的结果对应的控制操作可以包括以下至少之一:
如果确定的驾驶员状态检测的结果满足提示/告警预定条件,例如:满足提示驾驶状态(如:驾驶状态较差)对应的预设条件或者驾驶状态等级为提示驾驶状态(如:驾驶状态较差),输出与该提示/告警预定条件相应的提示/告警信息,例如:通过声(如:语音或者响铃等)/光(如:亮灯或者灯光闪烁等)/震动等方式提示驾驶员,以便于提醒驾驶员注意,促使驾驶员将被分散的注意力回归到驾驶上或者促使驾驶员进行休息等,以实现安全驾驶,避免发生道路交通事故;和/或,
如果确定的驾驶员状态检测的结果满足预定驾驶模式切换条件,例如:满足警告驾驶状态(如:驾驶状态非常差)对应的预设条件或者驾驶状态等级为警告分心驾驶级别(也可以称为重度分心驾驶级别)时,将驾驶模式切换为自动驾驶模式,以实现安全驾驶,避免发生道路交通事故;同时,还可以通过声(如:语音或者响铃等)/光(如:亮灯或者灯光闪烁等)/震动等方式提示驾驶员,以便于提醒驾驶员,促使驾驶员将被分散的注意力回归到驾驶上或者促使驾驶员进行休息等;和/或,如果确定的驾驶员状态检测的结果满足预定信息发送条件,向预定联系方式发送预定信息或与预定联系方式建立通信连接;例如:约定驾驶员做出某个或某些动作时,表示驾驶员处于危险状态或需要求助,当检测到这些动作时,向预定联系方式(例如:报警电话、最近联系人的电话或设置的紧急联系人的电话)发送预定信息(如:报警信息、提示信息或拨通电话),还可以直接通过车载设备与预定联系方式建立通信连接(如:视频通话、语音通话或电话通话),以保障驾驶员的人身和/或财产安全。
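上述与驾驶员状态检测的结果对应的控制操作,可以抽象为如下示意性Python草图;其中notify、switch_to_auto、send_alarm为示例性的回调函数,分别代表声/光/震动提示、切换自动驾驶模式与向预定联系方式发送预定信息,并非本申请实施例限定的接口:

```python
def execute_control(state_level, notify, switch_to_auto, send_alarm):
    """执行与驾驶员状态检测的结果对应的控制操作(回调均为示例性抽象)。"""
    if state_level == "提示驾驶状态":
        notify("请注意保持专注驾驶")              # 输出提示/告警信息
    elif state_level == "警告驾驶状态":
        switch_to_auto()                          # 满足预定驾驶模式切换条件,切换为自动驾驶模式
        notify("已切换自动驾驶模式,请尽快休息")
        send_alarm("检测到严重异常驾驶状态")      # 满足预定信息发送条件,向预定联系方式发送预定信息
```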
在一个或多个可选的实施例中,驾驶员状态检测方法还可以包括:向云端服务器发送驾驶员状态检测的至少部分结果。
可选地,至少部分结果包括:根据驾驶员状态检测确定的异常驾驶状态信息。
在本实施例中,将驾驶员状态检测得到的部分结果或全部结果发送到云端服务器,可实现对异常驾驶状态信息的备份,由于正常驾驶状态无需进行记录,因此,本实施例仅将异常驾驶状态信息发送给云端服务器;当得到的驾驶员状态检测结果包括正常驾驶状态信息和异常驾驶状态信息时,传输部分结果,即仅将异常驾驶状态信息发送给云端服务器;而当驾驶员状态检测的全部结果都为异常驾驶状态信息时,传输全部的异常驾驶状态信息给云端服务器。
可选地,驾驶员状态检测方法还可以包括:在车辆端存储视频流中与异常驾驶状态信息对应的图像或视频段;和/或,
向云端服务器发送视频流中与异常驾驶状态信息对应的图像或视频段。
在本实施例中,通过在车辆端本地保存与异常驾驶状态信息对应的图像或视频段,实现证据保存,通过保存的图像或视频段,如果后续由于驾驶员异常驾驶状态出现驾驶安全或其他问题,可以通过调取保存的图像或视频段进行责任确定,如果在保存的图像或视频段中发现与出现的问题相关的异常驾驶状态,即可以确定为该驾驶员的责任;而为了防止车辆端的数据被误删或蓄意删除,可以将与异常驾驶状态信息对应的图像或视频段上传到云端服务器进行备份,在需要信息时,可以从云端服务器下载到车辆端进行查看,或从云端服务器下载到其他客户端进行查看。
在一个或多个可选的实施例中,驾驶管理方法还包括:在车辆与移动端设备处于通信连接状态时,向移动端设备发送数据集下载请求;
接收并存储移动端设备发送的数据集。
其中,数据集是由移动端设备在接收到数据集下载请求时,从云端服务器获取并发送给车辆的。
可选地,移动端设备可以为手机、PAD或者其他车辆上的终端设备等,移动端设备在接收到数据集下载请求时向云端服务器发送数据集下载请求,然后获得数据集再发送给车辆,通过移动端设备下载数据集时,可以应用移动端设备自带的网络(如:2G、3G、4G网络等),避免了车辆受网络限制不能从云端服务器下载到数据集而无法进行人脸匹配的问题。
在一个或多个可选的实施例中,驾驶管理方法还包括:如果特征匹配结果表示特征匹配不成功,拒绝执行接收到的操作指令。
在本实施例中,特征匹配不成功表示该驾驶员未经过注册,此时,为了保障已注册驾驶员的权益,车辆将拒绝执行该驾驶员的操作指令。
可选地,驾驶管理方法还包括:
发出提示注册信息;
根据提示注册信息接收驾驶员注册请求,驾驶员注册请求包括驾驶员的注册人脸图像;
根据注册人脸图像,建立数据集。
在本实施例中,通过车辆接收驾驶员发出的驾驶员注册请求,对该驾驶员的注册人脸图像进行保存,在车辆端基于该注册人脸图像建立数据集,通过数据集可实现车辆端的单独人脸匹配,无需从云端服务器下载数据集。
图6为本申请一些实施例的驾驶管理方法的一个应用示例的流程图。如图6所示,本实施例驾驶管理方法的执行主体可以为车辆端设备,例如:执行主体可以为车载智能***或其他具有类似功能的设备。注册时,为经过筛选的人脸图像和驾驶员ID信息(身份信息)分配对应的驾驶员权限信息之后存入数据集;
识别时,车辆客户端获取驾驶员图像,对驾驶员图像依次进行人脸检测、质量筛选和活体识别,将经过筛选的待识别人脸图像与数据集中所有人脸图像进行匹配。匹配基于人脸特征实现:在车辆客户端分别对待识别人脸图像和数据集中的人脸图像进行特征提取(人脸特征可通过神经网络提取获得),得到对应的人脸特征,基于人脸特征进行匹配,基于比对结果确定待识别人脸图像对应的权限信息,并基于权限信息控制车辆动作,执行相应的操作。
在一个或多个可选的实施例中,操作120可以包括:在车辆与云端服务器处于通信连接状态时,将视频流中的至少一个图像的人脸部分上传到云端服务器,并接收云端服务器发送的特征匹配结果。
在本实施例中,实现在云端服务器中进行特征匹配,在匹配前,车辆将视频流中的至少一个图像的人脸部分上传到云端服务器,云端服务器将该人脸部分与数据集中的人脸图像进行特征匹配,以获得特征匹配结果,车辆从云端服务器获取该特征匹配结果,减少了车辆与云端服务器之间的数据传输量,减小了网络开销。
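车辆端将人脸部分上传云端服务器并接收特征匹配结果的交互,可以用如下示意性Python草图表示;其中的接口地址与返回字段均为示例性假设,并非云端服务器的实际API:

```python
import requests  # 假设车辆端与云端服务器之间通过HTTP通信

def match_on_cloud(face_crop_bytes):
    """将视频流中图像的人脸部分上传到云端服务器,并接收其返回的特征匹配结果。"""
    resp = requests.post(
        "https://cloud.example.com/api/face_match",   # 示例性接口地址
        files={"face": ("face.jpg", face_crop_bytes, "image/jpeg")},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # 例如 {"matched": true, "driver_id": "..."}(字段为示例性假设)
```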
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
图7为本申请一些实施例的车载智能***的结构示意图。该实施例的车载智能***可用于实现本申请上述各驾驶管理方法实施例。如图7所示,该实施例的车载智能***包括:
视频采集单元71,用于控制设置在车辆上的摄像组件采集车辆驾驶员的视频流。
结果获取单元72,用于获取视频流中的至少一个图像的人脸部分与数据集中至少一个预存人脸图像的特征匹配结果。
可选地,数据集中存储有至少一个已注册的驾驶员的预存人脸图像。
操作单元73,用于如果特征匹配结果表示特征匹配成功,控制车辆执行车辆接收到的操作指令。
基于本申请上述实施例提供的车载智能***,通过控制设置在车辆上的摄像组件采集车辆驾驶员的视频流;获取视频流中的至少一个图像的人脸部分与数据集中至少一个预存人脸图像的特征匹配结果;如果特征匹配结果表示特征匹配成功,控制车辆执行车辆接收到的操作指令,减少了驾驶员识别对网络的依赖,可以在无网络的情况下实现特征匹配,进一步提高了车辆的安全保障性。
在一个或多个可选的实施例中,车载智能***还包括:
第一数据下载单元,用于在车辆与云端服务器处于通信连接状态时,向云端服务器发送数据集下载请求;
数据保存单元,用于接收并存储云端服务器发送的数据集。
在一个或多个可选的实施例中,车载智能***还包括:
第一云端存储单元,用于如果特征匹配结果表示特征匹配成功,根据特征匹配成功的预存人脸图像获取车辆驾驶员的身份信息;向云端服务器发送图像和身份信息。
在一个或多个可选的实施例中,车载智能***还可以包括:
第二云端存储单元,如果特征匹配结果表示特征匹配成功,根据特征匹配成功的预存人脸图像获取车辆驾驶员的身份信息;截取图像中的人脸部分;向云端服务器发送截取的人脸部分和身份信息。
在一个或多个可选的实施例中,车载智能***还可以包括:活体检测单元,用于获取所采集的图像的活体检测结果;
操作单元73,用于根据特征匹配结果和活体检测结果,控制车辆执行车辆接收到的操作指令。
可选地,数据集中的预存人脸图像还对应设置有驾驶权限;
本实施例的***还可以包括:
权限获取单元,用于如果特征匹配结果表示特征匹配成功,获取与特征匹配成功的预存人脸图像对应的驾驶权限;
操作单元73,还用于控制车辆执行车辆接收到的在权限范围内的操作指令。
在一个或多个可选的实施例中,车载智能***还包括:
状态检测单元,用于基于视频流进行驾驶员状态检测;
输出单元,用于根据驾驶员状态检测的结果,进行异常驾驶状态的预警提示;和/或,
智能驾驶控制单元,用于根据驾驶员状态检测的结果,进行智能驾驶控制。
在其中一些实施例中,可以输出驾驶员的驾驶员状态检测的结果。
在其中另一些实施例中,可以根据驾驶员状态检测的结果,对车辆进行智能驾驶控制。
在其中又一些实施例中,可以输出驾驶员状态检测的结果,同时根据驾驶员状态检测的结果,对车辆进行智能驾驶控制。
可选地,驾驶员状态检测包括以下任意一项或多项:驾驶员疲劳状态检测,驾驶员分心状态检测,驾驶员预定分心动作检测,驾驶员手势检测。
可选地,状态检测单元基于视频流进行驾驶员疲劳状态检测时,用于:
对视频流中的至少一个图像的人脸至少部分区域进行检测,得到人脸至少部分区域的状态信息,人脸至少部分区域的状态信息包括以下任意一项或多项:眼睛睁合状态信息、嘴巴开合状态信息;
根据一段时间内的人脸至少部分区域的状态信息,获取用于表征驾驶员疲劳状态的指标的参数值;
根据用于表征驾驶员疲劳状态的指标的参数值确定驾驶员疲劳状态检测的结果。
可选地,用于表征驾驶员疲劳状态的指标包括以下任意一项或多项:闭眼程度、打哈欠程度。
可选地,闭眼程度的参数值包括以下任意一项或多项:闭眼次数、闭眼频率、闭眼持续时长、闭眼幅度、半闭眼次数、半闭眼频率;和/或,
打哈欠程度的参数值包括以下任意一项或多项:打哈欠状态、打哈欠次数、打哈欠持续时长、打哈欠频率。
在一个或多个可选的实施例中,状态检测单元基于视频流进行驾驶员分心状态检测时,用于:
对视频流中驾驶员图像进行人脸朝向和/或视线方向检测,得到人脸朝向信息和/或视线方向信息;
根据一段时间内的人脸朝向信息和/或视线方向信息,确定用于表征驾驶员分心状态的指标的参数值;用于表征驾驶员分心状态的指标包括以下任意一项或多项:人脸朝向偏离程度,视线偏离程度;
根据用于表征驾驶员分心状态的指标的参数值确定驾驶员分心状态检测的结果。
可选地,人脸朝向偏离程度的参数值包括以下任意一项或多项:转头次数、转头持续时长、转头频率;和/或,
视线偏离程度的参数值包括以下任意一项或多项:视线方向偏离角度、视线方向偏离时长、视线方向偏离频率。
可选地,状态检测单元对视频流中驾驶员图像进行人脸朝向和/或视线方向检测时,用于:
检测视频流中驾驶员图像的人脸关键点;
根据人脸关键点进行人脸朝向和/或视线方向检测。
可选地,状态检测单元根据人脸关键点进行人脸朝向检测时,用于:
根据人脸关键点获取头部姿态的特征信息;
根据头部姿态的特征信息确定人脸朝向信息。
可选地,预定分心动作包括以下任意一项或多项:抽烟动作,喝水动作,饮食动作,打电话动作,娱乐动作。
在一个或多个可选的实施例中,状态检测单元基于视频流进行驾驶员预定分心动作检测时,用于:
对视频流中的至少一个图像进行预定分心动作相应的目标对象检测,得到目标对象的检测框;
根据目标对象的检测框,确定是否出现预定分心动作。
可选地,状态检测单元,还用于:
若出现预定分心动作,根据一段时间内是否出现预定分心动作的确定结果,获取用于表征分心程度的指标的参数值;
根据用于表征分心程度的指标的参数值确定驾驶员预定分心动作检测的结果。
可选地,分心程度的指标的参数值包括以下任意一项或多项:预定分心动作的次数、预定分心动作的持续时长、预定分心动作的频率。
可选地,车载智能***还包括:
提示单元,用于若驾驶员预定分心动作检测的结果为检测到预定分心动作,提示检测到的分心动作。
可选地,车载智能***还包括:
控制单元,用于执行与驾驶员状态检测的结果对应的控制操作。
可选地,控制单元,用于:
如果确定的驾驶员状态检测的结果满足提示/告警预定条件,输出与提示/告警预定条件相应的提示/告警信息;和/或,
如果确定的驾驶员状态检测的结果满足预定信息发送条件,向预定联系人发送预定信息或与预定联系方式建立通信连接;和/或,
如果确定的驾驶员状态检测的结果满足预定驾驶模式切换条件,将驾驶模式切换为自动驾驶模式。
在一个或多个可选的实施例中,车载智能***还包括:
结果发送单元,用于向云端服务器发送驾驶员状态检测的至少部分结果。
可选地,至少部分结果包括:根据驾驶员状态检测确定的异常驾驶状态信息。
可选地,车载智能***还包括:视频存储单元,用于:
存储视频流中与异常驾驶状态信息对应的图像或视频段;和/或,
向云端服务器发送视频流中与异常驾驶状态信息对应的图像或视频段。
在一个或多个可选的实施例中,车载智能***还包括:
第二数据下载单元,用于在车辆与移动端设备处于通信连接状态时,向移动端设备发送数据集下载请求;接收并存储移动端设备发送的数据集。
可选地,数据集是由移动端设备在接收到数据集下载请求时,从云端服务器获取并发送给车辆的。
在一个或多个可选的实施例中,操作单元73,还用于如果特征匹配结果表示特征匹配不成功,拒绝执行接收到的操作指令。
可选地,操作单元73,还用于发出提示注册信息;
根据提示注册信息接收驾驶员注册请求,驾驶员注册请求包括驾驶员的注册人脸图像;
根据注册人脸图像,建立数据集。
在一个或多个可选的实施例中,结果获取单元72,用于在车辆端设备与云端服务器处于通信连接状态时,将视频流中的至少一个图像的人脸部分上传到云端服务器,并接收云端服务器发送的特征匹配结果。
本申请实施例提供的车载智能***任一实施例的工作过程以及设置方式均可以参照本申请上述相应方法实施例的具体描述,限于篇幅,在此不再赘述。
图8为本申请另一些实施例的驾驶管理方法的流程图。如图8所示,本实施例驾驶管理方法的执行主体可以为云端服务器,例如:执行主体可以为电子设备或其他具有类似功能的设备,该实施例的方法包括:
810,接收车辆发送的待识别的人脸图像。
可选地,待识别的人脸图像通过车辆进行采集,经过人脸检测从采集到的视频中的图像获得人脸图像,基于视频中的图像获得人脸图像的过程可以包括:人脸检测、人脸质量筛选和活体识别,通过这些过程可以保证获得的待识别的人脸图像是车辆中的真实驾驶员的质量较好的人脸图像,保证了后续特征匹配的效果。
在一个可选示例中,该操作810可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的图像接收单元91执行。
820,获得人脸图像与数据集中至少一个预存人脸图像的特征匹配结果。
可选地,数据集中存储有至少一个已注册的驾驶员的预存人脸图像;可选地,云端服务器可以从车辆直接获取到特征匹配结果,此时,特征匹配的过程在车辆端实现。
可选地,从车辆获取人脸图像与数据集中至少一个预存人脸图像的特征匹配结果。
在一个可选示例中,该操作820可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的匹配结果获得单元92执行。
830,如果特征匹配结果表示特征匹配成功,向车辆发送允许控制车辆的指令。
在一个可选示例中,该操作830可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的指令发送单元93执行。
基于本申请上述实施例提供的驾驶管理方法,通过在车辆端实现人脸特征匹配,减少了驾驶员识别对网络的依赖,可以在无网络的情况下实现特征匹配,进一步提高了车辆的安全保障性。
可选地,驾驶管理方法还包括:
接收车辆发送的数据集下载请求,数据集中存储有至少一已注册的驾驶员的预存人脸图像;
向车辆发送数据集。
可选地,通常数据集保存在云端服务器中,本实施例需要实现在车辆端进行人脸匹配,为了可以在无网络的情况下也能对人脸进行匹配,可以在有网络的情况下,从云端服务器下载数据集,并将数据集保存在车辆端,此时,即使没有网络,无法与云端服务器通信,也可以在车辆端实现人脸匹配,并且方便车辆端对数据集的管理。
在一个或多个可选的实施例中,驾驶管理方法还包括:
接收车辆或移动端设备发送的驾驶员注册请求,驾驶员注册请求包括驾驶员的注册人脸图像;
根据注册人脸图像,建立数据集。
为了识别驾驶员是否注册,首先需要存储注册的驾驶员对应的注册人脸图像,在本实施例中,在云端服务器,为已注册的注册人脸图像建立数据集,在数据集中保存已经注册的多个驾驶员的注册人脸图像,通过云端服务器保存,保证了数据的安全性。
在一个或多个可选的实施例中,操作820可以包括:
对人脸图像与数据集中至少一个预存人脸图像进行特征匹配,得到特征匹配结果。
在本实施例中,实现在云端服务器中进行特征匹配,在匹配前,车辆将视频流中的至少一个图像的人脸部分上传到云端服务器,云端服务器将该人脸部分与数据集中的人脸图像进行特征匹配,以获得特征匹配结果,车辆从云端服务器获取该特征匹配结果,减少了车辆与云端服务器之间的数据传输量,减小了网络开销。
在一个或多个可选的实施例中,驾驶管理方法还包括:
接收车辆发送的驾驶员状态检测的至少部分结果,进行异常驾驶状态的预警提示和/或向车辆发送进行智能驾驶控制的指令。
可选地,至少部分结果包括:根据驾驶员状态检测确定的异常驾驶状态信息。
将驾驶员状态检测得到的部分结果或全部结果发送到云端服务器,可以实现对异常驾驶状态信息的备份,由于正常驾驶状态无需进行记录,因此,本实施例仅将异常驾驶状态信息发送给云端服务器;当得到的驾驶员状态检测结果包括正常驾驶状态信息和异常驾驶状态信息时,传输部分结果,即仅将异常驾驶状态信息发送给云端服务器;而当驾驶员状态检测的全部结果都为异常驾驶状态信息时,传输全部的异常驾驶状态信息给云端服务器。
在一个或多个可选的实施例中,驾驶管理方法还包括:执行与驾驶员状态检测的结果对应的控制操作。
可选地,如果确定的驾驶员状态检测的结果满足提示/告警预定条件,例如:满足提示驾驶状态(如:驾驶状态较差)对应的预设条件或者驾驶状态等级为提示驾驶状态(如:驾驶状态较差),输出与该提示/告警预定条件相应的提示/告警信息,例如:通过声(如:语音或者响铃等)/光(如:亮灯或者灯光闪烁等)/震动等方式提示驾驶员,以便于提醒驾驶员注意,促使驾驶员将被分散的注意力回归到驾驶上或者促使驾驶员进行休息等,以实现安全驾驶,避免发生道路交通事故;和/或,如果确定的驾驶员状态检测的结果满足预定驾驶模式切换条件,例如:满足警告驾驶状态(如:驾驶状态非常差)对应的预设条件或者驾驶状态等级为警告分心驾驶级别(也可以称为重度分心驾驶级别)时,将驾驶模式切换为自动驾驶模式,以实现安全驾驶,避免发生道路交通事故;同时,还可以通过声(如:语音或者响铃等)/光(如:亮灯或者灯光闪烁等)/震动等方式提示驾驶员,以便于提醒驾驶员,促使驾驶员将被分散的注意力回归到驾驶上或者促使驾驶员进行休息等;和/或,
如果确定的驾驶员状态检测的结果满足信息发送预定条件,向预定联系方式发送预定信息或与预定联系方式建立通信连接;例如:约定驾驶员做出某个或某些动作时,表示驾驶员处于危险状态或需要求助,当检测到这些动作时,向预定联系方式(例如:报警电话、最近联系人的电话或设置的紧急联系人的电话)发送预定信息(如:报警信息、提示信息或拨通电话),还可以直接通过车载设备与预定联系方式建立通信连接(如:视频通话、语音通话或电话通话),以保障驾驶员的人身和/或财产安全。
可选地,驾驶管理方法还包括:
接收与异常驾驶状态信息对应的图像或视频段。
在本实施例中,为了防止车辆端的数据被误删或蓄意删除,可以将与异常驾驶状态信息对应的图像或视频段上传到云端服务器进行备份,在需要信息时,可以从云端服务器下载到车辆端进行查看,或从云端服务器下载到其他客户端进行查看。
可选地,驾驶管理方法还包括:
基于异常驾驶状态信息可以进行以下至少一种操作:
数据统计、车辆管理、驾驶员管理。
云端服务器可以接收多个车辆的异常驾驶状态信息,可以实现基于大数据的数据统计、对车辆及驾驶员的管理,以实现更好的为车辆和驾驶员服务。
可选地,基于异常驾驶状态信息进行数据统计,包括:
基于异常驾驶状态信息对接收的与异常驾驶状态信息对应的图像或视频段进行统计,使图像或视频段按不同异常驾驶状态进行分类,确定每种异常驾驶状态的统计情况。
对每种不同异常驾驶状态进行分类统计,可以得到基于大数据的驾驶员经常出现的异常驾驶状态,可以为车辆开发者提供更多的参考数据,以便在车辆中提供更适合应对异常驾驶状态的设置或装置,为驾驶员提供更舒适的驾驶环境。
可选地,基于异常驾驶状态信息进行车辆管理,包括:
基于异常驾驶状态信息对接收的与异常驾驶状态信息对应的图像或视频段进行统计,使图像或视频段按不同车辆进行分类,确定每个车辆的异常驾驶统计情况。
通过基于车辆对异常驾驶状态信息进行统计,可以对车辆对应的所有驾驶员的异常驾驶状态信息进行处理,例如:当某一车辆出现问题,通过查看该车辆对应的所有异常驾驶状态信息即可实现责任确定。
可选地,基于异常驾驶状态信息进行驾驶员管理,包括:
基于异常驾驶状态信息对接收的与异常驾驶状态信息对应的图像或视频段进行处理,使图像或视频段按不同驾驶员进行分类,确定每个驾驶员的异常驾驶统计情况。
通过基于驾驶员对异常驾驶状态信息进行统计,可获得每个驾驶员的驾驶习惯及经常出现的问题,可以为每个驾驶员提供个性化服务,在达到安全驾驶的目的的同时,不会对驾驶习惯良好的驾驶员造成干扰;例如:经过对异常驾驶状态信息进行统计,确定某个驾驶员经常在开车时打哈欠,针对该驾驶员可提供更高音量的提示信息。
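上述按异常驾驶状态、车辆、驾驶员三个维度对接收的图像或视频段进行分类统计的处理,可用如下示意性Python草图表示;其中记录的字段名(state、vehicle_id、driver_id、clip)均为示例性假设:

```python
from collections import defaultdict

def summarize_abnormal_records(records):
    """对接收的异常驾驶状态记录按异常类型/车辆/驾驶员分类统计。

    records: [{"state": "疲劳", "vehicle_id": "...", "driver_id": "...",
               "clip": "对应的图像或视频段路径"}, ...]
    """
    by_state = defaultdict(list)
    by_vehicle = defaultdict(list)
    by_driver = defaultdict(list)
    for r in records:
        by_state[r["state"]].append(r["clip"])        # 数据统计:按异常驾驶状态分类
        by_vehicle[r["vehicle_id"]].append(r["clip"])  # 车辆管理:按车辆分类
        by_driver[r["driver_id"]].append(r["clip"])    # 驾驶员管理:按驾驶员分类
    return by_state, by_vehicle, by_driver
```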
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
图9为本申请一些实施例的电子设备的结构示意图。该实施例的电子设备可用于实现本申请上述各驾驶管理方法实施例。如图9所示,该实施例的电子设备包括:
图像接收单元91,用于接收车辆发送的待识别的人脸图像。
匹配结果获得单元92,用于获得人脸图像与数据集中至少一个预存人脸图像的特征匹配结果。
可选地,数据集中存储有至少一个已注册的驾驶员的预存人脸图像。
可选地,从车辆获取人脸图像与数据集中至少一个预存人脸图像的特征匹配结果。
指令发送单元93,用于如果特征匹配结果表示特征匹配成功,向车辆发送允许控制车辆的指令。
基于本申请上述实施例提供的电子设备,通过在车辆端实现人脸特征匹配,减少了驾驶员识别对网络的依赖,可以在无网络的情况下实现特征匹配,进一步提高了车辆的安全保障性。
可选地,电子设备还包括:
第一数据发送单元,用于接收车辆发送的数据集下载请求,数据集中存储有至少一已注册的驾驶员的预存人脸图像;向车辆发送所述数据集。
在一个或多个可选的实施例中,电子设备还包括:
注册请求接收单元,用于接收车辆或移动端设备发送的驾驶员注册请求,驾驶员注册请求包括驾驶员的注册人脸图像;
根据注册人脸图像,建立数据集。
在一个或多个可选的实施例中,匹配结果获得单元92,用于对人脸图像与数据集中至少一个预存人脸图像进行特征匹配,得到特征匹配结果。
在一个或多个可选的实施例中,电子设备还包括:
检测结果接收单元,用于接收车辆发送的驾驶员状态检测的至少部分结果,进行异常驾驶状态的预警提示和/或向车辆发送进行智能驾驶控制的指令。
可选地,至少部分结果包括:根据驾驶员状态检测确定的异常驾驶状态信息。
在一个或多个可选的实施例中,电子设备还包括:
执行控制单元,用于执行与驾驶员状态检测的结果对应的控制操作。
可选地,执行控制单元,用于:
如果确定的驾驶员状态检测的结果满足提示/告警预定条件,输出与提示/告警预定条件相应的提示/告警信息;和/或,
如果确定的驾驶员状态检测的结果满足预定信息发送条件,向预定联系方式发送预定信息或与预定联系方式建立通信连接;和/或,
如果确定的驾驶员状态检测的结果满足预定驾驶模式切换条件,将驾驶模式切换为自动驾驶模式。
可选地,电子设备还包括:
视频接收单元,用于接收与异常驾驶状态信息对应的图像或视频段。
可选地,电子设备还包括:
异常处理单元,用于基于异常驾驶状态信息进行以下至少一种操作:数据统计、车辆管理、驾驶员管理。
可选地,异常处理单元基于异常驾驶状态信息进行数据统计时,用于基于异常驾驶状态信息对接收的与异常驾驶状态信息对应的图像或视频段进行统计,使图像或视频段按不同异常驾驶状态进行分类,确定每种异常驾驶状态的统计情况。
可选地,异常处理单元基于异常驾驶状态信息进行车辆管理时,用于基于异常驾驶状态信息对接收的与异常驾驶状态信息对应的图像或视频段进行统计,使图像或视频段按不同车辆进行分类,确定每个车辆的异常驾驶统计情况。
可选地,异常处理单元基于异常驾驶状态信息进行驾驶员管理时,用于基于异常驾驶状态信息对接收的与异常驾驶状态信息对应的图像或视频段进行处理,使图像或视频段按不同驾驶员进行分类,确定每个驾驶员的异常驾驶统计情况。
本申请实施例提供的电子设备任一实施例的工作过程以及设置方式均可以参照本申请上述相应方法实施例的具体描述,限于篇幅,在此不再赘述。
根据本申请实施例的另一个方面,提供一种驾驶管理***,包括:车辆和/或云端服务器;
车辆用于执行如图1-6所示实施例中任一驾驶管理方法;
云端服务器用于执行如图8所示实施例中任一驾驶管理方法。
可选地,驾驶管理***还包括:移动端设备,用于:
接收驾驶员注册请求,驾驶员注册请求包括驾驶员的注册人脸图像;
将驾驶员注册请求发送给云端服务器。
图10为本申请一些实施例的驾驶管理***的使用流程图。如图10所示,上述实施例的注册过程在手机端(移动端设备)实现,并将经过筛选的人脸图像和驾驶员的ID信息(身份信息)上传到云端服务器中,云端服务器将人脸图像和驾驶员ID信息及该人脸图像对应的用户权限信息存入数据集;在需要进行权限匹配时,由车辆客户端下载数据集并在车辆端进行匹配。车辆客户端获取驾驶员图像,对驾驶员图像依次进行人脸检测、质量筛选和活体识别,将经过筛选的待识别人脸图像与数据集中所有人脸图像进行匹配,匹配基于人脸特征实现,人脸特征可以通过神经网络提取获得,基于比对结果确定待识别人脸图像对应的权限信息,基于权限信息控制车辆动作。
图11为本申请另一些实施例的驾驶管理***的使用流程图。如图11所示,上述实施例的注册过程在手机端(移动端设备)实现,并将经过筛选的人脸图像和驾驶员ID信息(身份信息)上传到云端服务器中,云端服务器将人脸图像和驾驶员ID信息及该人脸图像对应的用户权限信息存入数据集;在需要进行权限匹配时,云端服务器接收车辆客户端上传的待识别人脸图像,将待识别人脸图像与数据集中所有人脸图像进行匹配,匹配基于人脸特征实现,人脸特征可以通过神经网络提取获得,基于比对结果确定待识别人脸图像对应的权限信息,基于权限信息控制车辆动作。车辆客户端获取驾驶员图像,对驾驶员图像依次进行人脸检测、质量筛选和活体识别,得到待识别人脸图像。
根据本申请实施例的另一个方面,提供一种电子设备,包括:存储器,用于存储可执行指令;
以及处理器,用于与存储器通信以执行可执行指令从而完成上述任一实施例的驾驶管理方法。
图12为本申请一些实施例的电子设备的一个应用示例的结构示意图。下面参考图12,其示出了适于用来实现本申请实施例的终端设备或服务器的电子设备的结构示意图。如图12所示,该电子设备包括一个或多个处理器、通信部等,所述一个或多个处理器例如:一个或多个中央处理单元(CPU)1201,和/或一个或多个加速单元1213等,加速单元可包括但不限于GPU、FPGA、其他类型的专用处理器等,处理器可以根据存储在只读存储器(ROM)1202中的可执行指令或者从存储部分1208加载到随机访问存储器(RAM)1203中的可执行指令而执行各种适当的动作和处理。通信部1212可包括但不限于网卡,所述网卡可包括但不限于IB(Infiniband)网卡,处理器可与只读存储器1202和/或随机访问存储器1203通信以执行可执行指令,通过总线1204与通信部1212相连、并经通信部1212与其他目标设备通信,从而完成本申请实施例提供的任一方法对应的操作,例如,控制设置在车辆上的摄像组件采集车辆驾驶员的视频流;获取视频流中的至少一个图像的人脸部分与数据集中至少一个预存人脸图像的特征匹配结果;如果特征匹配结果表示特征匹配成功,控制车辆执行车辆接收到的操作指令。
此外,在RAM1203中,还可存储有装置操作所需的各种程序和数据。CPU1201、ROM1202以及RAM1203通过总线1204彼此相连。在有RAM1203的情况下,ROM1202为可选模块。RAM1203存储可执行指令,或在运行时向ROM1202中写入可执行指令,可执行指令使中央处理单元1201执行本申请上述任一方法对应的操作。输入/输出(I/O)接口1205也连接至总线1204。通信部1212可以集成设置,也可以设置为具有多个子模块(例如多个IB网卡),并在总线链接上。
以下部件连接至I/O接口1205:包括键盘、鼠标等的输入部分1206;包括诸如阴极射线管(CRT)、液晶显示器(LCD)等以及扬声器等的输出部分1207;包括硬盘等的存储部分1208;以及包括诸如LAN卡、调制解调器等的网络接口卡的通信部分1209。通信部分1209经由诸如因特网的网络执行通信处理。驱动器1210也根据需要连接至I/O接口1205。可拆卸介质1211,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器1210上,以便于从其上读出的计算机程序根据需要被安装入存储部分1208。
需要说明的是,如图12所示的架构仅为一种可选实现方式,在具体实践过程中,可根据实际需要对上述图12的部件数量和类型进行选择、删减、增加或替换;在不同功能部件设置上,也可采用分离设置或集成设置等实现方式,例如加速单元1213和CPU1201可分离设置或者可将加速单元1213集成在CPU1201上,通信部可分离设置,也可集成设置在CPU1201或加速单元1213上,等等。这些可替换的实施方式均落入本申请公开的保护范围。
特别地,根据本申请的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本申请的实施例包括一种计算机程序产品,其包括有形地包含在机器可读介质上的计算机程序,计算机程序包含用于执行流程图所示的方法的程序代码,程序代码可包括对应执行本申请任一实施例提供的驾驶管理方法步骤对应的指令。在这样的实施例中,该计算机程序可以通过通信部分从网络上被下载和安装,和/或从可拆卸介质被安装。在该计算机程序被CPU1201执行时,执行本申请的方法中限定的上述功能。
根据本申请实施例的另一个方面,提供一种计算机存储介质,用于存储计算机可读取的指令,所述指令被执行时执行上述实施例任意一项驾驶管理方法的操作。
本说明书中各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其它实施例的不同之处,各个实施例之间相同或相似的部分相互参见即可。对于***实施例而言,由于其与方法实施例基本对应,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
可能以许多方式来实现本申请的方法和装置、***、设备。例如,可通过软件、硬件、固件或者软件、硬件、固件的任何组合来实现本申请的方法和装置、***、设备。用于所述方法的步骤的上述顺序仅是为了进行说明,本申请的方法的步骤不限于以上描述的顺序,除非以其它方式特别说明。此外,在一些实施例中,还可将本申请实施为记录在记录介质中的程序,这些程序包括用于实现根据本申请的方法的机器可读指令。因而,本申请还覆盖存储用于执行根据本申请的方法的程序的记录介质。
本申请的描述是为了示例和描述起见而给出的,而并不是无遗漏的或者将本申请限于所公开的形式。很多修改和变化对于本领域的普通技术人员而言是显然的。选择和描述实施例是为了更好说明本申请的原理和实际应用,并且使本领域的普通技术人员能够理解本申请从而设计适于特定用途的带有各种修改的各种实施例。

Claims (93)

  1. 一种驾驶管理方法,其特征在于,包括:
    控制设置在车辆上的摄像组件采集车辆驾驶员的视频流;
    获取所述视频流中的至少一个图像的人脸部分与数据集中至少一个预存人脸图像的特征匹配结果;其中,所述数据集中存储有至少一个已注册的驾驶员的预存人脸图像;
    如果所述特征匹配结果表示特征匹配成功,控制车辆执行所述车辆接收到的操作指令。
  2. 根据权利要求1所述的方法,其特征在于,还包括:
    在所述车辆与云端服务器处于通信连接状态时,向所述云端服务器发送数据集下载请求;
    接收并存储所述云端服务器发送的数据集。
  3. 根据权利要求1或2所述的方法,其特征在于,还包括:
    如果所述特征匹配结果表示特征匹配成功,根据特征匹配成功的预存人脸图像获取所述车辆驾驶员的身份信息;
    向所述云端服务器发送所述图像和所述身份信息。
  4. 根据权利要求1或2所述的方法,其特征在于,还包括:
    如果所述特征匹配结果表示特征匹配成功,根据特征匹配成功的预存人脸图像获取所述车辆驾驶员的身份信息;
    截取所述图像中的人脸部分;
    向所述云端服务器发送所述截取的人脸部分和所述身份信息。
  5. 根据权利要求1-4任一所述的方法,其特征在于,所述方法还包括:获取所采集的图像的活体检测结果;
    根据所述特征匹配结果,控制车辆执行所述车辆接收到的操作指令,包括:
    根据所述特征匹配结果和所述活体检测结果,控制车辆执行所述车辆接收到的操作指令。
  6. 根据权利要求5所述的方法,其特征在于,所述数据集中的预存人脸图像还对应设置有驾驶权限;
    所述方法还包括:如果所述特征匹配结果表示特征匹配成功,获取与特征匹配成功的预存人脸图像对应的驾驶权限;
    所述控制车辆执行所述车辆接收到的操作指令,包括:控制车辆执行所述车辆接收到的在所述权限范围内的操作指令。
  7. 根据权利要求1-6任一所述的方法,其特征在于,还包括:
    基于所述视频流进行驾驶员状态检测;
    根据驾驶员状态检测的结果,进行异常驾驶状态的预警提示和/或进行智能驾驶控制。
  8. 根据权利要求7所述的方法,其特征在于,所述驾驶员状态检测包括以下任意一项或多项:驾驶员疲劳状态检测,驾驶员分心状态检测,驾驶员预定分心动作检测,驾驶员手势检测。
  9. 根据权利要求8所述的方法,其特征在于,基于所述视频流进行驾驶员疲劳状态检测,包括:
    对所述视频流中的至少一个图像的人脸至少部分区域进行检测,得到人脸至少部分区域的状态信息,所述人脸至少部分区域的状态信息包括以下任意一项或多项:眼睛睁合状态信息、嘴巴开合状态信息;
    根据一段时间内的所述人脸至少部分区域的状态信息,获取用于表征驾驶员疲劳状态的指标的参数值;
    根据用于表征驾驶员疲劳状态的指标的参数值确定驾驶员疲劳状态检测的结果。
  10. 根据权利要求9所述的方法,其特征在于,所述用于表征驾驶员疲劳状态的指标包括以下任意一项或多项:闭眼程度、打哈欠程度。
  11. 根据权利要求10所述的方法,其特征在于,所述闭眼程度的参数值包括以下任意一项或多项:闭眼次数、闭眼频率、闭眼持续时长、闭眼幅度、半闭眼次数、半闭眼频率;和/或,
    所述打哈欠程度的参数值包括以下任意一项或多项:打哈欠状态、打哈欠次数、打哈欠持续时长、打哈欠频率。
  12. 根据权利要求8-11任一所述的方法,其特征在于,基于所述视频流进行驾驶员分心状态检测,包括:
    对所述视频流中驾驶员图像进行人脸朝向和/或视线方向检测,得到人脸朝向信息和/或视线方向信息;
    根据一段时间内的所述人脸朝向信息和/或视线方向信息,确定用于表征驾驶员分心状态的指标的参数值;所述用于表征驾驶员分心状态的指标包括以下任意一项或多项:人脸朝向偏离程度,视线偏离程度;
    根据用于表征所述驾驶员分心状态的指标的参数值确定驾驶员分心状态检测的结果。
  13. 根据权利要求12所述的方法,其特征在于,所述人脸朝向偏离程度的参数值包括以下任意一项或多项:转头次数、转头持续时长、转头频率;和/或,
    所述视线偏离程度的参数值包括以下任意一项或多项:视线方向偏离角度、视线方向偏离时长、视线方向偏离频率。
  14. 根据权利要求12或13所述的方法,其特征在于,所述对所述视频流中驾驶员图像进行人脸朝向和/或视线方向检测,包括:
    检测所述视频流中驾驶员图像的人脸关键点;
    根据所述人脸关键点进行人脸朝向和/或视线方向检测。
  15. 根据权利要求14所述的方法,其特征在于,根据所述人脸关键点进行人脸朝向检测,得到人脸朝向信息,包括:
    根据所述人脸关键点获取头部姿态的特征信息;
    根据所述头部姿态的特征信息确定人脸朝向信息。
  16. 根据权利要求8-15任一所述的方法,其特征在于,所述预定分心动作包括以下任意一项或多项:抽烟动作,喝水动作,饮食动作,打电话动作,娱乐动作。
  17. 根据权利要求16所述的方法,其特征在于,基于所述视频流进行驾驶员预定分心动作检测,包括:
    对所述视频流中的至少一个图像进行所述预定分心动作相应的目标对象检测,得到目标对象的检测框;
    根据所述目标对象的检测框,确定是否出现所述预定分心动作。
  18. 根据权利要求17所述的方法,其特征在于,还包括:
    若出现预定分心动作,根据一段时间内是否出现所述预定分心动作的确定结果,获取用于表征驾驶员分心程度的指标的参数值;
    根据所述用于表征驾驶员分心程度的指标的参数值确定驾驶员预定分心动作检测的结果。
  19. 根据权利要求18所述的方法,其特征在于,所述驾驶员分心程度的指标的参数值包括以下任意一项或多项:预定分心动作的次数、预定分心动作的持续时长、预定分心动作的频率。
  20. 根据权利要求16-19任一所述的方法,其特征在于,还包括:
    若驾驶员预定分心动作检测的结果为检测到预定分心动作,提示检测到的分心动作。
  21. 根据权利要求7-20任一所述的方法,其特征在于,还包括:
    执行与所述驾驶员状态检测的结果对应的控制操作。
  22. 根据权利要求21所述的方法,其特征在于,所述执行与所述驾驶员状态检测的结果对应的控制操作,包括以下至少之一:
    如果确定的所述驾驶员状态检测的结果满足提示/告警预定条件,输出与所述提示/告警预定条件相应的提示/告警信息;和/或,
    如果确定的所述驾驶员状态检测的结果满足预定信息发送条件,向预定联系方式发送预定信息或与预定联系方式建立通信连接;和/或,
    如果确定的所述驾驶员状态检测的结果满足预定驾驶模式切换条件,将驾驶模式切换为自动驾驶模式。
  23. 根据权利要求7-22任一所述的方法,其特征在于,还包括:
    向所述云端服务器发送所述驾驶员状态检测的至少部分结果。
  24. 根据权利要求23所述的方法,其特征在于,所述至少部分结果包括:根据驾驶员状态检测确定的异常驾驶状态信息。
  25. 根据权利要求24所述的方法,其特征在于,还包括:
    存储所述视频流中与所述异常驾驶状态信息对应的图像或视频段;和/或,
    向所述云端服务器发送所述视频流中与所述异常驾驶状态信息对应的图像或视频段。
  26. 根据权利要求1所述的方法,其特征在于,还包括:
    在所述车辆与移动端设备处于通信连接状态时,向所述移动端设备发送数据集下载请求;
    接收并存储所述移动端设备发送的数据集。
  27. 根据权利要求26所述的方法,其特征在于,所述数据集是由所述移动端设备在接收到所述数据集下载请求时,从云端服务器获取并发送给所述车辆的。
  28. 根据权利要求1-27任一所述的方法,其特征在于,还包括:
    如果所述特征匹配结果表示特征匹配不成功,拒绝执行接收到的操作指令。
  29. 根据权利要求28所述的方法,其特征在于,还包括:
    发出提示注册信息;
    根据所述提示注册信息接收驾驶员注册请求,所述驾驶员注册请求包括驾驶员的注册人脸图像;
    根据所述注册人脸图像,建立数据集。
  30. 根据权利要求1-29任一所述的方法,其特征在于,所述获取所述视频流中的至少一个图像的人脸部分与数据集中至少一个预存人脸图像的特征匹配结果,包括:
    在所述车辆与云端服务器处于通信连接状态时,将所述视频流中的至少一个图像的人脸部分上传到所述云端服务器,并接收所述云端服务器发送的特征匹配结果。
  31. 一种车载智能***,其特征在于,包括:
    视频采集单元,用于控制设置在车辆上的摄像组件采集车辆驾驶员的视频流;
    结果获取单元,用于获取所述视频流中的至少一个图像的人脸部分与数据集中至少一个预存人脸图像的特征匹配结果;其中,所述数据集中存储有至少一个已注册的驾驶员的预存人脸图像;
    操作单元,用于如果所述特征匹配结果表示特征匹配成功,控制车辆执行所述车辆接收到的操作指令。
  32. 根据权利要求31所述的***,其特征在于,还包括:
    第一数据下载单元,用于在所述车辆与云端服务器处于通信连接状态时,向所述云端服务器发送数据集下载请求;
    数据保存单元,用于接收并存储所述云端服务器发送的数据集。
  33. 根据权利要求31或32所述的***,其特征在于,还包括:
    第一云端存储单元,用于如果所述特征匹配结果表示特征匹配成功,根据特征匹配成功的预存人脸图像获取所述车辆驾驶员的身份信息;向所述云端服务器发送所述图像和所述身份信息。
  34. 根据权利要求31或32所述的***,其特征在于,还包括:
    第二云端存储单元,如果所述特征匹配结果表示特征匹配成功,根据特征匹配成功的预存人脸图像获取所述车辆驾驶员的身份信息;截取所述图像中的人脸部分;向所述云端服务器发送所述截取的人脸部分和所述身份信息。
  35. 根据权利要求31-34任一所述的***,其特征在于,
    所述***还包括:活体检测单元,用于获取所采集的图像的活体检测结果;
    所述操作单元,用于根据所述特征匹配结果和所述活体检测结果,控制车辆执行所述车辆接收到的操作指令。
  36. 根据权利要求35所述的***,其特征在于,所述数据集中的预存人脸图像还对应设置有驾驶权限;
    所述***还包括:
    权限获取单元,用于如果所述特征匹配结果表示特征匹配成功,获取与特征匹配成功的预存人脸图像对应的驾驶权限;
    所述操作单元,还用于控制车辆执行所述车辆接收到的在所述权限范围内的操作指令。
  37. 根据权利要求31-36任一所述的***,其特征在于,还包括:
    状态检测单元,用于基于所述视频流进行驾驶员状态检测;
    输出单元,用于根据驾驶员状态检测的结果,进行异常驾驶状态的预警提示;和/或,
    智能驾驶控制单元,用于根据驾驶员状态检测的结果,进行智能驾驶控制。
  38. 根据权利要求37所述的***,其特征在于,所述驾驶员状态检测包括以下任意一项或多项:驾驶员疲劳状态检测,驾驶员分心状态检测,驾驶员预定分心动作检测,驾驶员手势检测。
  39. 根据权利要求38所述的***,其特征在于,所述状态检测单元基于所述视频流进行驾驶员疲劳状态检测时,用于:
    对所述视频流中的至少一个图像的人脸至少部分区域进行检测,得到人脸至少部分区域的状态信息,所述人脸至少部分区域的状态信息包括以下任意一项或多项:眼睛睁合状态信息、嘴巴开合状态信息;
    根据一段时间内的所述人脸至少部分区域的状态信息,获取用于表征驾驶员疲劳状态的指标的参数值;
    根据用于表征驾驶员疲劳状态的指标的参数值确定驾驶员疲劳状态检测的结果。
  40. 根据权利要求39所述的***,其特征在于,所述用于表征驾驶员疲劳状态的指标包括以下任意一项或多项:闭眼程度、打哈欠程度。
  41. 根据权利要求40所述的***,其特征在于,所述闭眼程度的参数值包括以下任意一项或多项:闭眼次数、闭眼频率、闭眼持续时长、闭眼幅度、半闭眼次数、半闭眼频率;和/或,
    所述打哈欠程度的参数值包括以下任意一项或多项:打哈欠状态、打哈欠次数、打哈欠持续时长、打哈欠频率。
  42. 根据权利要求38-41任一所述的***,其特征在于,所述状态检测单元基于所述视频流进行驾驶员分心状态检测时,用于:
    对所述视频流中驾驶员图像进行人脸朝向和/或视线方向检测,得到人脸朝向信息和/或视线方向信息;
    根据一段时间内的所述人脸朝向信息和/或视线方向信息,确定用于表征驾驶员分心状态的指标的参数值;所述用于表征驾驶员分心状态的指标包括以下任意一项或多项:人脸朝向偏离程度,视线偏离程度;
    根据用于表征所述驾驶员分心状态的指标的参数值确定驾驶员分心状态检测的结果。
  43. 根据权利要求42所述的***,其特征在于,所述人脸朝向偏离程度的参数值包括以下任意一项或多项:转头次数、转头持续时长、转头频率;和/或,
    所述视线偏离程度的参数值包括以下任意一项或多项:视线方向偏离角度、视线方向偏离时长、视线方向偏离频率。
  44. 根据权利要求42或43所述的***,其特征在于,所述状态检测单元对所述视频流中驾驶员图像进行人脸朝向和/或视线方向检测时,用于:
    检测所述视频流中驾驶员图像的人脸关键点;
    根据所述人脸关键点进行人脸朝向和/或视线方向检测。
  45. 根据权利要求44所述的***,其特征在于,所述状态检测单元根据所述人脸关键点进行人脸朝向检测时,用于:
    根据所述人脸关键点获取头部姿态的特征信息;
    根据所述头部姿态的特征信息确定人脸朝向信息。
  46. 根据权利要求38-45任一所述的***,其特征在于,所述预定分心动作包括以下任意一项或多项:抽烟动作,喝水动作,饮食动作,打电话动作,娱乐动作。
  47. 根据权利要求46所述的***,其特征在于,所述状态检测单元基于所述视频流进行驾驶员预定分心动作检测时,用于:
    对所述视频流中的至少一个图像进行所述预定分心动作相应的目标对象检测,得到目标对象的检测框;
    根据所述目标对象的检测框,确定是否出现所述预定分心动作。
  48. 根据权利要求47所述的***,其特征在于,所述状态检测单元,还用于:
    若出现预定分心动作,根据一段时间内是否出现所述预定分心动作的确定结果,获取用于表征驾驶员分心程度的指标的参数值;
    根据所述用于表征驾驶员分心程度的指标的参数值确定驾驶员预定分心动作检测的结果。
  49. 根据权利要求48所述的***,其特征在于,所述驾驶员分心程度的指标的参数值包括以下任意一项或多项:预定分心动作的次数、预定分心动作的持续时长、预定分心动作的频率。
  50. 根据权利要求46-49任一所述的***,其特征在于,还包括:
    提示单元,用于若驾驶员预定分心动作检测的结果为检测到预定分心动作,提示检测到的分心动作。
  51. 根据权利要求37-50任一所述的***,其特征在于,还包括:
    控制单元,用于执行与所述驾驶员状态检测的结果对应的控制操作。
  52. 根据权利要求51所述的***,其特征在于,所述控制单元,用于:
    如果确定的所述驾驶员状态检测的结果满足提示/告警预定条件,输出与所述提示/告警预定条件相应的提示/告警信息;和/或,
    如果确定的所述驾驶员状态检测的结果满足预定信息发送条件,向预定联系方式发送预定信息或与预定联系方式建立通信连接;和/或,
    如果确定的所述驾驶员状态检测的结果满足预定驾驶模式切换条件,将驾驶模式切换为自动驾驶模式。
  53. 根据权利要求37-52任一所述的***,其特征在于,还包括:
    结果发送单元,用于向所述云端服务器发送所述驾驶员状态检测的至少部分结果。
  54. 根据权利要求53所述的***,其特征在于,所述至少部分结果包括:根据驾驶员状态检测确定的异常驾驶状态信息。
  55. 根据权利要求54所述的***,其特征在于,还包括:视频存储单元,用于:
    存储所述视频流中与所述异常驾驶状态信息对应的图像或视频段;和/或,
    向所述云端服务器发送所述视频流中与所述异常驾驶状态信息对应的图像或视频段。
  56. 根据权利要求31所述的***,其特征在于,还包括:
    第二数据下载单元,用于在所述车辆与移动端设备处于通信连接状态时,向所述移动端设备发送数据集下载请求;接收并存储所述移动端设备发送的数据集。
  57. 根据权利要求56所述的***,其特征在于,所述数据集是由所述移动端设备在接收到所述数据集下载请求时,从云端服务器获取并发送给所述车辆的。
  58. 根据权利要求31-57任一所述的***,其特征在于,所述操作单元,还用于如果所述特征匹配结果表示特征匹配不成功,拒绝执行接收到的操作指令。
  59. 根据权利要求58所述的***,其特征在于,所述操作单元,还用于发出提示注册信息;
    根据所述提示注册信息接收驾驶员注册请求,所述驾驶员注册请求包括驾驶员的注册人脸图像;
    根据所述注册人脸图像,建立数据集。
  60. 根据权利要求31-59任一所述的***,其特征在于,所述结果获取单元,用于在所述车辆与云端服务器处于通信连接状态时,将所述视频流中的至少一个图像的人脸部分上传到所述云端服务器,并接收所述云端服务器发送的特征匹配结果。
  61. 一种驾驶管理方法,其特征在于,包括:
    接收车辆发送的待识别的人脸图像;
    获得所述人脸图像与数据集中至少一个预存人脸图像的特征匹配结果,其中,所述数据集中存储有至少一个已注册的驾驶员的预存人脸图像;
    如果所述特征匹配结果表示特征匹配成功,向所述车辆发送允许控制车辆的指令。
  62. 根据权利要求61所述的方法,其特征在于,还包括:
    接收车辆发送的数据集下载请求,所述数据集中存储有至少一已注册的驾驶员的预存人脸图像;
    向所述车辆发送所述数据集。
  63. 根据权利要求61或62所述的方法,其特征在于,还包括:
    接收车辆或移动端设备发送的驾驶员注册请求,所述驾驶员注册请求包括驾驶员的注册人脸图像;
    根据所述注册人脸图像,建立数据集。
  64. 根据权利要求61-63任一所述的方法,其特征在于,所述获得所述人脸图像与数据集中至少一个预存人脸图像的特征匹配结果,包括:
    对所述人脸图像与数据集中至少一个预存人脸图像进行特征匹配,得到所述特征匹配结果。
  65. 根据权利要求61-64任一所述的方法,其特征在于,所述获得所述人脸图像与数据集中至少一个预存人脸图像的特征匹配结果,包括:
    从所述车辆获取所述人脸图像与数据集中至少一个预存人脸图像的特征匹配结果。
  66. 根据权利要求61-65任一所述的方法,其特征在于,还包括:
    接收所述车辆发送的驾驶员状态检测的至少部分结果,进行异常驾驶状态的预警提示和/或向所述车辆发送进行智能驾驶控制的指令。
  67. 根据权利要求66所述的方法,其特征在于,所述至少部分结果包括:根据驾驶员状态检测确定的异常驾驶状态信息。
  68. 根据权利要求66或67所述的方法,其特征在于,还包括:执行与所述驾驶员状态检测的结果对应的控制操作。
  69. 根据权利要求68所述的方法,其特征在于,所述执行与所述驾驶员状态检测的结果对应的控制操作,包括:
    如果确定的所述驾驶员状态检测的结果满足提示/告警预定条件,输出与所述提示/告警预定条件相应的提示/告警信息;和/或,
    如果确定的所述驾驶员状态检测的结果满足预定信息发送条件,向预定联系方式发送预定信息或与预定联系方式建立通信连接;和/或,
    如果确定的所述驾驶员状态检测的结果满足预定驾驶模式切换条件,将驾驶模式切换为自动驾驶模式。
  70. 根据权利要求67-69任一所述的方法,其特征在于,还包括:
    接收与所述异常驾驶状态信息对应的图像或视频段。
  71. 根据权利要求70所述的方法,其特征在于,还包括:
    基于所述异常驾驶状态信息进行以下至少一种操作:
    数据统计、车辆管理、驾驶员管理。
  72. 根据权利要求71所述的方法,其特征在于,所述基于所述异常驾驶状态信息进行数据统计,包括:
    基于所述异常驾驶状态信息对接收的与所述异常驾驶状态信息对应的图像或视频段进行统计,使所述图像或视频段按不同异常驾驶状态进行分类,确定每种所述异常驾驶状态的统计情况。
  73. 根据权利要求71或72所述的方法,其特征在于,所述基于所述异常驾驶状态信息进行车辆管理,包括:
    基于所述异常驾驶状态信息对接收的与所述异常驾驶状态信息对应的图像或视频段进行统计,使所述图像或视频段按不同车辆进行分类,确定每个所述车辆的异常驾驶统计情况。
  74. 根据权利要求71-73任一所述的方法,其特征在于,所述基于所述异常驾驶状态信息进行驾驶员管理,包括:
    基于所述异常驾驶状态信息对接收的与所述异常驾驶状态信息对应的图像或视频段进行处理,使所述图像或视频段按不同驾驶员进行分类,确定每个所述驾驶员的异常驾驶统计情况。
  75. 一种电子设备,其特征在于,包括:
    图像接收单元,用于接收车辆发送的待识别的人脸图像;
    匹配结果获得单元,用于获得所述人脸图像与数据集中至少一个预存人脸图像的特征匹配结果,其中,所述数据集中存储有至少一个已注册的驾驶员的预存人脸图像;
    指令发送单元,用于如果所述特征匹配结果表示特征匹配成功,向所述车辆发送允许控制车辆的指令。
  76. 根据权利要求75所述的电子设备,其特征在于,还包括:
    第一数据发送单元,用于接收车辆发送的数据集下载请求,所述数据集中存储有至少一已注册的驾驶员的预存人脸图像;向所述车辆发送所述数据集。
  77. 根据权利要求75或76所述的电子设备,其特征在于,还包括:
    注册请求接收单元,用于接收车辆或移动端设备发送的驾驶员注册请求,所述驾驶员注册请求包括驾驶员的注册人脸图像;
    根据所述注册人脸图像,建立数据集。
  78. 根据权利要求75-77任一所述的电子设备,其特征在于,所述匹配结果获得单元,用于对所述人脸图像与数据集中至少一个预存人脸图像进行特征匹配,得到所述特征匹配结果。
  79. 根据权利要求75-78任一所述的电子设备,其特征在于,所述匹配结果获得单元,用于从所述车辆获取所述人脸图像与数据集中至少一个预存人脸图像的特征匹配结果。
  80. 根据权利要求75-79任一所述的电子设备,其特征在于,还包括:
    检测结果接收单元,用于接收所述车辆发送的驾驶员状态检测的至少部分结果,进行异常驾驶状态的预警提示和/或向所述车辆发送进行智能驾驶控制的指令。
  81. 根据权利要求80所述的电子设备,其特征在于,所述至少部分结果包括:根据驾驶员状态检测确定的异常驾驶状态信息。
  82. 根据权利要求80或81所述的电子设备,其特征在于,还包括:
    执行控制单元,用于执行与所述驾驶员状态检测的结果对应的控制操作。
  83. 根据权利要求82所述的电子设备,其特征在于,所述执行控制单元,用于:
    如果确定的所述驾驶员状态检测的结果满足提示/告警预定条件,输出与所述提示/告警预定条件相应的提示/告警信息;和/或,
    如果确定的所述驾驶员状态检测的结果满足预定信息发送条件,向预定联系方式发送预定信息或与预定联系方式建立通信连接;和/或,
    如果确定的所述驾驶员状态检测的结果满足预定驾驶模式切换条件,将驾驶模式切换为自动驾驶模式。
  84. 根据权利要求81-83任一所述的电子设备,其特征在于,还包括:
    视频接收单元,用于接收与所述异常驾驶状态信息对应的图像或视频段。
  85. 根据权利要求84所述的电子设备,其特征在于,还包括:
    异常处理单元,用于基于所述异常驾驶状态信息进行以下至少一种操作:数据统计、车辆管理、驾驶员管理。
  86. 根据权利要求85所述的电子设备,其特征在于,所述异常处理单元基于所述异常驾驶状态信息进行数据统计时,用于基于所述异常驾驶状态信息对接收的与所述异常驾驶状态信息对应的图像或视频段进行统计,使所述图像或视频段按不同异常驾驶状态进行分类,确定每种所述异常驾驶状态的统计情况。
  87. 根据权利要求85或86所述的电子设备,其特征在于,所述异常处理单元基于所述异常驾驶状态信息进行车辆管理时,用于基于所述异常驾驶状态信息对接收的与所述异常驾驶状态信息对应的图像或视频段进行统计,使所述图像或视频段按不同车辆进行分类,确定每个所述车辆的异常驾驶统计情况。
  88. 根据权利要求85-87任一所述的电子设备,其特征在于,所述异常处理单元基于所述异常驾驶状态信息进行驾驶员管理时,用于基于所述异常驾驶状态信息对接收的与所述异常驾驶状态信息对应的图像或视频段进行处理,使所述图像或视频段按不同驾驶员进行分类,确定每个所述驾驶员的异常驾驶统计情况。
  89. 一种驾驶管理***,其特征在于,包括:车辆和/或云端服务器;
    所述车辆用于执行权利要求1-30任意一项所述的驾驶管理方法;
    所述云端服务器用于执行权利要求61-74任意一项所述的驾驶管理方法。
  90. 根据权利要求89所述的***,其特征在于,还包括:移动端设备,用于:
    接收驾驶员注册请求,所述驾驶员注册请求包括驾驶员的注册人脸图像;
    将所述驾驶员注册请求发送给所述云端服务器。
  91. 一种电子设备,其特征在于,包括:存储器,用于存储可执行指令;
    以及处理器,用于与所述存储器通信以执行所述可执行指令从而完成权利要求1至30任意一项所述驾驶管理方法或权利要求61至74任意一项所述的驾驶管理方法。
  92. 一种计算机程序,包括计算机可读代码,其特征在于,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现权利要求1至30任意一项所述的驾驶管理方法或权利要求61至74任意一项所述的驾驶管理方法。
  93. 一种计算机存储介质,用于存储计算机可读取的指令,其特征在于,所述指令被执行时实现权利要求1至30任意一项所述驾驶管理方法或权利要求61至74任意一项所述的驾驶管理方法。
PCT/CN2018/105790 2018-06-04 2018-09-14 驾驶管理方法和***、车载智能***、电子设备、介质 WO2019232972A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
SG11201911404QA SG11201911404QA (en) 2018-06-04 2018-09-14 Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium
MYPI2019007079A MY197453A (en) 2018-06-04 2018-09-14 Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium
JP2019565001A JP6932208B2 (ja) 2018-06-04 2018-09-14 運転管理方法及びシステム、車載スマートシステム、電子機器並びに媒体
EP18919400.4A EP3617935A4 (en) 2018-06-04 2018-09-14 DRIVING MANAGEMENT METHOD AND SYSTEM, ON-BOARD INTELLIGENT SYSTEM, ELECTRONIC DEVICE AND MEDIUM
KR1020207012402A KR102305914B1 (ko) 2018-06-04 2018-09-14 운전 관리 방법 및 시스템, 차량 탑재 지능형 시스템, 전자 기기, 매체
US16/224,389 US10915769B2 (en) 2018-06-04 2018-12-18 Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810565711.1 2018-06-04
CN201810565711.1A CN109002757A (zh) 2018-06-04 2018-06-04 驾驶管理方法和***、车载智能***、电子设备、介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/224,389 Continuation US10915769B2 (en) 2018-06-04 2018-12-18 Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium

Publications (1)

Publication Number Publication Date
WO2019232972A1

Family

ID=64574253

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/105790 WO2019232972A1 (zh) 2018-06-04 2018-09-14 驾驶管理方法和***、车载智能***、电子设备、介质

Country Status (7)

Country Link
EP (1) EP3617935A4 (zh)
JP (1) JP6932208B2 (zh)
KR (1) KR102305914B1 (zh)
CN (1) CN109002757A (zh)
MY (1) MY197453A (zh)
SG (1) SG11201911404QA (zh)
WO (1) WO2019232972A1 (zh)


Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977771A (zh) * 2019-02-22 2019-07-05 杭州飞步科技有限公司 司机身份的验证方法、装置、设备及计算机可读存储介质
CN110119708A (zh) * 2019-05-10 2019-08-13 万雪莉 一种基于台灯的用户状态调整方法和装置
CN112069863B (zh) * 2019-06-11 2022-08-19 荣耀终端有限公司 一种面部特征的有效性判定方法及电子设备
CN112218270A (zh) * 2019-07-09 2021-01-12 奥迪股份公司 车辆用户的呼入或呼出处理***及相应的方法和介质
CN110390285A (zh) * 2019-07-16 2019-10-29 广州小鹏汽车科技有限公司 驾驶员分神检测方法、***及车辆
CN110550043B (zh) * 2019-09-05 2022-07-22 上海博泰悦臻网络技术服务有限公司 危险行为的警示方法、***、计算机存储介质及车载终端
CN110737688B (zh) * 2019-09-30 2023-04-07 上海商汤临港智能科技有限公司 驾驶数据分析方法、装置、电子设备和计算机存储介质
CN110758324A (zh) * 2019-10-23 2020-02-07 上海能塔智能科技有限公司 试驾控制方法及***、车载智能设备、车辆、存储介质
CN110780934B (zh) * 2019-10-23 2024-03-12 深圳市商汤科技有限公司 车载图像处理***的部署方法和装置
CN110816473B (zh) * 2019-11-29 2022-08-16 福智易车联网(宁波)有限公司 一种车辆控制方法、车辆控制***及存储介质
WO2021212504A1 (zh) * 2020-04-24 2021-10-28 上海商汤临港智能科技有限公司 车辆和车舱域控制器
CN111483471B (zh) * 2020-04-26 2021-11-30 湘潭牵引机车厂有限公司 车辆控制方法、装置及车载控制器
CN113696897B (zh) * 2020-05-07 2023-06-23 沃尔沃汽车公司 驾驶员分神预警方法和驾驶员分神预警***
CN111951637B (zh) * 2020-07-19 2022-05-03 西北工业大学 一种任务情景相关联的无人机飞行员视觉注意力分配模式提取方法
CN112037380B (zh) * 2020-09-03 2022-06-24 上海商汤临港智能科技有限公司 车辆控制方法及装置、电子设备、存储介质和车辆
CN112861677A (zh) * 2021-01-28 2021-05-28 上海商汤临港智能科技有限公司 轨交驾驶员的动作检测方法及装置、设备、介质及工具
CN114132329B (zh) * 2021-12-10 2024-04-12 智己汽车科技有限公司 一种驾驶员注意力保持方法及***
CN114312669B (zh) * 2022-02-15 2022-08-05 远峰科技股份有限公司 一种基于人脸识别的智能座舱显示***
CN114895983A (zh) * 2022-05-12 2022-08-12 合肥杰发科技有限公司 Dms的启动方法及相关设备
KR102510733B1 (ko) * 2022-08-10 2023-03-16 주식회사 에이모 영상에서 학습 대상 이미지 프레임을 선별하는 방법 및 장치
CN116912808B (zh) * 2023-09-14 2023-12-01 四川公路桥梁建设集团有限公司 架桥机控制方法、电子设备和计算机可读介质


Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2546415B2 (ja) * 1990-07-09 1996-10-23 トヨタ自動車株式会社 車両運転者監視装置
EP2305106A1 (en) * 2002-02-19 2011-04-06 Volvo Technology Corporation Method for monitoring and managing driver attention loads
ATE454849T1 (de) * 2002-10-15 2010-01-15 Volvo Technology Corp Verfahren für die auswertung der kopf- und augenaktivität einer person
JP2004284460A (ja) * 2003-03-20 2004-10-14 Aisin Seiki Co Ltd 車両盗難防止システム
JP4564320B2 (ja) * 2004-09-29 2010-10-20 アイシン精機株式会社 ドライバモニタシステム
JP2011192031A (ja) * 2010-03-15 2011-09-29 Denso It Laboratory Inc 制御装置及び運転安全性保護方法
CN101844548B (zh) * 2010-03-30 2012-06-27 奇瑞汽车股份有限公司 一种车辆自动控制方法和***
CN102975690A (zh) * 2011-09-02 2013-03-20 上海博泰悦臻电子设备制造有限公司 汽车锁定***及方法
JP6150258B2 (ja) * 2014-01-15 2017-06-21 みこらった株式会社 自動運転車
CN104408878B (zh) * 2014-11-05 2017-01-25 唐郁文 一种车队疲劳驾驶预警监控***及方法
CN104732251B (zh) * 2015-04-23 2017-12-22 郑州畅想高科股份有限公司 一种基于视频的机车司机驾驶状态检测方法
JP6447379B2 (ja) * 2015-06-15 2019-01-09 トヨタ自動車株式会社 認証装置、認証システムおよび認証方法
CN105469035A (zh) * 2015-11-17 2016-04-06 中国科学院重庆绿色智能技术研究院 基于双目视频分析的驾驶员不良驾驶行为检测***
JP6641916B2 (ja) * 2015-11-20 2020-02-05 オムロン株式会社 自動運転支援装置、自動運転支援システム、自動運転支援方法および自動運転支援プログラム
CN105654753A (zh) * 2016-01-08 2016-06-08 北京乐驾科技有限公司 一种智能车载安全驾驶辅助方法及***
FR3048544B1 (fr) * 2016-03-01 2021-04-02 Valeo Comfort & Driving Assistance Dispositif et methode de surveillance d'un conducteur d'un vehicule automobile
WO2017193272A1 (zh) * 2016-05-10 2017-11-16 深圳市赛亿科技开发有限公司 一种基于人脸识别的车载疲劳预警***及预警方法
JP6790483B2 (ja) * 2016-06-16 2020-11-25 日産自動車株式会社 認証方法及び認証装置
CN106335469B (zh) * 2016-09-04 2019-11-26 深圳市云智易联科技有限公司 车载认证方法、***、车载装置、移动终端及服务器
CN106338944B (zh) * 2016-09-29 2019-02-15 山东华旗新能源科技有限公司 施工升降机安全智能控制***
CN110178104A (zh) * 2016-11-07 2019-08-27 新自动公司 用于确定驾驶员分心的***和方法
CN107092881B (zh) * 2017-04-18 2018-01-09 黄海虹 一种驾驶人员更换***及方法
CN107244306A (zh) * 2017-07-27 2017-10-13 深圳小爱智能科技有限公司 一种启动汽车的装置
CN107657236A (zh) * 2017-09-29 2018-02-02 厦门知晓物联技术服务有限公司 汽车安全驾驶预警方法及车载预警***
CN107891746A (zh) * 2017-10-19 2018-04-10 李娟� 基于汽车驾驶防疲劳的***
CN207433445U (zh) * 2017-10-31 2018-06-01 安徽江淮汽车集团股份有限公司 一种车辆管理***
CN107953854A (zh) * 2017-11-10 2018-04-24 惠州市德赛西威汽车电子股份有限公司 一种基于人脸识别的智能车载辅助***及方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050110610A1 (en) * 2003-09-05 2005-05-26 Bazakos Michael E. System and method for gate access control
US20110091079A1 (en) * 2009-10-21 2011-04-21 Automotive Research & Testing Center Facial image recognition system for a driver of a vehicle
CN104169993A (zh) * 2012-03-14 2014-11-26 株式会社电装 驾驶辅助装置及驾驶辅助方法
CN107578025A (zh) * 2017-09-15 2018-01-12 赵立峰 一种驾驶员识别方法及***

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476185A (zh) * 2020-04-13 2020-07-31 罗翌源 一种驾驶者注意力监测方法、装置以及***
CN111476185B (zh) * 2020-04-13 2023-10-10 罗跃宸 一种驾驶者注意力监测方法、装置以及***
CN113744498B (zh) * 2020-05-29 2023-10-27 杭州海康汽车软件有限公司 驾驶员注意力监测的***和方法
CN113744498A (zh) * 2020-05-29 2021-12-03 杭州海康汽车软件有限公司 驾驶员注意力监测的***和方法
CN112288286A (zh) * 2020-10-30 2021-01-29 上海仙塔智能科技有限公司 安全派单方法、安全派单***及可读存储介质
CN112660157A (zh) * 2020-12-11 2021-04-16 重庆邮电大学 一种多功能无障碍车远程监控与辅助驾驶***
CN112699807A (zh) * 2020-12-31 2021-04-23 车主邦(北京)科技有限公司 一种驾驶员状态信息监控方法和装置
CN115147785A (zh) * 2021-03-29 2022-10-04 东风汽车集团股份有限公司 一种车辆识别方法、装置、电子设备和存储介质
WO2022222174A1 (zh) * 2021-04-21 2022-10-27 彭泳 基于视频图像分析的危货监管***及危货监管方法
CN113104046A (zh) * 2021-04-28 2021-07-13 中国第一汽车股份有限公司 一种基于云服务器的开门预警方法及装置
CN113191286B (zh) * 2021-05-08 2023-04-25 重庆紫光华山智安科技有限公司 图像数据质量检测调优方法、***、设备及介质
CN113191286A (zh) * 2021-05-08 2021-07-30 重庆紫光华山智安科技有限公司 图像数据质量检测调优方法、***、设备及介质
CN113285998A (zh) * 2021-05-20 2021-08-20 江西北斗应用科技有限公司 驾驶员管理终端和无人航空器监管***
CN113581209A (zh) * 2021-08-04 2021-11-02 东风柳州汽车有限公司 驾驶辅助模式切换方法、装置、设备及存储介质
CN113581209B (zh) * 2021-08-04 2023-06-20 东风柳州汽车有限公司 驾驶辅助模式切换方法、装置、设备及存储介质
CN114475623A (zh) * 2021-12-28 2022-05-13 阿波罗智联(北京)科技有限公司 车辆的控制方法、装置、电子设备及存储介质
CN114368395A (zh) * 2022-01-21 2022-04-19 华录智达科技股份有限公司 一种基于公交数字化转型的人工智能公交驾驶安全管理***
CN115214505A (zh) * 2022-06-29 2022-10-21 重庆长安汽车股份有限公司 车辆座舱音效的控制方法、装置、车辆及存储介质
CN115214505B (zh) * 2022-06-29 2024-04-26 重庆长安汽车股份有限公司 车辆座舱音效的控制方法、装置、车辆及存储介质

Also Published As

Publication number Publication date
KR102305914B1 (ko) 2021-09-28
MY197453A (en) 2023-06-19
JP2020525334A (ja) 2020-08-27
CN109002757A (zh) 2018-12-14
SG11201911404QA (en) 2020-01-30
KR20200063193A (ko) 2020-06-04
EP3617935A1 (en) 2020-03-04
JP6932208B2 (ja) 2021-09-08
EP3617935A4 (en) 2020-07-08

Similar Documents

Publication Publication Date Title
WO2019232972A1 (zh) 驾驶管理方法和***、车载智能***、电子设备、介质
US10915769B2 (en) Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium
US10970571B2 (en) Vehicle control method and system, vehicle-mounted intelligent system, electronic device, and medium
WO2019232973A1 (zh) 车辆控制方法和***、车载智能***、电子设备、介质
CN109937152B (zh) 驾驶状态监测方法和装置、驾驶员监控***、车辆
CN111079476B (zh) 驾驶状态分析方法和装置、驾驶员监控***、车辆
JP7146959B2 (ja) 運転状態検出方法及び装置、運転者監視システム並びに車両
CN106965675B (zh) 一种货车集群智能安全作业***
US11783600B2 (en) Adaptive monitoring of a vehicle using a camera
CN113901866A (zh) 一种机器视觉的疲劳驾驶预警方法
US20240051465A1 (en) Adaptive monitoring of a vehicle using a camera
P Mathai A New Proposal for Smartphone-Based Drowsiness Detection and Warning System for Automotive Drivers

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019565001

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2018919400

Country of ref document: EP

Effective date: 20191128

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18919400

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20207012402

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE