WO2020173213A1 - Method and apparatus for identifying dangerous actions of persons in a vehicle, electronic device, and storage medium - Google Patents

Method and apparatus for identifying dangerous actions of persons in a vehicle, electronic device, and storage medium

Info

Publication number
WO2020173213A1
WO2020173213A1 (PCT/CN2019/129370)
Authority
WO
WIPO (PCT)
Prior art keywords
action
vehicle
actions
dangerous
person
Prior art date
Application number
PCT/CN2019/129370
Other languages
English (en)
French (fr)
Inventor
陈彦杰
王飞
钱晨
Original Assignee
北京市商汤科技开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司 filed Critical 北京市商汤科技开发有限公司
Priority to JP2020551547A priority Critical patent/JP2021517313A/ja
Priority to KR1020207027781A priority patent/KR20200124278A/ko
Priority to SG11202009720QA priority patent/SG11202009720QA/en
Publication of WO2020173213A1 publication Critical patent/WO2020173213A1/zh
Priority to US17/034,290 priority patent/US20210009150A1/en

Classifications

    • G06V 20/59 — Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597 — Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • B60W 50/087 — Interaction between the driver and the control system where the control system corrects or modifies a request from the driver
    • B60W 50/14 — Means for informing the driver, warning the driver or prompting a driver intervention
    • G06V 10/82 — Arrangements for image or video recognition or understanding using neural networks
    • G06V 20/46 — Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V 40/168 — Feature extraction; face representation
    • G06V 40/171 — Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V 40/18 — Eye characteristics, e.g. of the iris
    • G06V 40/20 — Movements or behaviour, e.g. gesture recognition
    • G06V 40/28 — Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • B60W 2050/143 — Alarm means
    • B60W 2540/225 — Direction of gaze
    • B60W 2540/229 — Attention level, e.g. attentive to driving, reading or sleeping

Definitions

  • The present disclosure relates to computer vision technology, and in particular to a method and apparatus for identifying dangerous actions of persons in a vehicle, an electronic device, and a storage medium.
  • With the rapid development of in-vehicle intelligence, various AI technologies have been put into practice, and the market demand for driver monitoring is becoming increasingly urgent.
  • The main functional modules of driver monitoring can be roughly grouped into in-cabin face recognition, fatigue detection, and the like.
  • Through monitoring, danger signals can be discovered in time, and possible dangers can be prevented and handled in advance, improving driving safety.
  • The embodiments of the present disclosure provide a technique for identifying dangerous actions of persons in a vehicle.
  • A method for identifying dangerous actions includes:
  • obtaining at least one video stream of persons in the vehicle by using a camera device, each video stream including at least one person in the vehicle;
  • performing action recognition on the person in the vehicle based on the video stream, and, in response to the result of the action recognition belonging to a predetermined dangerous action, issuing a prompt message and/or performing an operation to control the vehicle;
  • the predetermined dangerous action includes an action of the person in the vehicle exhibiting at least one of the following: a distraction action, a state of discomfort, or irregular behavior.
  • An apparatus for identifying dangerous actions of persons in a vehicle includes:
  • a video acquisition unit configured to obtain at least one video stream of persons in the vehicle by using a camera device, each video stream including at least one person in the vehicle;
  • an action recognition unit configured to perform action recognition on the person in the vehicle based on the video stream; and
  • a danger processing unit configured to, in response to the result of the action recognition belonging to a predetermined dangerous action, issue a prompt message and/or perform an operation to control the vehicle;
  • the predetermined dangerous action includes an action of the person in the vehicle exhibiting at least one of the following: a distraction action, a state of discomfort, or irregular behavior.
  • An electronic device includes a processor, and the processor includes the apparatus for identifying dangerous actions of a person in a vehicle according to any one of the above embodiments.
  • Another electronic device includes: a memory for storing executable instructions;
  • and a processor configured to communicate with the memory to execute the executable instructions so as to complete the operations of the method for identifying dangerous actions of persons in a vehicle according to any one of the above embodiments.
  • A computer-readable storage medium stores computer-readable instructions which, when executed, perform the operations of the method for identifying dangerous actions of persons in a vehicle according to any one of the foregoing embodiments.
  • A computer program product includes computer-readable code; when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the method for identifying dangerous actions of persons in a vehicle according to any one of the foregoing embodiments.
  • At least one video stream of persons in the vehicle is obtained by using a camera device, each video stream including at least one person in the vehicle; action recognition is performed on the persons in the vehicle based on the video stream; and, in response to the result of the action recognition belonging to a predetermined dangerous action, a prompt message is issued and/or an operation is performed to control the vehicle. The predetermined dangerous actions include actions of the persons in the vehicle exhibiting at least one of the following: distraction actions, states of discomfort, or irregular behaviors. Action recognition thus determines whether persons in the vehicle have performed predetermined dangerous actions, and corresponding prompts and/or vehicle-control operations are made in response, so that the vehicle safety status is detected as early as possible and the probability of dangerous situations is reduced.
  • FIG. 1 is a schematic flowchart of a method for identifying dangerous actions of a vehicle occupant provided by an embodiment of the disclosure.
  • FIG. 2 is a schematic diagram of a part of the process in an optional example of a method for identifying dangerous actions of a person in a vehicle provided by an embodiment of the disclosure.
  • Fig. 3a is a schematic diagram of a part of the process in another optional example of the method for identifying dangerous actions of persons in a vehicle provided by an embodiment of the disclosure.
  • FIG. 3b is a schematic diagram of the target area extracted in the method for recognizing dangerous actions of persons in a vehicle according to an embodiment of the disclosure.
  • FIG. 4 is a schematic structural diagram of the device for identifying dangerous actions of persons in a vehicle provided by an embodiment of the disclosure.
  • Fig. 5 is a schematic structural diagram of an electronic device suitable for implementing the terminal device or server of the embodiment of the present disclosure.
  • the embodiments of the present disclosure can be applied to computer systems/servers, which can operate with numerous other general-purpose or special-purpose computing system environments or configurations.
  • Examples of well-known computing systems, environments, and/or configurations suitable for use with computer systems/servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing environments including any of the above systems.
  • the computer system/server may be described in the general context of computer system executable instructions (such as program modules) executed by the computer system.
  • program modules may include routines, programs, object programs, components, logic, data structures, etc., which perform specific tasks or implement specific abstract data types.
  • The computer system/server can be implemented in a distributed cloud computing environment, in which tasks are executed by remote processing devices linked through a communication network, and program modules may be located on storage media of local or remote computing systems including storage devices.
  • Dangerous action recognition has broad application prospects in the field of vehicle safety monitoring.
  • The dangerous action recognition system can give a reminder when the driver makes a dangerous action, so as to warn against and avoid possible accidents.
  • The system can also monitor behaviors that are out of compliance or that may cause discomfort to passengers in the vehicle, and issue reminders or stop them. In addition, the monitoring of dangerous actions itself reflects the habits and preferences of drivers, which helps the system build user portraits and perform big-data analysis; it can likewise monitor drivers' emotional states, fatigue states, and behavioral habits through the identification of dangerous actions.
  • FIG. 1 is a schematic flowchart of a method for identifying dangerous actions of a vehicle occupant provided by an embodiment of the disclosure.
  • the method can be executed by any electronic device, such as a terminal device, a server, a mobile device, a vehicle-mounted device, and so on.
  • the method in this embodiment includes:
  • Step 110: Obtain at least one video stream of a person in the vehicle by using a camera device (e.g., a camera).
  • Each video stream includes at least one person in the vehicle. The embodiments of the present disclosure may use a camera device (for example, one or more cameras installed in the vehicle and aimed at the seat positions) to capture a video stream. Alternatively, one camera device may be used to collect video streams of multiple persons in the vehicle (for example, all persons in the vehicle); a camera device facing one or more rear-row areas may be set to collect images of those areas; or a camera device may be installed in front of each seat to collect a video stream for at least one person in the vehicle (for example, each person in the vehicle).
  • Such processing makes it possible to perform action recognition for each person in the vehicle separately.
  • the camera device includes but is not limited to at least one of the following: a visible light camera, an infrared camera, and a near infrared camera.
  • the visible light camera can be used to collect visible light images
  • the infrared camera can be used to collect infrared images
  • the near infrared camera can be used to collect near infrared images.
  • this step 110 may be executed by the processor calling a corresponding instruction stored in the memory, or may be executed by the video capture unit 41 operated by the processor.
  • Step 120: Perform action recognition on persons in the vehicle based on the video stream.
  • Action categories can be divided into dangerous actions and normal actions.
  • Dangerous actions need to be processed in order to eliminate possible dangers.
  • The dangerous actions include, but are not limited to, at least one of the following: distraction actions, states of discomfort, irregular behaviors, and the like.
  • Dangerous actions may be defined with the same or different requirements for ordinary non-drivers and drivers.
  • The requirements for drivers are relatively stricter, since the independence and safety of drivers need to be protected.
  • Optionally, the predetermined dangerous actions are divided into driver dangerous actions and non-driver dangerous actions; the embodiments of the present disclosure do not limit the specific way of identifying the action category.
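The optional division into driver and non-driver dangerous actions can be sketched as follows. This is an illustrative Python sketch only: the action labels come from examples elsewhere in this disclosure, but the exact sets and function names are assumptions, not part of the claimed method.

```python
# Illustrative sketch: role-dependent sets of predetermined dangerous actions.
# The action labels are examples from the disclosure; the exact sets are assumptions.
DRIVER_DANGEROUS = {
    "smoking", "making_phone_call", "drinking", "eating",
    "wearing_glasses", "feet_on_steering_wheel",
}
NON_DRIVER_DANGEROUS = {
    "smoking", "hands_out_of_vehicle", "interfering_with_driver",
}

def is_predetermined_dangerous(action: str, is_driver: bool) -> bool:
    """Check whether a recognized action counts as dangerous for this role."""
    dangerous = DRIVER_DANGEROUS if is_driver else NON_DRIVER_DANGEROUS
    return action in dangerous
```

Keeping the two sets separate reflects the stricter requirements placed on the driver: the same recognized action (e.g. making a phone call) can be dangerous for the driver yet acceptable for a passenger.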
  • this step 120 may be executed by the processor calling a corresponding instruction stored in the memory, or may be executed by the action recognition unit 42 executed by the processor.
  • Step 130: In response to the result of the action recognition belonging to a predetermined dangerous action, issue a prompt message and/or perform an operation to control the vehicle.
  • The dangerous actions in the embodiments of the present disclosure may be behaviors that create safety hazards for the persons in the vehicle or for others.
  • The predetermined dangerous actions include, but are not limited to, actions of the persons in the vehicle exhibiting at least one of the following: distraction actions, states of discomfort, irregular behaviors, and the like.
  • Distraction actions are mainly aimed at the driver: while driving the vehicle, the driver needs to concentrate, and when a distraction action occurs (such as eating or smoking), it affects the driver's attention and makes the vehicle prone to danger. The discomfort state can apply to all persons in the vehicle.
  • The embodiments of the present disclosure reduce the probability of danger by issuing prompt messages or performing corresponding operations to control the vehicle, improving the safety and/or comfort of the persons in the vehicle.
  • The presentation form of the prompt information may include, but is not limited to, at least one of the following: sound prompts, vibration prompts, light prompts, odor prompts, and the like. For example, when a person in the vehicle smokes, a voice prompt can be issued reminding that smoking is prohibited in the vehicle, reducing the harm of smoking to other persons in the vehicle. As another example, when a person in the vehicle wipes sweat, this indicates that the temperature inside the vehicle is too high, and intelligent control can lower the air-conditioning temperature to resolve the discomfort.
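The mapping from a recognized dangerous action to a prompt and/or a vehicle-control operation described above can be sketched as follows. The table entries follow the smoking and sweat-wiping examples in the text; all identifiers and the response structure are illustrative assumptions.

```python
# Illustrative sketch: mapping a recognized dangerous action to a prompt and/or
# a vehicle-control operation. Entries follow the examples in the text;
# all names are assumptions.
RESPONSES = {
    "smoking": {
        "prompt": ("sound", "No smoking in the vehicle"),  # voice reminder
        "operation": None,
    },
    "wiping_sweat": {
        "prompt": None,
        "operation": ("air_conditioner", "lower_temperature"),  # cool the cabin
    },
}

def respond(action):
    """Return (prompt, operation) for a recognized action, or (None, None)."""
    entry = RESPONSES.get(action)
    if entry is None:
        return None, None  # not a predetermined dangerous action
    return entry["prompt"], entry["operation"]
```

A table-driven design like this keeps the recognition model decoupled from the response policy, so new dangerous actions or new prompt forms (vibration, light, odor) can be added without touching the recognition code.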
  • Dangerous action recognition occupies an important position and has high application value in driver monitoring. At present, many dangerous actions commonly occur while the driver is driving, and these actions often distract the driver from driving, thereby posing certain safety risks.
  • this step 130 may be executed by the processor calling the corresponding instruction stored in the memory, or may be executed by the hazard processing unit 43 run by the processor.
  • At least one video stream of the persons in the vehicle is obtained by using a camera device, and each video stream includes at least one person in the vehicle.
  • The predetermined dangerous actions include behaviors of the persons in the vehicle exhibiting at least one of the following: distraction actions, states of discomfort, or irregular behaviors. Action recognition determines whether the persons in the vehicle have performed predetermined dangerous actions, and corresponding prompts and/or vehicle-control operations are made in response, so that the vehicle safety status is detected as early as possible and the probability of a dangerous situation is reduced.
  • The persons in the vehicle may include the driver and/or non-drivers.
  • The persons in the vehicle usually include at least one person (for example, only the driver).
  • The image or video stream is divided according to the different persons in the vehicle, so that the image or video stream corresponding to each person can be analyzed separately.
  • Since the evaluation of dangerous actions may differ between the driver and non-drivers while the vehicle is being driven, it is optional, when identifying whether an action is a predetermined dangerous action, to first determine whether the person in the vehicle is the driver or a non-driver.
  • FIG. 2 is a schematic diagram of a part of the process in an optional example of a method for identifying dangerous actions of a person in a vehicle provided by an embodiment of the disclosure.
  • step 120 includes:
  • Step 202: Detect at least one target area of a person in the vehicle in at least one frame of video image in the video stream.
  • The target area may include, but is not limited to, at least one of the following: a local area of a human face, an action interaction object, and a limb area.
  • A local area of a human face can serve as a target area, since facial actions are usually related to specific facial parts.
  • For example, the action of smoking or eating is related to the mouth, and the action of making a phone call is related to the ears. In this example, the target area includes, but is not limited to, one or any combination of the following parts: mouth, ears, nose, eyes, eyebrows.
  • The target part of the face can be determined according to requirements.
  • The target part may include one part or multiple parts.
  • Face detection technology can be used to detect the target part of the face.
  • Step 204: Crop a target image corresponding to the target area from at least one frame of video image of the video stream according to the detected target area.
  • The target area may be an area centered on the target part; for example, a face-based action may be centered on at least one part of the face.
  • The area outside the face in the video stream may include objects related to actions. For example, the action of smoking is centered on the mouth, while the cigarette may appear in areas of the image outside the face.
  • The position of the target area in at least one frame of video image can be determined according to the detection result of the target area, and the cropping size and/or cropping position of the target image can then be determined according to that position.
  • The embodiments of the present disclosure can crop the target image corresponding to the target area from the video image according to set conditions, so that the cropped target image better matches the requirements of action recognition.
  • The size of the cropped target image can be determined according to the distance between the target area and a set position on the face.
  • For example, the distance between person A's mouth and the center point of A's face can be used to determine the target image of A's mouth, and likewise the distance between person B's mouth and the center point of B's face can be used to determine the target image of B's mouth. Since the distance between the mouth and the center of the face is related to the characteristics of the face itself, the cropped target image better conforms to those characteristics.
  • Cropping the target image according to the position of the target area in the video image reduces noise and can also include a more complete image area containing the object related to the action.
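The adaptive cropping described above, where the crop size follows the distance between the target part and a set position on the face, might look like the sketch below. The `scale` factor and the square-crop choice are assumptions for illustration; the disclosure does not fix these details.

```python
import numpy as np

def crop_target_image(frame: np.ndarray, part_center, face_center, scale: float = 2.0):
    """Crop a square target image around a detected face part.

    The crop half-size is proportional to the distance between the part
    (e.g. the mouth) and the face center, so the crop adapts to each face.
    `scale` is an assumed tuning factor.
    """
    px, py = part_center
    fx, fy = face_center
    half = max(1, int(scale * np.hypot(px - fx, py - fy)))
    h, w = frame.shape[:2]
    x0, x1 = max(0, px - half), min(w, px + half)
    y0, y1 = max(0, py - half), min(h, py + half)
    return frame[y0:y1, x0:x1]
```

Because the crop radius scales with the part-to-face-center distance, a larger or closer face automatically yields a larger crop, which is the property the text uses to make the target image match the characteristics of each individual face.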
  • Step 206: Perform action recognition on the persons in the vehicle according to the target image.
  • The features of the target image can be extracted, and based on the extracted features it is determined whether a person in the vehicle performs a predetermined dangerous action.
  • The predetermined dangerous actions include, but are not limited to, actions of the persons in the vehicle exhibiting at least one of the following: distraction actions, states of discomfort, irregular behaviors, and the like.
  • Such actions may create potential safety hazards.
  • The results of action recognition can be used for safety analysis and other applications concerning the persons in the vehicle. For example, when the driver smokes in the video stream, features can be extracted from the target image of the mouth, the presence of a cigarette can be determined based on the features, and it can thereby be determined whether the driver is smoking. If the driver performs a smoking action, a safety risk can be considered to exist.
  • The target area is detected in the video stream, the target image corresponding to the target area is cropped from the video image according to the detection result, and whether a person in the vehicle performs a predetermined dangerous action is identified according to the target image.
  • The target image cropped according to the detection result of the target area can be applied to human bodies occupying different areas in different video images.
  • The scope of application of the embodiments of the present disclosure is therefore wide.
  • Using the target image as the basis for action recognition is conducive to more accurately extracting the features corresponding to dangerous actions, reduces detection interference from irrelevant areas, and improves the accuracy of action recognition. For example, when recognizing the driver's smoking action, the smoking action is closely related to the mouth area.
  • Therefore the mouth and its vicinity can be used as the mouth area for recognizing the driver's action, to confirm whether the driver smokes and to improve the accuracy of smoking-action recognition.
  • Fig. 3a is a schematic diagram of a part of the process in another optional example of a method for identifying dangerous actions of a vehicle occupant provided by an embodiment of the disclosure.
  • step 202 includes:
  • Step 302: Extract features of the person in the vehicle included in at least one frame of video image of the video stream.
  • The embodiments of the present disclosure are mainly aimed at recognizing certain dangerous actions made by persons inside the vehicle, and these dangerous actions are usually actions related to the limbs and the face.
  • The recognition of these actions cannot be achieved merely through detection of human-body key points or estimation of human-body posture.
  • The embodiments of the present disclosure extract features by performing convolution operations on the video image, and recognize actions in the video image according to the extracted features.
  • The characteristics of the above-mentioned dangerous actions are: limbs and/or local areas of the human face, together with action interaction objects. Therefore, persons in the vehicle are photographed in real time by a camera device to obtain video images including human faces, and the video images are then convolved to extract the action features.
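As a minimal illustration of extracting features from a video frame by convolution, the sketch below applies a single hand-written filter to a grayscale image. A real system would use a trained multi-layer convolutional neural network with learned filters; the Sobel kernel here is only a stand-in.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Minimal 'valid' 2-D convolution (cross-correlation, as in CNN layers),
    used here only to illustrate extracting action-related features from a frame."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Each output value is the filter response over one image patch.
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A Sobel edge-detection kernel as a stand-in for learned filters.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
```

Stacking many such filtered maps through several layers (with nonlinearities and pooling) is what lets a trained network respond to higher-level patterns such as a hand near the mouth or a cigarette, rather than just edges.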
  • Step 304: Extract a target area from at least one frame of video image based on the features.
  • The target area in this embodiment is an area that may contain actions.
  • The neural network judges whether there are dangerous actions in the video image according to the defined characteristics and the features extracted from the video image.
  • The neural networks in this embodiment are all trained; that is, the features of predetermined actions in the video image can be extracted through the neural network.
  • Optionally, the neural network delimits the feature area that includes the limbs, the local face area, and the action interaction object to obtain the target area.
  • The target area may include, but is not limited to, at least one of the following: a local area of a human face, an action interaction object, a limb area, and the like.
  • The local area of the human face includes, but is not limited to, at least one of the following: a mouth area, an ear area, an eye area, and the like.
  • The action interaction object includes, but is not limited to, at least one of the following: containers, cigarettes, mobile phones, food, tools, beverage bottles, glasses, masks, and the like.
  • The limb area includes, but is not limited to, at least one of the following: a hand area, a foot area, and the like.
  • The above dangerous actions include, but are not limited to: drinking water/beverages, smoking, making a phone call, wearing glasses, wearing a mask, putting on makeup, using tools, eating, putting the feet on the steering wheel, and the like.
  • The action characteristics of drinking water may include: the hand area, a local face area, and a water cup; the action characteristics of smoking may include: the hand area, a local face area, and a cigarette; the action characteristics of making a phone call may include: the hand area, a local face area, and a mobile phone; the action characteristics of wearing glasses may include: the hand area, a local face area, and glasses; the action characteristics of wearing a mask may include: the hand area, a local face area, and a mask; the action characteristics of putting the feet on the steering wheel may include: the foot area and the steering wheel.
  • The actions recognized in the embodiments of the present disclosure may also include fine actions related to the face or limbs.
  • Such fine actions involve at least two features from among the local face area, the action interaction object, and the limb area; for example, a local face area combined with an action interaction object.
  • For the action of making a phone call, the target area includes: a local area of the human face, a mobile phone (the action interaction object), and a hand (the limb area).
  • For the action of smoking, the target action frame may include: the mouth area and a cigarette (the action interaction object).
  • FIG. 3b is a schematic diagram of the target area extracted in the method for recognizing dangerous actions of persons in a vehicle according to an embodiment of the disclosure.
  • The method for recognizing dangerous actions of persons in the vehicle in the embodiments of the present disclosure can extract the target area from a video image in the video stream to obtain the target area used for recognizing the action.
  • In Figure 3b, the action of the person in the vehicle is smoking.
  • The target area obtained is based on the mouth area (a local area of the face) and the cigarette (the action interaction object); based on the target area obtained in the embodiment of the present disclosure, it can be confirmed that the person in the vehicle in Figure 3b is smoking.
  • By excluding noise interference from areas unrelated to the action of the person in the vehicle (such as areas unrelated to the smoking action), the accuracy of action recognition for persons in the vehicle is improved, for example the accuracy of smoking-action recognition in this embodiment.
  • the target image may also be preprocessed.
  • the target image is preprocessed through methods such as normalization and equalization; the recognition result of the preprocessed target image is more accurate.
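The normalization and equalization preprocessing mentioned above could be sketched as follows for a grayscale target image. The specific methods (histogram equalization, zero-mean/unit-variance normalization) and parameters are assumptions chosen to illustrate the step, not requirements of the disclosure.

```python
import numpy as np

def preprocess(target: np.ndarray) -> np.ndarray:
    """Equalize and normalize a grayscale uint8 target image before recognition."""
    # Histogram equalization: remap intensities through the cumulative
    # distribution so they spread over the full range.
    hist = np.bincount(target.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(1, cdf.max() - cdf.min())  # scale to [0, 1]
    equalized = cdf[target]  # per-pixel lookup through the CDF
    # Normalization: zero mean, unit variance, as expected by a recognition network.
    return (equalized - equalized.mean()) / (equalized.std() + 1e-8)
```

Equalization compensates for uneven in-cabin lighting (a common condition with visible-light, infrared, and near-infrared cameras), while normalization stabilizes the input statistics seen by the recognition network, which is why the preprocessed target image tends to yield a more accurate recognition result.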
  • the dangerous action may include, but is not limited to, at least one of the following: distracted action, uncomfortable state, irregular behavior, etc.
  • Distracted actions refer to actions that are not related to driving and affect driving concentration while the driver is driving the vehicle.
  • distracted actions include but are not limited to at least one of the following: making a phone call, drinking water, wearing sunglasses, putting on or taking off a mask, eating, etc.
  • discomfort refers to the physical discomfort of people in the car due to the influence of the environment in the car or their own reasons during the driving of the vehicle.
  • the discomfort includes but is not limited to at least one of the following: wiping sweat, rubbing eyes, yawning, etc.
  • irregular behavior refers to behaviors that are not in compliance with regulations by people in the car.
  • irregular behaviors include but are not limited to at least one of the following: smoking, sticking a hand out of the car, lying on the steering wheel, putting the feet on the steering wheel, taking both hands off the steering wheel, holding instruments, interfering with the driver, etc. Since there are many kinds of dangerous actions, when the action category of a person in the vehicle belongs to a dangerous action, it is necessary to determine which kind of dangerous action the action category belongs to; different dangerous actions can correspond to different processing methods (for example, issuing prompt information, or performing an operation to control the vehicle).
  • step 130 includes:
  • in response to the result of the action recognition belonging to a predetermined dangerous action, the danger level of the predetermined dangerous action is determined. Optionally, the danger level of the predetermined dangerous action is determined according to preset rules and/or a preset correspondence, and how to operate is then determined according to the danger level. For example, operations of different levels are performed according to the danger level of the action of the person in the vehicle: if the dangerous action is caused by driver fatigue or physical discomfort, prompt information needs to be issued so that the driver can adjust and rest in time; when the in-vehicle environment makes the driver uncomfortable, the vehicle's ventilation system or air-conditioning system can be adjusted to a certain degree.
  • the set risk levels include elementary, intermediate, and advanced. In this case, issuing the corresponding prompt information according to the risk level, and/or performing the operation corresponding to the risk level and controlling the vehicle according to the operation, includes:
  • in response to the danger level being elementary, issuing prompt information; in response to the danger level being intermediate, performing the operation corresponding to the danger level and controlling the vehicle according to the operation; in response to the danger level being advanced, issuing prompt information and at the same time performing the operation corresponding to the danger level and controlling the vehicle according to the operation.
  • the risk level is set to 3 levels in this example.
  • the embodiments of the present disclosure can also set the risk level in more detail, including more levels.
  • for example, the risk level includes a first level, a second level, a third level, and a fourth level, with each level corresponding to a different degree of danger. According to the different danger levels, prompt information is issued and/or operations corresponding to the danger levels are performed and the vehicle is controlled according to the operations. By performing different operations for different danger levels, the issuing of prompt information and the control of operations can be made more flexible and adapted to different usage requirements.
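The three-level dispatch described above can be sketched as a small routing function. This is only an illustration of the described logic; the level names and response strings are hypothetical placeholders, not the patent's implementation.

```python
from enum import Enum

class RiskLevel(Enum):
    ELEMENTARY = 1
    INTERMEDIATE = 2
    ADVANCED = 3

def handle_dangerous_action(level: RiskLevel):
    """Return (prompt_message, vehicle_operation) for a recognized
    dangerous action; either element may be None."""
    if level is RiskLevel.ELEMENTARY:
        return ("prompt", None)              # prompt information only
    if level is RiskLevel.INTERMEDIATE:
        return (None, "control_vehicle")     # operation only
    return ("prompt", "control_vehicle")     # advanced: prompt and operation
```

A finer-grained scheme (four or more levels, as the text allows) would simply extend the enum and the mapping.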
  • determining the risk level of a predetermined dangerous action includes:
  • the dangerous action obtained by the action recognition is further abstractly analyzed; meanwhile, according to the duration of the action, or the prior probability of a dangerous situation occurring, whether the real intention of the person in the vehicle is to perform a dangerous action can be output.
  • the embodiment of the present disclosure measures the duration of the action through the frequency and/or duration of the predetermined dangerous action in the video stream. For example, when the driver just rubs his eyes quickly, it can be considered a quick adjustment and there is no need to raise an alarm; but if the driver rubs his eyes for a long time and actions such as yawning occur, the driver can be considered relatively fatigued and should be reminded.
  • the alarm intensity for smoking can be lower than that for actions such as lying on the steering wheel or making a phone call.
  • the result of the action recognition includes the duration of the action,
  • and the warning condition includes: recognizing that the duration of the action exceeds a duration threshold.
  • the result of the action recognition may include the duration of the action.
  • when the duration of the action exceeds the duration threshold, it can be considered that performing the action distracts much of the attention of the action's subject; the action can be considered dangerous, and warning information needs to be sent. For example, if the duration of the driver's smoking action exceeds 3 seconds, the smoking action can be considered a dangerous action that will affect the driver's driving, and warning information needs to be sent to the driver.
  • the sending conditions of the prompt information and/or the control conditions of the vehicle can be adjusted, making the sending of prompt information and the control of operations more flexible and more adaptable to different usage requirements.
  • the result of the action recognition includes the duration of the action,
  • and the condition for belonging to the predetermined dangerous action includes: recognizing that the duration of the action exceeds the duration threshold.
  • the result of the action recognition includes the number of occurrences of the action,
  • and the condition for belonging to the predetermined dangerous action includes: recognizing that the number of occurrences of the action exceeds a count threshold.
  • when the number of occurrences exceeds the count threshold, it can be considered that the subject performs the action frequently, distracting much attention; the action can be considered dangerous, and warning information needs to be sent.
  • for example, if the number of the driver's smoking actions exceeds 5, the smoking action can be considered a dangerous action that will affect the driver's driving, and prompt information needs to be sent to the driver.
  • the result of the action recognition includes the duration of the action and the number of occurrences of the action,
  • and the conditions for belonging to the predetermined dangerous action include: recognizing that the duration of the action exceeds the duration threshold and that the number of occurrences exceeds the count threshold.
  • when the duration of the action exceeds the duration threshold and the number of occurrences exceeds the count threshold, it can be considered that the subject performs the action frequently and the action lasts a long time, distracting much attention; the action can be considered dangerous, and prompt information needs to be sent and/or the vehicle controlled. This makes vehicle control more flexible and adapted to different usage requirements.
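The duration and count conditions above can be sketched as an aggregation over per-frame detections of one action. This is an illustrative sketch under the assumption that the recognizer emits a boolean flag per frame; the function name and parameters are hypothetical.

```python
def is_predetermined_dangerous(frame_flags, fps, duration_threshold_s, count_threshold):
    """Decide whether per-frame detections amount to a predetermined
    dangerous action: the longest continuous run must exceed the
    duration threshold AND the number of separate occurrences must
    exceed the count threshold.

    frame_flags: list of bools, True if the action is detected in that frame.
    fps: frame rate of the video stream, used to convert frames to seconds.
    """
    longest = run = occurrences = 0
    prev = False
    for flag in frame_flags:
        if flag:
            run += 1
            longest = max(longest, run)
            if not prev:
                occurrences += 1  # a new occurrence of the action starts here
        else:
            run = 0
        prev = flag
    longest_s = longest / fps
    return longest_s > duration_threshold_s and occurrences > count_threshold
```

For a duration-only or count-only condition (the other optional examples above), the final conjunction would simply drop one of its two clauses.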
  • the embodiments of the present disclosure combine the action category and the category of the person in the vehicle to prompt or operate intelligently, so as to improve driving safety without degrading the user experience through frequent alarms.
  • it includes: in response to the person in the vehicle being the driver, issuing corresponding first prompt information according to the predetermined dangerous action and/or controlling the vehicle to perform a corresponding first predetermined operation; and/or,
  • in response to the person in the vehicle being a non-driver, issuing corresponding second prompt information and/or performing a corresponding second predetermined operation according to the predetermined dangerous action.
  • the people in the vehicle can be divided into two categories, driver and non-driver, and different dangerous actions are set for the driver and non-drivers to achieve flexible alarms and operations.
  • the driver's distraction actions may include but are not limited to at least one of the following: making a phone call, drinking water, wearing sunglasses, wearing a mask, eating, etc.;
  • the driver's discomfort state may include but is not limited to at least one of the following: wiping sweat, rubbing eyes, yawning, etc.;
  • the driver's irregular behaviors may include but are not limited to at least one of the following: smoking, sticking a hand out of the car, lying on the steering wheel, putting the feet on the steering wheel, taking both hands off the steering wheel, etc.
  • the non-driver's discomfort state may include but is not limited to at least one of the following: wiping sweat, etc.; the non-driver's irregular behaviors may include but are not limited to at least one of the following: smoking, sticking a hand out of the car, holding a device, interfering with the driver, etc.
  • the embodiments of the present disclosure also set different prompt information and predetermined operations for the driver and non-drivers to realize flexible safety control of the vehicle. For example, when the driver takes his hands off the steering wheel, strong prompt information (e.g., corresponding to the first prompt information) can be issued while automatic driving is performed (e.g., corresponding to the first predetermined operation), improving the safety of vehicle driving. For non-drivers, for example, when a non-driver makes a sweat-wiping action, weaker prompt information (e.g., corresponding to the second prompt information) is issued, and/or an operation of adjusting the temperature of the air conditioner in the vehicle is performed (e.g., corresponding to the second predetermined operation).
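The role-dependent responses described above can be sketched as a small routing function. The action names and response strings below are hypothetical placeholders chosen only to mirror the examples in the text (hands off the wheel for the driver, sweat-wiping for a non-driver).

```python
def respond_to_action(is_driver: bool, action: str):
    """Route a predetermined dangerous action to role-specific
    responses: (prompt, vehicle_operation), either of which may be None."""
    if is_driver:
        if action == "hands_off_wheel":
            # first prompt information + first predetermined operation
            return ("strong_prompt", "engage_autopilot")
        return ("strong_prompt", None)
    if action == "wipe_sweat":
        # second prompt information + second predetermined operation
        return ("weak_prompt", "lower_ac_temperature")
    return ("weak_prompt", None)
```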
  • FIG. 4 is a schematic structural diagram of the device for identifying dangerous actions of persons in a vehicle provided by an embodiment of the disclosure.
  • the device of this embodiment can be used to implement the foregoing method embodiments of the present disclosure. As shown in Figure 4, the device of this embodiment includes:
  • the video acquisition unit 41 is configured to obtain at least one video stream of the personnel in the vehicle by using the camera device.
  • each video stream includes at least one person in the vehicle.
  • the action recognition unit 42 is configured to perform action recognition on persons in the vehicle based on the video stream.
  • the dangerous processing unit 43 is configured to respond to the result of the action recognition belonging to a predetermined dangerous action, send out prompt information and/or perform an operation to control the vehicle.
  • the predetermined dangerous actions include the actions of the people in the vehicle as at least one of the following: distracted actions, uncomfortable states, irregular behaviors, etc.
  • a device for identifying dangerous actions of persons in a vehicle determines, through action recognition, whether a person in the vehicle has made a predetermined dangerous action, and makes corresponding prompts and/or performs operations for the predetermined dangerous action to control the vehicle, realizing early detection of the vehicle's safety status to reduce the probability of dangerous situations.
  • the action recognition unit 42 is configured to detect at least one target area included in the person in the vehicle in at least one frame of video image of the video stream; crop a target image corresponding to the target area from the at least one frame of video image of the video stream according to the detected target area; and recognize the action of the person in the vehicle according to the target image.
  • the target area is identified in the video stream, the target image corresponding to the target area is intercepted in the video image according to the detection result of the target area, and whether a person in the vehicle performs a predetermined dangerous action is identified according to the target image.
  • the target image intercepted according to the detection result of the target area can be applied to human bodies with different areas in different video images. The scope of application of the embodiments of the present disclosure is wide.
  • when detecting at least one target area included in the person in the vehicle in at least one frame of video image of the video stream, the action recognition unit 42 is configured to extract features of the person in the vehicle included in the at least one frame of video image of the video stream;
  • and extract the target area from the at least one frame of video image based on the features, where the target area includes but is not limited to at least one of the following: a partial face area, an action interaction object, a limb area, and the like.
  • the partial area of the human face includes but is not limited to at least one of the following: a mouth area, an ear area, an eye area, and the like.
  • the action interaction object includes but is not limited to at least one of the following: containers, cigarettes, mobile phones, food, tools, beverage bottles, glasses, masks, etc.
  • the distraction action includes but is not limited to at least one of the following: making a phone call, drinking water, wearing sunglasses, wearing a mask, eating, etc.; and/or,
  • the uncomfortable state includes but is not limited to at least one of the following: wiping sweat, rubbing eyes, yawning, etc.; and/or,
  • Irregular behaviors include, but are not limited to, at least one of the following: smoking, sticking a hand out of the car, lying on the steering wheel, putting the feet on the steering wheel, taking the hands off the steering wheel, holding instruments, interfering with the driver, etc.
  • the risk processing unit 43 includes:
  • the level determination module is used to determine the danger level of the predetermined dangerous action in response to the result of the action recognition being a predetermined dangerous action
  • the operation processing module is used to issue corresponding prompt information according to the danger level, and/or execute the operation corresponding to the danger level and control the vehicle according to the operation.
  • the danger level of the predetermined dangerous action is determined. Optionally, the danger level of the predetermined dangerous action is determined according to preset rules and/or a preset correspondence, and how to operate is then determined according to the danger level. For example, operations of different levels are performed according to the danger level of the action of the person in the vehicle: if the dangerous action is caused by driver fatigue or physical discomfort, prompt information needs to be issued so that the driver can adjust and rest in time; when the in-vehicle environment makes the driver uncomfortable, the vehicle's ventilation system or air-conditioning system can be adjusted to a certain degree.
  • the risk levels include elementary, intermediate and advanced
  • the operation processing module is used to issue prompt information in response to the danger level being elementary; in response to the danger level being intermediate, perform the operation corresponding to the danger level and control the vehicle according to the operation; and in response to the danger level being advanced, issue prompt information and at the same time perform the operation corresponding to the danger level and control the vehicle according to the operation.
  • the level determination module is configured to obtain the frequency and/or duration of the predetermined dangerous action in the video stream, and determine the risk level of the predetermined dangerous action based on the frequency and/or duration.
  • the result of the action recognition includes the duration of the action,
  • and the condition for belonging to the predetermined dangerous action includes: recognizing that the duration of the action exceeds the duration threshold.
  • the dangerous action obtained by the action recognition is further abstractly analyzed; meanwhile, according to the duration of the action, or the prior probability of a dangerous situation occurring, whether the real intention of the person in the vehicle is to perform a dangerous action can be output.
  • the embodiment of the present disclosure realizes the measurement of the duration of the action through the frequency and/or duration of the predetermined dangerous action in the video stream.
  • the result of the action recognition includes the number of occurrences of the action,
  • and the condition for belonging to the predetermined dangerous action includes: recognizing that the number of occurrences of the action exceeds the count threshold.
  • the result of the action recognition includes the duration of the action and the number of occurrences of the action,
  • and the conditions for belonging to the predetermined dangerous action include: recognizing that the duration of the action exceeds the duration threshold and that the number of occurrences exceeds the count threshold.
  • the persons in the vehicle include the driver and/or non-driver of the vehicle.
  • the hazard processing unit 43 is configured to: in response to the person in the vehicle being the driver, issue corresponding first prompt information according to the predetermined dangerous action and/or control the vehicle to perform a corresponding first predetermined operation; and/or, in response to the person in the vehicle being a non-driver, issue corresponding second prompt information and/or perform a corresponding second predetermined operation according to the predetermined dangerous action.
  • an electronic device including a processor, and the processor includes the device for identifying dangerous actions of a person in a vehicle as provided in any of the above embodiments.
  • an electronic device including: a memory for storing executable instructions;
  • the processor is configured to communicate with the memory to execute executable instructions to complete the operation of the method for identifying dangerous actions of persons in a vehicle provided by any of the above embodiments.
  • a computer-readable storage medium for storing computer-readable instructions,
  • where when the instructions are executed, the operations of the method for identifying dangerous actions of persons in a vehicle provided by any of the above embodiments are performed.
  • a computer program product, which includes computer-readable code;
  • when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the method for identifying dangerous actions of persons in a vehicle provided by any of the above embodiments.
  • the embodiments of the present disclosure also provide an electronic device, which may be a mobile terminal, a personal computer (PC), a tablet computer, a server, etc., for example.
  • FIG. 5 it shows a schematic structural diagram of an electronic device 500 suitable for implementing a terminal device or a server of the embodiments of the present disclosure:
  • the electronic device 500 includes one or more processors and a communication unit.
  • the one or more processors are, for example: one or more central processing units (CPU) 501, and/or one or more image processors (acceleration units) 513, etc.
  • the processors may execute various appropriate actions and processes according to executable instructions stored in a read-only memory (ROM) 502 or executable instructions loaded from a storage part 508 into a random access memory (RAM) 503.
  • the communication unit 512 may include but is not limited to a network card, and the network card may include but is not limited to an IB (Infiniband) network card.
  • the processor can communicate with the read-only memory 502 and/or the random access memory 503 to execute executable instructions, is connected to the communication unit 512 via the bus 504, and communicates with other target devices via the communication unit 512, thereby completing operations corresponding to any of the methods provided by the embodiments of the present disclosure,
  • for example: using a camera device to obtain at least one video stream of persons in the car, where each video stream includes at least one person in the car; performing action recognition on the persons in the car based on the video stream; and in response to the result of the action
  • recognition belonging to a predetermined dangerous action, issuing prompt information and/or performing an operation to control the vehicle;
  • where the predetermined dangerous action includes the action of the person in the vehicle exhibiting at least one of the following: a distracted action, an uncomfortable state, or an irregular behavior.
  • RAM 503 can also store various programs and data required for device operation.
  • the CPU 501, ROM 502, and RAM 503 are connected to each other through a bus 504.
  • the ROM 502 is an optional module.
  • when the RAM 503 stores executable instructions, or executable instructions are written into the ROM 502 at runtime, the executable instructions cause the central processing unit 501 to perform operations corresponding to the aforementioned method.
  • An input/output (I/O) interface 505 is also connected to the bus 504.
  • the communication unit 512 can be integrated, or can be configured to have multiple sub-modules (for example, multiple IB network cards) and be on the bus link.
  • the following components are connected to the I/O interface 505: an input part 506 including a keyboard, a mouse, and the like; an output part 507 including a cathode-ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker; a storage part 508 including a hard disk and the like; and a communication part 509 including a network interface card such as a LAN card or a modem. The communication part 509 performs communication processing via a network such as the Internet.
  • a drive 510 is also connected to the I/O interface 505 as needed.
  • a removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is installed on the drive 510 as needed, so that the computer program read therefrom is installed into the storage portion 508 as needed.
  • the architecture shown in Figure 5 is only an optional implementation.
  • in specific practice, the number and types of components in Figure 5 can be selected, deleted, added, or replaced according to actual needs; different functional components can also be arranged separately or in an integrated manner.
  • for example, the acceleration unit 513 and the CPU 501 can be arranged separately, or the acceleration unit 513 can be integrated on the CPU 501; the communication unit can be arranged separately, or can be integrated on the CPU 501 or the acceleration unit 513; and so on.
  • the process described above with reference to the flowchart can be implemented as a computer software program.
  • the embodiments of the present disclosure include a computer program product, which includes a computer program tangibly contained on a machine-readable medium.
  • the computer program includes program code for executing the method shown in the flowchart.
  • the program code may include instructions corresponding to the method steps provided by the embodiments of the present disclosure, for example: using a camera device to obtain at least one video stream of persons in the car, where each video stream includes at least one person in the car; performing action recognition on the persons in the car based on the video stream; and in response to the result of the action recognition belonging to a predetermined dangerous action, issuing prompt information and/or performing an operation to control the vehicle, where the predetermined dangerous action includes at least one of the following: a distracted action, an uncomfortable state, or an irregular behavior.
  • the computer program may be downloaded and installed from the network through the communication part 509, and/or installed from the removable medium 511.
  • when the computer program is executed by the central processing unit (CPU) 501, the operations of the above functions defined in the method of the present disclosure are performed.
  • the method and apparatus of the present disclosure may be implemented in many ways.
  • the method and apparatus of the present disclosure can be implemented by software, hardware, firmware or any combination of software, hardware, and firmware.
  • the above-mentioned order of the steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above, unless specifically stated otherwise.
  • the present disclosure may also be implemented as programs recorded in a recording medium, and these programs include machine-readable instructions for implementing the method according to the present disclosure.
  • the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.


Abstract

A method and apparatus for recognizing dangerous actions of persons in a vehicle, an electronic device, and a storage medium. The method includes: obtaining at least one video stream of persons in a vehicle by using a vehicle-mounted camera device (110), where each video stream includes at least one person in the vehicle; performing action recognition on the persons in the vehicle based on the video stream (120); and in response to a result of the action recognition belonging to a predetermined dangerous action, issuing prompt information and/or performing an operation to control the vehicle (130).

Description

Method and apparatus for recognizing dangerous actions of persons in a vehicle, electronic device, and storage medium

This application claims priority to Chinese Patent Application No. CN 201910152525.X, filed with the Chinese Patent Office on February 28, 2019 and entitled "Method and apparatus for recognizing dangerous actions of persons in a vehicle, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.

Technical Field

The present disclosure relates to computer vision technology, and in particular to a method and apparatus for recognizing dangerous actions of persons in a vehicle, an electronic device, and a storage medium.

Background

With the rapid development of artificial intelligence, various AI technologies have been put into practice, and the current market demand for driver monitoring is increasingly urgent. The main functional modules of driver monitoring can roughly be summarized as face recognition, fatigue detection, and similar modules. By monitoring the driver's state, danger signals can be discovered in time, and possible dangers can be prevented and handled in advance to improve driving safety.

Summary

Embodiments of the present disclosure provide a technique for recognizing dangerous actions of persons in a vehicle.
According to one aspect of the embodiments of the present disclosure, a method for recognizing dangerous actions is provided, including:

obtaining at least one video stream of persons in a vehicle by using a camera device, where each video stream includes at least one person in the vehicle;

performing action recognition on the persons in the vehicle based on the video stream; and

in response to a result of the action recognition belonging to a predetermined dangerous action, issuing prompt information and/or performing an operation to control the vehicle, where the predetermined dangerous action includes the action of the person in the vehicle exhibiting at least one of the following: a distracted action, an uncomfortable state, or an irregular behavior.

According to another aspect of the embodiments of the present disclosure, an apparatus for recognizing dangerous actions of persons in a vehicle is provided, including:

a video acquisition unit configured to obtain at least one video stream of persons in a vehicle by using a camera device, where each video stream includes at least one person in the vehicle;

an action recognition unit configured to perform action recognition on the persons in the vehicle based on the video stream; and

a danger processing unit configured to, in response to a result of the action recognition belonging to a predetermined dangerous action, issue prompt information and/or perform an operation to control the vehicle, where the predetermined dangerous action includes the action of the person in the vehicle exhibiting at least one of the following: a distracted action, an uncomfortable state, or an irregular behavior.

According to yet another aspect of the embodiments of the present disclosure, an electronic device is provided, including a processor, where the processor includes the apparatus for recognizing dangerous actions of persons in a vehicle according to any one of the above embodiments.

According to still another aspect of the embodiments of the present disclosure, an electronic device is provided, including: a memory configured to store executable instructions;

and a processor configured to communicate with the memory to execute the executable instructions so as to complete the operations of the method for recognizing dangerous actions of persons in a vehicle according to any one of the above embodiments.

According to a further aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, configured to store computer-readable instructions, where when the instructions are executed, the operations of the method for recognizing dangerous actions of persons in a vehicle according to any one of the above embodiments are performed.

According to a further aspect of the embodiments of the present disclosure, a computer program product is provided, including computer-readable code, where when the computer-readable code runs on a device, a processor in the device executes instructions for implementing the method for recognizing dangerous actions of persons in a vehicle according to any one of the above embodiments.

Based on the method and apparatus for recognizing dangerous actions of persons in a vehicle, the electronic device, and the storage medium provided by the above embodiments of the present disclosure, at least one video stream of persons in a vehicle is obtained by using a camera device, where each video stream includes at least one person in the vehicle; action recognition is performed on the persons in the vehicle based on the video stream; and in response to a result of the action recognition belonging to a predetermined dangerous action, prompt information is issued and/or an operation is performed to control the vehicle, where the predetermined dangerous action includes the action of the person in the vehicle exhibiting at least one of the following: a distracted action, an uncomfortable state, or an irregular behavior. Whether a person in the vehicle has made a predetermined dangerous action is determined through action recognition, and a corresponding prompt and/or operation is made for the predetermined dangerous action to control the vehicle, so that the safety status of the vehicle is discovered as early as possible to reduce the probability of dangerous situations.

The technical solutions of the present disclosure are further described in detail below through the accompanying drawings and embodiments.
Brief Description of the Drawings

The accompanying drawings, which constitute a part of the specification, describe the embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.

The present disclosure can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:

FIG. 1 is a schematic flowchart of a method for recognizing dangerous actions of persons in a vehicle provided by an embodiment of the present disclosure.

FIG. 2 is a partial schematic flowchart of an optional example of the method for recognizing dangerous actions of persons in a vehicle provided by an embodiment of the present disclosure.

FIG. 3a is a partial schematic flowchart of another optional example of the method for recognizing dangerous actions of persons in a vehicle provided by an embodiment of the present disclosure.

FIG. 3b is a schematic diagram of a target area extracted in the method for recognizing dangerous actions of persons in a vehicle according to an embodiment of the present disclosure.

FIG. 4 is a schematic structural diagram of an apparatus for recognizing dangerous actions of persons in a vehicle provided by an embodiment of the present disclosure.

FIG. 5 is a schematic structural diagram of an electronic device suitable for implementing a terminal device or a server of the embodiments of the present disclosure.
Detailed Description

Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of components and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure.

Meanwhile, it should be understood that, for ease of description, the sizes of the various parts shown in the accompanying drawings are not drawn according to actual proportional relationships.

The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the present disclosure or its application or use.

Techniques, methods, and devices known to a person of ordinary skill in the related art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.

It should be noted that similar reference numerals and letters denote similar items in the following accompanying drawings; therefore, once an item is defined in one accompanying drawing, it does not need to be further discussed in subsequent accompanying drawings.

The embodiments of the present disclosure may be applied to a computer system/server, which can operate together with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations suitable for use with the computer system/server include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, small computer systems, large computer systems, distributed cloud computing technology environments including any of the above systems, and the like.

The computer system/server may be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules may include routines, programs, object programs, components, logic, data structures, and the like, which perform specific tasks or implement specific abstract data types. The computer system/server may be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices linked through a communication network. In the distributed cloud computing environment, program modules may be located on local or remote computing system storage media including storage devices.
Dangerous action recognition has broad application prospects in the field of in-vehicle safety monitoring. First, a dangerous action recognition system can give a prompt when the driver makes a dangerous action, thereby warning of and avoiding possible accidents. Furthermore, the system can monitor behaviors that are not in compliance with regulations or that may cause discomfort to passengers in the vehicle, and remind and stop them. Meanwhile, the monitoring of dangerous actions itself reflects some habits and preferences of drivers, which helps the system build user profiles and perform big-data analysis; the driver's emotional state, fatigue state, and behavioral habits can also be monitored through dangerous action recognition.
FIG. 1 is a schematic flowchart of a method for recognizing dangerous actions of persons in a vehicle provided by an embodiment of the present disclosure. The method may be performed by any electronic device, for example, a terminal device, a server, a mobile device, a vehicle-mounted device, and so on. As shown in FIG. 1, the method of this embodiment includes:

Step 110: obtain at least one video stream of persons in a vehicle by using a camera device (e.g., a camera).

Each video stream includes at least one person in the vehicle. In the embodiments of the present disclosure, images of persons in the vehicle may be collected by a camera device (for example, one or more cameras arranged inside the vehicle to photograph the vehicle seat positions) to obtain video streams. Optionally, a video stream of multiple persons in the whole vehicle (e.g., all persons in the vehicle) may be collected by one camera device; or a camera device facing one or more rear-seat areas may be arranged in the vehicle for image collection; or a camera device may be arranged in front of each seat to collect a video stream for at least one person in the vehicle (e.g., each person in the vehicle) respectively. By processing the collected video streams, action recognition can be performed on the persons in the vehicle respectively.

In practical applications, there may also be cases where video streams of persons in a vehicle are collected based on a camera device outside the vehicle (e.g., a camera installed on the road).

Optionally, the camera device includes, but is not limited to, at least one of the following: a visible-light camera, an infrared camera, or a near-infrared camera, where the visible-light camera may be used to collect visible-light images, the infrared camera may be used to collect infrared images, and the near-infrared camera may be used to collect near-infrared images.

In an optional example, step 110 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a video acquisition unit 41 run by the processor.

Step 120: perform action recognition on the persons in the vehicle based on the video stream.

During the driving of the vehicle, if one or more of the persons in the vehicle make dangerous actions, the vehicle may be endangered; in particular, if the driver makes some dangerous actions, the whole vehicle will be endangered, putting the vehicle and the persons in it at risk. Therefore, the actions of the persons in the vehicle need to be recognized to ensure the safety of the vehicle. Some actions can be determined based on a single image frame in the video stream, while some actions can only be recognized from multiple consecutive frames. Therefore, the embodiments of the present disclosure recognize actions through the video stream to reduce misjudgments and improve the accuracy of action recognition.

Optionally, action categories may be divided into dangerous actions and normal actions, and dangerous actions need to be handled to eliminate possible dangers, where the dangerous actions include, but are not limited to, at least one of the following: distracted actions, uncomfortable states, irregular behaviors, and the like. The requirements regarding dangerous actions may be the same or different for ordinary non-drivers and drivers; for example, the requirements for the driver are relatively stricter. Meanwhile, the independence and safety of the driver need to be protected; for example, the predetermined dangerous actions may be divided into driver dangerous actions and non-driver dangerous actions. The embodiments of the present disclosure do not limit the specific manner of recognizing the action category.

In an optional example, step 120 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by an action recognition unit 42 run by the processor.

Step 130: in response to a result of the action recognition belonging to a predetermined dangerous action, issue prompt information and/or perform an operation to control the vehicle.

A dangerous action in the embodiments of the present disclosure may be a behavior that poses a safety hazard to the person in the vehicle himself or herself or to others. Optionally, the predetermined dangerous action in the embodiments of the present disclosure includes, but is not limited to, the action of the person in the vehicle exhibiting at least one of the following: a distracted action, an uncomfortable state, an irregular behavior, and the like. Optionally, distracted actions are mainly directed at the driver: the driver needs to concentrate while driving the vehicle, and when a distracted action (e.g., eating, smoking, and the like) occurs, the driver's attention is affected and the vehicle becomes prone to danger. The uncomfortable state may apply to all persons in the vehicle: when a person in the vehicle is in an uncomfortable state, based on personal-safety considerations, some relatively dangerous situations need to be handled in time, for example, the driver frequently yawning, or a passenger wiping sweat. An irregular behavior may be a behavior that does not comply with safe-driving regulations, or a behavior that may pose a danger to the driver or other persons in the vehicle, and the like. To overcome the adverse effects of predetermined dangerous actions, the embodiments of the present disclosure reduce the probability of danger and improve the safety and/or comfort of the persons in the vehicle by issuing prompt information or performing a corresponding operation to control the vehicle.

Optionally, the form of the prompt information may include, but is not limited to, at least one of the following: sound prompt information, vibration prompt information, light prompt information, smell prompt information, and the like. For example, when a person in the vehicle smokes, sound prompt information may be issued to indicate that smoking is not allowed in the vehicle, so as to reduce the danger smoking poses to other persons in the vehicle. For another example, when a person in the vehicle wipes sweat, it indicates that the temperature in the vehicle is too high, and the air-conditioning temperature in the vehicle can be lowered through intelligent control to resolve the discomfort of the persons in the vehicle.

Dangerous action recognition occupies an important position and has high application value in driver monitoring. At present, many dangerous actions commonly occur while drivers are driving; these actions often distract drivers from driving, thereby posing certain safety hazards.

In an optional example, step 130 may be performed by a processor invoking corresponding instructions stored in a memory, or may be performed by a danger processing unit 43 run by the processor.

Based on the method for recognizing dangerous actions of persons in a vehicle provided by the above embodiments of the present disclosure, at least one video stream of persons in a vehicle is obtained by using a camera device, where each video stream includes at least one person in the vehicle; action recognition is performed on the persons in the vehicle based on the video stream; and in response to a result of the action recognition belonging to a predetermined dangerous action, prompt information is issued and/or an operation is performed to control the vehicle, where the predetermined dangerous action includes the action of the person in the vehicle exhibiting at least one of the following: a distracted action, an uncomfortable state, or an irregular behavior. Whether a person in the vehicle has made a predetermined dangerous action is determined through action recognition, and a corresponding prompt and/or operation is made for the predetermined dangerous action to control the vehicle, so that the safety status of the vehicle is discovered as early as possible to reduce the probability of dangerous situations.
Optionally, the persons in the vehicle may include a driver and/or non-drivers, and there is usually at least one person in the vehicle (for example, only the driver). In order to recognize the actions of each person in the vehicle separately, after the image or video stream is obtained, optionally, the image or video stream is segmented by different persons in the vehicle according to different positions (for example, seat positions), so that the image or video stream corresponding to each person in the vehicle is analyzed separately. Since the evaluation of dangerous actions may be different for the driver and non-drivers during the driving of the vehicle, when recognizing whether an action is a predetermined dangerous action, optionally, it is first determined whether the person in the vehicle is the driver or a non-driver.
图2为本公开实施例提供的车内人员危险动作识别方法的一个可选示例中部分流程示意图。如图2所示,步骤120包括:
步骤202,在视频流的至少一帧视频图像中检测车内人员包括的至少一个目标区域。
在一种可能的实现方式中,为了实现动作识别,目标区域可以包括但不限于以下至少之一:人脸局部区域、动作交互物和肢体区域等。例如,以人脸局部区域为目标区域时,脸部的动作通常与人脸中的五官相关,例如,抽烟或进食的动作与嘴部相关、打电话的动作与耳部相关;在该例子中目标区域包括但不限于以下部位中的其中一种或任意组合:嘴部、耳部、鼻部、眼部、眉部。可选地,可以根据需求确定人脸上的目标部位。目标部位可以包括一个部位或多个部位。可以利用人脸检测技术检测出人脸中的目标部位。
步骤204,根据检测获得的目标区域从视频流的至少一帧视频图像中截取与目标区域对应的目标图像。
在一种可能的实现方式中,目标区域可以是以目标部位为中心的一定区域,例如,基于脸部的动作可以以脸部的至少一个部位为中心。在视频流中人脸外的区域中可以包括与动作相关的物体。例如,抽烟的动作以嘴部为中心,烟可以出现在检测图像中人脸以外的区域中。
在一种可能的实现方式中,可以根据目标区域的检测结果,确定目标区域在至少一帧视频图像中的位置,可以根据目标区域在至少一帧视频图像中的位置确定目标图像的截取尺寸和/或截取位置。本公开实施例可以根据设定的条件在视频图像中截取与目标区域对应的目标图像,以使截取到的目标图像更加符合动作识别需求。例如,可以根据目标区域与人脸中的设定位置之间的距离,确定所截取的目标图像的大小。例如,利用人物A的嘴部与A的人脸中心点之间的距离,确定人物A的嘴部的目标图像,同样利用人物B的嘴部与B的人脸中心点之间的距离,确定人物B的嘴部的目标图像。由于嘴部与人脸中心之间的距离与人脸自身的特征相关,可以使得截取到的目标图像更加符合人脸自身的特征。根据目标区域在视频图像中的位置截取得到的目标图像,减少了噪声,同时还可以包括更加完整的与动作相关的物体所在的图像区域。
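上述"根据目标区域与人脸中心之间的距离确定截取尺寸"的思路,可用如下 Python 示意代码表示(其中 `crop_box_for_target`、`scale` 等名称与默认值均为说明用的假设,并非本公开限定的实现):

```python
from dataclasses import dataclass

@dataclass
class Box:
    x1: int
    y1: int
    x2: int
    y2: int

def crop_box_for_target(target_center, face_center, scale=2.0):
    # 目标部位(如嘴部)中心与人脸中心的距离与人脸在图像中的大小正相关,
    # 以该距离乘以比例系数 scale 作为截取框的边长, 从而适配不同大小的人脸
    tx, ty = target_center
    fx, fy = face_center
    dist = ((tx - fx) ** 2 + (ty - fy) ** 2) ** 0.5
    half = max(1, int(dist * scale / 2))
    return Box(tx - half, ty - half, tx + half, ty + half)
```

例如,人脸较大时嘴部与人脸中心的距离也较大,截取框随之变大,使同一套参数可同时适用于画面中面积大小不同的人脸。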
步骤206,根据目标图像对车内人员进行动作识别。
在一种可能的实现方式中,可以提取目标图像的特征,并根据提取到的特征确定车内人员是否执行预定的危险动作。
在一种可能的实现方式中,预定的危险动作包括但不限于车内人员的动作表现为以下至少之一:分心动作、不适状态、不规范行为等。当车内人员在执行预定的危险动作时,可能产生安全隐患。可以利用动作识别的结果,对车内人员进行安全分析等应用。例如,当视频流中驾驶员有抽烟的动作时,可以通过提取嘴部目标图像中的特征,判断其中是否存在烟的特征,从而确定驾驶员是否抽烟,如果驾驶员有抽烟动作,可以认为存在安全隐患。
在本实施例中,在视频流中识别目标区域,根据目标区域的检测结果在视频图像中截取与目标区域对应的目标图像,并根据目标图像识别车内人员是否执行预定的危险动作。根据目标区域的检测结果截取到的目标图像,可以适用于不同的视频图像中面积大小不同的人体。本公开实施例的适用范围广。本公开实施例以目标图像作为动作识别的基础,有利于更准确地提取危险动作相应的特征,可以减少无关区域带来的检测干扰,提高动作识别的准确性,例如,对驾驶员吸烟动作进行识别时,吸烟动作与嘴部区域关系较大,可将嘴部及其附近区域作为目标区域对驾驶员动作进行识别,以确认驾驶员是否吸烟,提高吸烟动作识别的准确性。
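步骤202至步骤206所述的"检测目标区域—截取目标图像—动作识别"流程,可用如下示意代码概括(检测函数与分类函数以参数形式注入,函数名与数据表示均为说明用的假设):

```python
def crop(frame, box):
    # 从以二维列表表示的帧中截取 (x1, y1, x2, y2) 区域作为目标图像
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in frame[y1:y2]]

def recognize_actions(frames, detect_regions, classify):
    # 三步流程: 检测目标区域 -> 截取目标图像 -> 对目标图像做动作识别
    results = []
    for frame in frames:
        for box in detect_regions(frame):
            results.append(classify(crop(frame, box)))
    return results
```

实际实现中 `detect_regions` 可由训练好的目标检测神经网络充当,`classify` 则为动作分类网络;此处仅示意数据在各步骤之间的流转关系。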
图3a为本公开实施例提供的车内人员危险动作识别方法的另一可选示例中部分流程示意图。如图3a所示,上述实施例提供的方法中,步骤202包括:
步骤302,提取视频流的至少一帧视频图像中包括的车内人员的特征。
本公开实施例主要针对车内人员在车辆内部时所做的一些危险动作进行识别,而这些危险动作通常是与肢体、人脸有关的动作,这些动作的识别无法通过对人体关键点的检测或人体姿态的估计实现。本公开实施例通过对视频图像进行卷积操作提取出特征,并根据提取到的特征实现视频图像中动作的识别。例如,上述危险动作的特征为:肢体和/或人脸局部区域、动作交互物,因此,需通过摄像装置对车内人员进行实时拍摄,并获取包括有人脸的视频图像。再对视频图像进行卷积操作,提取出动作特征。
步骤304,基于特征从至少一帧视频图像中提取目标区域。
可选地,本实施例中的目标区域为可能包括动作的目标区域。
首先对上述危险动作的特征进行定义,神经网络再根据定义的特征和提取到的视频图像的特征,实现对视频图像中是否存在危险动作的判断。本实施例中的神经网络均是训练好的,即神经网络可将视频图像中预定动作的特征提取出来。
若上述提取的特征包括:肢体区域、人脸局部区域和动作交互物,神经网络会将同时包含肢体、人脸局部区域和动作交互物的特征区域划分出来,获得目标区域。其中,目标区域可以包括但不限于以下至少之一:人脸局部区域、动作交互物、肢体区域等。可选地,人脸局部区域,包括但不限于以下至少之一:嘴部区域、耳部区域、眼部区域等。可选地,动作交互物,包括但不限于以下至少之一:容器、烟、手机、食物、工具、饮料瓶、眼镜、口罩等。可选地,肢体区域包括但不限于以下至少之一:手部区域、脚部区域等。例如,上述危险动作包括但不限于:喝水/饮料、抽烟、打电话、戴眼镜、戴口罩、化妆、使用工具、进食、双脚放在方向盘上等。示例性地,喝水的动作特征可以包括:手部区域、人脸局部区域、水杯;抽烟的动作特征可以包括:手部区域、人脸局部区域、烟;打电话的动作特征可以包括:手部区域、人脸局部区域、手机,戴眼镜的动作特征可以包括:手部区域、人脸局部区域、眼镜;戴口罩的动作特征可以包括:手部区域、人脸局部区域、口罩;双脚放在方向盘上的动作特征可以包括:脚部区域、方向盘。
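上述动作与其组成特征之间的对应关系,可以用一个简单的映射表来示意(表中条目摘自上文示例,集合比较逻辑仅为说明用的假设,实际判断由神经网络完成):

```python
# 各预定动作所需的特征区域/交互物组合(摘自上文示例)
ACTION_FEATURES = {
    "喝水": {"手部区域", "人脸局部区域", "水杯"},
    "抽烟": {"手部区域", "人脸局部区域", "烟"},
    "打电话": {"手部区域", "人脸局部区域", "手机"},
    "双脚放在方向盘上": {"脚部区域", "方向盘"},
}

def candidate_actions(detected_features):
    # 返回所需特征全部出现在检测结果中的动作集合
    detected = set(detected_features)
    return {action for action, feats in ACTION_FEATURES.items()
            if feats <= detected}
```

例如,仅检测到"手部区域、人脸局部区域、烟"三类特征时,候选动作集合只剩"抽烟",这与上文按特征组合划分目标区域的思路一致。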
本公开实施例识别的动作还可以包括与人脸或肢体有关的精细动作,这类精细动作至少包括人脸的局部区域和动作交互物这两个特征,例如包括人脸的局部区域和动作交互物这两个特征,或者,包括人脸的局部区域、动作交互物以及肢体这三个特征中的两个特征,等等,因此,精细动作是指相似性较高的多个动作,例如,吸烟和打哈欠都是主要基于嘴部区域进行识别,都包括张嘴闭嘴的动作,其区别仅在于是否还包括香烟(动作交互物),因此,本公开实施例通过提取目标区域实现动作识别,实现了对精细动作的识别。例如,对于打电话动作,目标区域内包括:人脸的局部区域、手机(即动作交互物)以及手(即肢体区域)。又例如,对于抽烟动作,目标区域内也可能包括:嘴部区域、烟(即动作交互物)。
图3b为本公开实施例的车内人员危险动作识别方法中提取的目标区域的一个示意图。可以利用本公开实施例中的车内人员危险动作识别方法,对视频流中的视频图像进行目标区域提取,以获得对动作进行识别的目标区域,本公开实施例中车内人员的动作为吸烟动作,因此,获得的目标区域基于嘴部区域(人脸局部区域)和烟(动作交互物)确定;基于本公开实施例获得的目标区域可确认图3b中的车内人员在吸烟,本公开实施例中通过获得目标区域,基于目标区域进行动作识别,去除了整张图像中与车内人员动作(如,吸烟动作)无关的区域的噪声干扰,提高车内人员的动作识别的准确性,如,本实施例中对吸烟动作识别的准确性。
可选地,根据目标图像对车内人员进行动作识别之前,还可以对目标图像进行预处理。例如,通过归一化、均衡化等方法对目标图像进行预处理;经过预处理的目标图像获得的识别结果更准确。
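上文提到的均衡化预处理,可用如下朴素实现示意(仅针对取值在 0-255 的灰度二维列表,为说明用的简化写法;实际实现通常直接使用图像处理库):

```python
def equalize_hist(gray):
    # 朴素的直方图均衡化: 统计灰度直方图, 按累积分布将像素重新映射到 0-255
    flat = [p for row in gray for p in row]
    hist = [0] * 256
    for p in flat:
        hist[p] += 1
    lut, cum = [], 0
    total = len(flat)
    for h in hist:
        cum += h
        lut.append(round(255 * cum / total))
    return [[lut[p] for p in row] for row in gray]
```

均衡化可以拉开车内光照不均时目标图像的灰度对比度,使后续特征提取得到的识别结果更稳定。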
可选地,危险动作可以包括但不限于以下至少之一:分心动作、不适状态、不规范行为等。分心动作是指驾驶员在驾驶车辆的同时,还做出与驾驶无关并且会影响驾驶专注度的动作,例如:分心动作包括但不限于以下至少之一:打电话、喝水、戴摘墨镜、戴摘口罩、吃东西等;不适状态是指在车辆行驶过程中车内人员由于车内环境影响或自身原因导致的身体不适,例如:不适状态包括但不限于以下至少之一:擦汗、揉眼睛、打哈欠等;不规范行为是指车内人员做出的不符合规定的行为,例如,不规范行为包括但不限于以下至少之一:抽烟、将手伸出车外、趴在方向盘上、双脚放在方向盘上、双手离开方向盘、手持器械、干扰驾驶员等。由于危险动作包括多种,当车内人员的动作类别属于危险动作时,首先需要确定该动作类别属于哪种危险动作,不同的危险动作可以对应不同的处理方式(如,发出提示信息或执行操作控制车辆)。
在一个或多个可选的实施例中,步骤130包括:
响应于动作识别的结果属于预定的危险动作;
确定预定的危险动作的危险级别;
根据危险级别发出对应的提示信息,和/或执行危险级别对应的操作并根据操作控制所述车辆。
可选地,本公开实施例根据动作识别的结果确定车内人员的动作属于预定的危险动作时,对预定的危险动作进行危险级别判断,可选地,根据预先设定的规则和/或对应关系确定预定的危险动作的危险级别,再根据危险级别确定如何操作。例如,根据车内人员的危险动作级别进行不同程度的操作。例如,如果是因为驾驶员疲劳、身体不适引起的危险动作,需及时进行提示,从而让驾驶员及时进行调整和休息;当出现由于车内的环境令驾驶员感觉不适时,可以通过控制车内的通风系统或空调系统进行一定程度的调整。可选地,设置危险级别包括初级、中级和高级,此时,根据危险级别发出对应的提示信息,和/或执行危险级别对应的操作并根据操作控制车辆,包括:
响应于危险级别为初级,发出提示信息;
响应于危险级别为中级,执行危险级别对应的操作并根据操作控制车辆;
响应于危险级别为高级,发出提示信息的同时,执行危险级别对应的操作并根据操作控制所述车辆。
在本公开实施例中,将危险级别设置为3个级别,可选地,本公开实施例还可以将危险级别设置得更加详细,包括更多级别,例如,危险级别包括第一级别、第二级别、第三级别、第四级别,每个级别对应不同的危险程度。根据不同危险级别发出提示信息和/或执行危险级别对应的操作并根据操作控制车辆。通过为不同的危险级别执行不同的操作,可以使得提示信息的发送和操作的控制更加灵活,适应不同的使用需求。
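"初级仅提示、中级仅控车、高级提示并控车"的分级处理逻辑,可用如下示意代码表达(回调函数接口为假设,实际提示与控车操作由车载系统提供):

```python
def handle_danger(level, send_alert, control_vehicle):
    # 按危险级别分发处理: 初级仅发出提示信息; 中级仅执行控车操作;
    # 高级在发出提示信息的同时执行控车操作
    if level == "初级":
        send_alert()
    elif level == "中级":
        control_vehicle()
    elif level == "高级":
        send_alert()
        control_vehicle()
```

这种表驱动式的分发便于在级别划分更细(如四个级别)时扩展对应分支。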
可选地,确定预定的危险动作的危险级别,包括:
获取预定的危险动作在视频流中出现的频率和/或时长,基于频率和/或时长确定预定的危险动作的危险级别。
在本公开实施例中,通过对动作识别得到的危险动作进一步进行抽象分析,同时根据动作的持续程度,或出现危险情况的先验概率,判断乘车人是否确实在进行危险动作,可选地,本公开实施例通过预定的危险动作在视频流中出现的频率和/或时长来实现对动作持续程度的度量。例如,当驾驶员只是快速揉了一下眼睛时,可以认为只是一个迅速的调整,可以不用报警;但是如果驾驶员长时间进行揉眼,并伴随打哈欠等动作的发生,那么可以认为驾驶员较为疲劳,应进行提醒。又例如,对于抽烟的报警强度可以低于趴在方向盘上、打电话等动作。
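基于频率和/或时长确定危险级别的一种可能的打分方式如下(阈值与分级映射均为示例假设,并非本公开限定的取值):

```python
def danger_level(freq_per_minute, duration_seconds,
                 freq_thresholds=(2, 5), duration_thresholds=(3.0, 8.0)):
    # 频率或时长每越过一档阈值记 1 分, 总分越高说明动作越持续/越频繁,
    # 再按总分映射到初级/中级/高级三个危险级别
    score = (
        (freq_per_minute >= freq_thresholds[0])
        + (freq_per_minute >= freq_thresholds[1])
        + (duration_seconds >= duration_thresholds[0])
        + (duration_seconds >= duration_thresholds[1])
    )
    return ("初级", "初级", "中级", "高级", "高级")[score]
```

例如,偶尔短暂揉眼只触发初级提示,而长时间、反复出现的揉眼会被累计到更高级别,与上文"持续程度"的描述对应。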
在一种可能的实现方式中,动作包括动作持续时长,预警条件包括:识别到动作持续时长超过时长阈值。
在一种可能的实现方式中,动作可以包括动作持续时长,当动作持续时长超过时长阈值时,可以认为动作的执行分散了动作执行对象的较多注意力,可以认为是危险动作,需要发送预警信息。例如,驾驶员的抽烟动作的时长超过3秒,可以认为抽烟动作为危险动作,会影响到驾驶员的驾驶动作,需要向驾驶员发送预警信息。
在本实施例中,根据预定的危险动作持续时长和时长阈值,可以调整提示信息的发送条件和/或车辆的控制条件,使得提示信息的发送和操作的控制更加灵活,更适应不同的使用需求。
在一种可能的实现方式中,动作识别的结果包括动作持续时长,属于预定的危险动作的条件包括:识别到动作持续时长超过时长阈值。有些动作在短时间内不会对车内人员及车辆产生安全隐患,只有当该动作持续时间达到设定的时长阈值时,才将该动作确认为预定的危险动作,例如,驾驶员的闭眼的动作,当闭眼时长较短(例如,0.5秒),可认为是正常眨眼,而当闭眼时长超过时长阈值(可根据需要进行设定,例如,设置为3秒),可认为属于预定的危险动作,发出相应的提示信息。
在一种可能的实现方式中,动作识别的结果包括动作次数,属于预定的危险动作的条件包括:识别到动作次数超过次数阈值。当动作次数超过次数阈值时,可以认为动作执行对象的动作频繁,分散较多注意力,可以认为是危险动作,需要发送预警信息。例如,驾驶员的抽烟动作的次数超过5次,可以认为抽烟动作为危险动作,会影响到驾驶员的驾驶动作,需要向驾驶员发送提示信息。
在一种可能的实现方式中,动作识别的结果包括动作持续时长和动作次数,属于预定的危险动作的条件包括:识别到动作持续时长超过时长阈值,且动作次数超过次数阈值。
在一种可能的实现方式中,当动作的持续时长超过时长阈值,且动作次数超过次数阈值时,可以认为动作执行对象的动作频繁且动作持续时长长,分散较多注意力,可以认为是危险动作,需要发送提示信息和/或对车辆进行控制。使车辆控制更加灵活,适应不同的使用需求。
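"持续时长超过时长阈值且次数超过次数阈值"的判定条件,可直接写成如下布尔函数(阈值默认值取自上文示例中的 3 秒与 5 次,仅为示意):

```python
def is_predetermined_danger(duration_seconds, count,
                            duration_threshold=3.0, count_threshold=5):
    # 动作持续时长超过时长阈值, 且动作次数超过次数阈值时,
    # 判定该动作属于预定的危险动作
    return duration_seconds > duration_threshold and count > count_threshold
```

两个条件同时成立才触发提示和/或控车,单独一项超阈值(如仅一次较长的闭眼)不会误报,从而使车辆控制更加灵活。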
不同的车内人员对应的危险动作会有所不同,例如,对于在驾驶位的驾驶员,要求不能分心,分心动作属于危险动作;而其他位置的车内人员的分心动作不属于危险动作;因此,本公开实施例为了实现更准确的报警和智能控制,结合动作类别和车内人员的类别进行提示或智能操作,以实现在提高行驶安全性的同时,不会因为频繁报警而影响用户体验。可选地,步骤130包括:响应于车内人员是驾驶员,根据预定的危险动作发出对应的第一提示信息和/或控制车辆执行对应的第一预定操作;和/或,
响应于车内人员是非驾驶员,根据预定的危险动作发出对应的第二提示信息和/或执行对应的第二预定操作。
由于驾驶员担负了整车的安全,为了提高车辆的行驶安全和乘车人的自由度,可将车内人员分为驾驶员和非驾驶员两类,对驾驶员和非驾驶员分别设置不同的危险动作,以实现灵活报警和操作。可选地,驾驶员分心动作可以包括但不限于以下至少之一:打电话、喝水、戴摘墨镜、戴摘口罩、吃东西等;驾驶员不适状态可以包括但不限于以下至少之一:擦汗、揉眼睛、打哈欠等;驾驶员不规范行为可以包括但不限于以下至少之一:抽烟、将手伸出车外、趴在方向盘上、双脚放在方向盘上、双手离开方向盘等。
可选地,非驾驶员不适状态可以包括但不限于以下至少之一:擦汗等;非驾驶员不规范行为可以包括但不限于以下至少之一:抽烟、将手伸出车外、手持器械、干扰驾驶员等。
本公开实施例还为驾驶员和非驾驶员分别设置了不同的提示信息以及预定操作,以实现对车辆灵活的安全控制,例如,当驾驶员出现双手离开方向盘的动作时,可发出较强的提示信息(如,对应第一提示信息)的同时,执行自动驾驶(如,对应第一预定操作),以提高车辆行驶的安全性;而对于非驾驶员,例如,当非驾驶员出现擦汗的动作时,发出较弱的提示信息(如,对应第二提示信息),和/或执行调节车内空调温度的操作(如,对应第二预定操作)。
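按驾驶员/非驾驶员分别配置危险动作集合并给出不同强度响应的思路,可用如下示意代码表达(集合内容摘自上文示例,返回值字符串仅作标识用的假设):

```python
# 驾驶员与非驾驶员分别对应不同的危险动作集合(摘自上文示例)
DRIVER_DANGEROUS = {"打电话", "喝水", "抽烟", "双手离开方向盘", "擦汗"}
NON_DRIVER_DANGEROUS = {"抽烟", "将手伸出车外", "手持器械", "干扰驾驶员", "擦汗"}

def respond(is_driver, action):
    # 先按人员类别查对应的危险动作集合, 非危险动作返回 None;
    # 驾驶员触发第一提示信息/第一预定操作, 非驾驶员触发第二提示信息/第二预定操作
    if is_driver:
        return "第一提示信息/第一预定操作" if action in DRIVER_DANGEROUS else None
    return "第二提示信息/第二预定操作" if action in NON_DRIVER_DANGEROUS else None
```

同一个"打电话"动作在驾驶员处触发较强响应,而在非驾驶员处不触发响应,体现了按人员类别区分危险动作的设定。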
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
图4为本公开实施例提供的车内人员危险动作识别装置的一个结构示意图。该实施例的装置可用于实现本公开上述各方法实施例。如图4所示,该实施例的装置包括:
视频采集单元41,用于利用摄像装置获得车内人员的至少一个视频流。
其中,每个视频流中包括至少一个车内人员。
动作识别单元42,用于基于视频流对车内人员进行动作识别。
危险处理单元43,用于响应于动作识别的结果属于预定的危险动作,发出提示信息和/或执行操作以控制车辆。
其中,预定的危险动作包括车内人员的动作表现为以下至少之一:分心动作、不适状态、不规范行为等。
基于本公开上述实施例提供的一种车内人员危险动作识别装置,通过动作识别确定车内人员是否做出预定的危险动作,并对预定的危险动作做出对应的提示和/或操作以控制车辆,实现对车辆安全状况尽早发现,以降低发生危险情况的概率。
在一个或多个可选的实施例中,动作识别单元42,用于在视频流的至少一帧视频图像中检测车内人员包括的至少一个目标区域;根据检测获得的目标区域从视频流的至少一帧视频图像中截取与目标区域对应的目标图像;根据目标图像对车内人员进行动作识别。
在本实施例中,在视频流中识别目标区域,根据目标区域的检测结果在视频图像中截取与目标区域对应的目标图像,并根据目标图像识别车内人员是否执行预定的危险动作。根据目标区域的检测结果截取到的目标图像,可以适用于不同的视频图像中面积大小不同的人体。本公开实施例的适用范围广。
可选地,动作识别单元42在视频流的至少一帧视频图像中检测车内人员包括的至少一个目标区域时,用于提取视频流的至少一帧视频图像中包括的车内人员的特征;基于特征从所述至少一帧视频图像中提取目标区域,其中,目标区域包括但不限于以下至少之一:人脸局部区域、动作交互物、肢体区域等。
可选地,人脸局部区域,包括但不限于以下至少之一:嘴部区域、耳部区域、眼部区域等。
可选地,动作交互物,包括但不限于以下至少之一:容器、烟、手机、食物、工具、饮料瓶、眼镜、口罩等。
在一个或多个可选的实施例中,分心动作包括但不限于以下至少之一:打电话、喝水、戴摘墨镜、戴摘口罩、吃东西等;和/或,
不适状态包括但不限于以下至少之一:擦汗、揉眼睛、打哈欠等;和/或,
不规范行为包括但不限于以下至少之一:抽烟、将手伸出车外、趴在方向盘上、双脚放在方向盘上、双手离开方向盘、手持器械、干扰驾驶员等。
在一个或多个可选的实施例中,危险处理单元43,包括:
级别确定模块,用于响应于动作识别的结果属于预定的危险动作,确定预定的危险动作的危险级别;
操作处理模块,用于根据危险级别发出对应的提示信息,和/或执行危险级别对应的操作并根据操作控制车辆。
可选地,本公开实施例根据动作识别的结果确定车内人员的动作属于预定的危险动作时,对预定的危险动作进行危险级别判断,可选地,根据预先设定的规则和/或对应关系确定预定的危险动作的危险级别,再根据危险级别确定如何操作。例如,根据车内人员的危险动作级别进行不同程度的操作。例如,如果是因为驾驶员疲劳、身体不适引起的危险动作,需及时进行提示,从而让驾驶员及时进行调整和休息;当出现由于车内的环境令驾驶员感觉不适时,可以通过控制车内的通风系统或空调系统进行一定程度的调整。
可选地,危险级别包括初级、中级和高级;
操作处理模块,用于响应于危险级别为初级,发出提示信息;响应于危险级别为中级,执行危险级别对应的操作并根据操作控制车辆;响应于危险级别为高级,发出提示信息的同时,执行危险级别对应的操作并根据操作控制车辆。
可选地,级别确定模块,用于获取预定的危险动作在视频流中出现的频率和/或时长,基于频率和/或时长确定预定的危险动作的危险级别。
在一个或多个可选的实施例中,动作识别的结果包括动作持续时长,属于预定的危险动作的条件包括:识别到动作持续时长超过时长阈值。
在本公开实施例中,通过对动作识别得到的危险动作进一步进行抽象分析,同时根据动作的持续程度,或出现危险情况的先验概率,判断乘车人是否确实在进行危险动作,可选地,本公开实施例通过预定的危险动作在视频流中出现的频率和/或时长来实现对动作持续程度的度量。
可选地,动作识别的结果包括动作持续时长,属于预定的危险动作的条件包括:识别到动作持续时长超过时长阈值。
可选地,动作识别的结果包括动作次数,属于预定的危险动作的条件包括:识别到动作次数超过次数阈值。
可选地,动作识别的结果包括动作持续时长和动作次数,属于预定的危险动作的条件包括:识别到动作持续时长超过时长阈值,且动作次数超过次数阈值。
可选地,车内人员包括车辆的驾驶员和/或非驾驶员。
可选地,危险处理单元43,用于响应于车内人员是驾驶员,根据预定的危险动作发出对应的第一提示信息和/或控制车辆执行对应的第一预定操作;和/或,响应于车内人员是非驾驶员,根据预定的危险动作发出对应的第二提示信息和/或执行对应的第二预定操作。
本公开实施例提供的车内人员危险动作识别装置任一实施例的工作过程、设置方式及相应技术效果,均可以参照本公开上述相应方法实施例的具体描述,限于篇幅,在此不再赘述。
根据本公开实施例的又一个方面,提供一种电子设备,包括处理器,该处理器包括如上任意一实施例提供的车内人员危险动作识别装置。
根据本公开实施例的还一个方面,提供一种电子设备,包括:存储器,用于存储可执行指令;
以及处理器,用于与存储器通信以执行可执行指令从而完成如上任意一实施例提供的车内人员危险动作识别方法的操作。
根据本公开实施例的再一个方面,提供一种计算机可读存储介质,用于存储计算机可读取的指令,指令被执行时执行如上任意一实施例提供的车内人员危险动作识别方法的操作。
根据本公开实施例的又一个方面,提供一种计算机程序产品,包括计算机可读代码,当计算机可读代码在设备上运行时,设备中的处理器执行用于实现如上任意一实施例提供的车内人员危险动作识别方法的指令。
本公开实施例还提供了一种电子设备,例如可以是移动终端、个人计算机(PC)、平板电脑、服务器等。下面参考图5,其示出了适于用来实现本公开实施例的终端设备或服务器的电子设备500的结构示意图:如图5所示,电子设备500包括一个或多个处理器、通信部等,所述一个或多个处理器例如:一个或多个中央处理单元(CPU)501,和/或一个或多个图像处理器(加速单元)513等,处理器可以根据存储在只读存储器(ROM)502中的可执行指令或者从存储部分508加载到随机访问存储器(RAM)503中的可执行指令而执行各种适当的动作和处理。通信部512可包括但不限于网卡,所述网卡可包括但不限于IB(Infiniband)网卡。
处理器可与只读存储器502和/或随机访问存储器503中通信以执行可执行指令,通过总线504与通信部512相连、并经通信部512与其他目标设备通信,从而完成本公开实施例提供的任一项方法对应的操作,例如,利用摄像装置获得车内人员的至少一个视频流,每个视频流中包括至少一个车内人员;基于视频流对车内人员进行动作识别;响应于动作识别的结果属于预定的危险动作,发出提示信息和/或执行操作以控制车辆;预定的危险动作包括车内人员的动作表现为以下至少之一:分心动作、不适状态、不规范行为。
此外,在RAM 503中,还可存储有装置操作所需的各种程序和数据。CPU501、ROM502以及RAM503通过总线504彼此相连。在有RAM503的情况下,ROM502为可选模块。RAM503存储可执行指令,或在运行时向ROM502中写入可执行指令,可执行指令使中央处理单元501执行上述通信方法对应的操作。输入/输出(I/O)接口505也连接至总线504。通信部512可以集成设置,也可以设置为具有多个子模块(例如多个IB网卡),并在总线链接上。
以下部件连接至I/O接口505:包括键盘、鼠标等的输入部分506;包括诸如阴极射线管(CRT)、液晶显示器(LCD)等以及扬声器等的输出部分507;包括硬盘等的存储部分508;以及包括诸如LAN卡、调制解调器等的网络接口卡的通信部分509。通信部分509经由诸如因特网的网络执行通信处理。驱动器510也根据需要连接至I/O接口505。可拆卸介质511,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器510上,以便于从其上读出的计算机程序根据需要被安装入存储部分508。
需要说明的是,如图5所示的架构仅为一种可选实现方式,在具体实践过程中,可根据实际需要对上述图5的部件数量和类型进行选择、删减、增加或替换;在不同功能部件设置上,也可采用分离设置或集成设置等实现方式,例如加速单元513和CPU501可分离设置或者可将加速单元513集成在CPU501上,通信部可分离设置,也可集成设置在CPU501或加速单元513上,等等。这些可替换的实施方式均落入本公开的保护范围。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括有形地包含在机器可读介质上的计算机程序,计算机程序包含用于执行流程图所示的方法的程序代码,程序代码可包括对应执行本公开实施例提供的方法步骤对应的指令,例如,利用摄像装置获得车内人员的至少一个视频流,每个视频流中包括至少一个车内人员;基于视频流对车内人员进行动作识别;响应于动作识别的结果属于预定的危险动作,发出提示信息和/或执行操作以控制车辆;预定的危险动作包括车内人员的动作表现为以下至少之一:分心动作、不适状态、不规范行为。在这样的实施例中,该计算机程序可以通过通信部分509从网络上被下载和安装,和/或从可拆卸介质511被安装。在该计算机程序被中央处理单元(CPU)501执行时,执行本公开的方法中限定的上述功能的操作。
本说明书中各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其它实施例的不同之处,各个实施例之间相同或相似的部分相互参见即可。对于***实施例而言,由于其与方法实施例基本对应,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
可能以许多方式来实现本公开的方法和装置。例如,可通过软件、硬件、固件或者软件、硬件、固件的任何组合来实现本公开的方法和装置。用于所述方法的步骤的上述顺序仅是为了进行说明,本公开的方法的步骤不限于以上具体描述的顺序,除非以其它方式特别说明。此外,在一些实施例中,还可将本公开实施为记录在记录介质中的程序,这些程序包括用于实现根据本公开的方法的机器可读指令。因而,本公开还覆盖存储用于执行根据本公开的方法的程序的记录介质。
本公开的描述是为了示例和描述起见而给出的,而并非详尽无遗,也不将本公开限于所公开的形式。很多修改和变化对于本领域的普通技术人员而言是显然的。选择和描述实施例是为了更好地说明本公开的原理和实际应用,并且使本领域的普通技术人员能够理解本公开,从而设计适于特定用途的带有各种修改的各种实施例。

Claims (32)

  1. 一种车内人员危险动作识别方法,其特征在于,包括:
    利用摄像装置获得车内人员的至少一个视频流,每个所述视频流中包括至少一个车内人员;
    基于所述视频流对所述车内人员进行动作识别;
    响应于所述动作识别的结果属于预定的危险动作,发出提示信息和/或执行操作以控制车辆;所述预定的危险动作包括所述车内人员的动作表现为以下至少之一:分心动作、不适状态、不规范行为。
  2. 根据权利要求1所述的方法,其特征在于,所述基于所述视频流对所述车内人员进行动作识别,包括:
    在所述视频流的至少一帧视频图像中检测所述车内人员包括的至少一个目标区域;
    根据所述检测获得的目标区域从所述视频流的至少一帧视频图像中截取与所述目标区域对应的目标图像;
    根据所述目标图像对所述车内人员进行动作识别。
  3. 根据权利要求2所述的方法,其特征在于,所述在所述视频流的至少一帧视频图像中检测所述车内人员包括的至少一个目标区域,包括:
    提取所述视频流的至少一帧视频图像中包括的车内人员的特征;
    基于所述特征从所述至少一帧视频图像中提取目标区域,其中,所述目标区域包括以下至少之一:人脸局部区域、动作交互物、肢体区域。
  4. 根据权利要求3所述的方法,其特征在于,所述人脸局部区域,包括以下至少之一:嘴部区域、耳部区域、眼部区域。
  5. 根据权利要求3或4所述的方法,其特征在于,所述动作交互物,包括以下至少之一:容器、烟、手机、食物、工具、饮料瓶、眼镜、口罩。
  6. 根据权利要求1-5任一所述的方法,其特征在于,所述分心动作包括以下至少之一:打电话、喝水、戴摘墨镜、戴摘口罩、吃东西;和/或,
    所述不适状态包括以下至少之一:擦汗、揉眼睛、打哈欠;和/或,
    所述不规范行为包括以下至少之一:抽烟、将手伸出车外、趴在方向盘上、双脚放在方向盘上、双手离开方向盘、手持器械、干扰驾驶员。
  7. 根据权利要求1-6任一所述的方法,其特征在于,所述响应于所述动作识别的结果属于预定的危险动作,发出提示信息和/或执行操作以控制车辆,包括:
    响应于所述动作识别的结果属于预定的危险动作;
    确定所述预定的危险动作的危险级别;
    根据所述危险级别发出对应的提示信息,和/或执行所述危险级别对应的操作并根据所述操作控制所述车辆。
  8. 根据权利要求7所述的方法,其特征在于,所述危险级别包括初级、中级和高级;
    所述根据所述危险级别发出对应的提示信息,和/或执行所述危险级别对应的操作并根据所述操作控制所述车辆,包括:
    响应于所述危险级别为初级,发出提示信息;
    响应于所述危险级别为中级,执行所述危险级别对应的操作并根据所述操作控制所述车辆;
    响应于所述危险级别为高级,发出提示信息的同时,执行所述危险级别对应的操作并根据所述操作控制所述车辆。
  9. 根据权利要求7或8所述的方法,其特征在于,所述确定所述预定的危险动作的危险级别,包括:
    获取所述预定的危险动作在所述视频流中出现的频率和/或时长,基于所述频率和/或时长确定所述预定的危险动作的危险级别。
  10. 根据权利要求1-9任一所述的方法,其特征在于,所述动作识别的结果包括动作持续时长,所述属于预定的危险动作的条件包括:识别到动作持续时长超过时长阈值。
  11. 根据权利要求1-9任一所述的方法,其特征在于,所述动作识别的结果包括动作次数,所述属于预定的危险动作的条件包括:识别到动作次数超过次数阈值。
  12. 根据权利要求1-9任一所述的方法,其特征在于,所述动作识别的结果包括动作持续时长和动作次数,所述属于预定的危险动作的条件包括:识别到动作持续时长超过时长阈值,且动作次数超过次数阈值。
  13. 根据权利要求1-12任一所述的方法,其特征在于,所述车内人员包括所述车辆的驾驶员和/或非驾驶员。
  14. 根据权利要求13所述的方法,其特征在于,所述响应于所述动作识别的结果属于预定的危险动作,发出提示信息和/或执行操作以控制车辆,包括:
    响应于所述车内人员是驾驶员,根据所述预定的危险动作发出对应的第一提示信息和/或控制所述车辆执行对应的第一预定操作;和/或,
    响应于所述车内人员是非驾驶员,根据所述预定的危险动作发出对应的第二提示信息和/或执行对应的第二预定操作。
  15. 一种车内人员危险动作识别装置,其特征在于,包括:
    视频采集单元,用于利用摄像装置获得车内人员的至少一个视频流,每个所述视频流中包括至少一个车内人员;
    动作识别单元,用于基于所述视频流对所述车内人员进行动作识别;
    危险处理单元,用于响应于所述动作识别的结果属于预定的危险动作,发出提示信息和/或执行操作以控制车辆;所述预定的危险动作包括所述车内人员的动作表现为以下至少之一:分心动作、不适状态、不规范行为。
  16. 根据权利要求15所述的装置,其特征在于,所述动作识别单元,用于在所述视频流的至少一帧视频图像中检测所述车内人员包括的至少一个目标区域;根据所述检测获得的目标区域从所述视频流的至少一帧视频图像中截取与所述目标区域对应的目标图像;根据所述目标图像对所述车内人员进行动作识别。
  17. 根据权利要求16所述的装置,其特征在于,所述动作识别单元在所述视频流的至少一帧视频图像中检测所述车内人员包括的至少一个目标区域时,用于提取所述视频流的至少一帧视频图像中包括的车内人员的特征;基于所述特征从所述至少一帧视频图像中提取目标区域,其中,所述目标区域包括以下至少之一:人脸局部区域、动作交互物、肢体区域。
  18. 根据权利要求17所述的装置,其特征在于,所述人脸局部区域,包括以下至少之一:嘴部区域、耳部区域、眼部区域。
  19. 根据权利要求17或18所述的装置,其特征在于,所述动作交互物,包括以下至少之一:容器、烟、手机、食物、工具、饮料瓶、眼镜、口罩。
  20. 根据权利要求15-19任一所述的装置,其特征在于,所述分心动作包括以下至少之一:打电话、喝水、戴摘墨镜、戴摘口罩、吃东西;和/或,
    所述不适状态包括以下至少之一:擦汗、揉眼睛、打哈欠;和/或,
    所述不规范行为包括以下至少之一:抽烟、将手伸出车外、趴在方向盘上、双脚放在方向盘上、双手离开方向盘、手持器械、干扰驾驶员。
  21. 根据权利要求15-20任一所述的装置,其特征在于,所述危险处理单元,包括:
    级别确定模块,用于响应于所述动作识别的结果属于预定的危险动作,确定所述预定的危险动作的危险级别;
    操作处理模块,用于根据所述危险级别发出对应的提示信息,和/或执行所述危险级别对应的操作并根据所述操作控制所述车辆。
  22. 根据权利要求21所述的装置,其特征在于,所述危险级别包括初级、中级和高级;
    所述操作处理模块,用于响应于所述危险级别为初级,发出提示信息;响应于所述危险级别为中级,执行所述危险级别对应的操作并根据所述操作控制所述车辆;响应于所述危险级别为高级,发出提示信息的同时,执行所述危险级别对应的操作并根据所述操作控制所述车辆。
  23. 根据权利要求21或22所述的装置,其特征在于,所述级别确定模块,用于获取所述预定的危险动作在所述视频流中出现的频率和/或时长,基于所述频率和/或时长确定所述预定的危险动作的危险级别。
  24. 根据权利要求15-23任一所述的装置,其特征在于,所述动作识别的结果包括动作持续时长,所述属于预定的危险动作的条件包括:识别到动作持续时长超过时长阈值。
  25. 根据权利要求15-24任一所述的装置,其特征在于,所述动作识别的结果包括动作次数,所述属于预定的危险动作的条件包括:识别到动作次数超过次数阈值。
  26. 根据权利要求15-24任一所述的装置,其特征在于,所述动作识别的结果包括动作持续时长和动作次数,所述属于预定的危险动作的条件包括:识别到动作持续时长超过时长阈值,且动作次数超过次数阈值。
  27. 根据权利要求15-26任一所述的装置,其特征在于,所述车内人员包括所述车辆的驾驶员和/或非驾驶员。
  28. 根据权利要求27所述的装置,其特征在于,所述危险处理单元,用于响应于所述车内人员是驾驶员,根据所述预定的危险动作发出对应的第一提示信息和/或控制所述车辆执行对应的第一预定操作;和/或,响应于所述车内人员是非驾驶员,根据所述预定的危险动作发出对应的第二提示信息和/或执行对应的第二预定操作。
  29. 一种电子设备,其特征在于,包括处理器,所述处理器包括权利要求15至28任意一项所述的车内人员危险动作识别装置。
  30. 一种电子设备,其特征在于,包括:存储器,用于存储可执行指令;
    以及处理器,用于与所述存储器通信以执行所述可执行指令从而完成权利要求1至14任意一项所述车内人员危险动作识别方法的操作。
  31. 一种计算机可读存储介质,用于存储计算机可读取的指令,其特征在于,所述指令被执行时执行权利要求1至14任意一项所述车内人员危险动作识别方法的操作。
  32. 一种计算机程序产品,包括计算机可读代码,其特征在于,当所述计算机可读代码在设备上运行时,所述设备中的处理器执行用于实现权利要求1至14任意一项所述车内人员危险动作识别方法的指令。
PCT/CN2019/129370 2017-08-10 2019-12-27 车内人员危险动作识别方法和装置、电子设备、存储介质 WO2020173213A1 (zh)



Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022077282A (ja) * 2020-11-11 2022-05-23 株式会社コムテック 警報システム

Families Citing this family (101)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018033137A1 (zh) * 2016-08-19 2018-02-22 北京市商汤科技开发有限公司 在视频图像中展示业务对象的方法、装置和电子设备
WO2019028798A1 (zh) * 2017-08-10 2019-02-14 北京市商汤科技开发有限公司 驾驶状态监控方法、装置和电子设备
JP6888542B2 (ja) * 2017-12-22 2021-06-16 トヨタ自動車株式会社 眠気推定装置及び眠気推定方法
US10850746B2 (en) * 2018-07-24 2020-12-01 Harman International Industries, Incorporated Coordinating delivery of notifications to the driver of a vehicle to reduce distractions
JP6906717B2 (ja) * 2018-12-12 2021-07-21 三菱電機株式会社 状態判定装置、状態判定方法、及び状態判定プログラム
US11170240B2 (en) * 2019-01-04 2021-11-09 Cerence Operating Company Interaction system and method
US10657396B1 (en) * 2019-01-30 2020-05-19 StradVision, Inc. Method and device for estimating passenger statuses in 2 dimension image shot by using 2 dimension camera with fisheye lens
CN111626087A (zh) * 2019-02-28 2020-09-04 北京市商汤科技开发有限公司 神经网络训练及眼睛睁闭状态检测方法、装置及设备
CN111661059B (zh) * 2019-03-08 2022-07-08 虹软科技股份有限公司 分心驾驶监测方法、***及电子设备
CN110001652B (zh) * 2019-03-26 2020-06-23 深圳市科思创动科技有限公司 驾驶员状态的监测方法、装置及终端设备
KR102610759B1 (ko) * 2019-04-03 2023-12-08 현대자동차주식회사 졸음 운전 관리 장치, 그를 포함한 시스템 및 그 방법
CN111845749A (zh) * 2019-04-28 2020-10-30 郑州宇通客车股份有限公司 一种自动驾驶车辆的控制方法及***
CN109977930B (zh) * 2019-04-29 2021-04-02 中国电子信息产业集团有限公司第六研究所 疲劳驾驶检测方法及装置
GB2583742B (en) * 2019-05-08 2023-10-25 Jaguar Land Rover Ltd Activity identification method and apparatus
CN110263641A (zh) * 2019-05-17 2019-09-20 成都旷视金智科技有限公司 疲劳检测方法、装置及可读存储介质
US11281920B1 (en) * 2019-05-23 2022-03-22 State Farm Mutual Automobile Insurance Company Apparatuses, systems and methods for generating a vehicle driver signature
CN110188655A (zh) * 2019-05-27 2019-08-30 上海蔚来汽车有限公司 驾驶状态评价方法、***及计算机存储介质
WO2020237664A1 (zh) * 2019-05-31 2020-12-03 驭势(上海)汽车科技有限公司 驾驶提醒方法、驾驶状态检测方法和计算设备
CN112241645A (zh) * 2019-07-16 2021-01-19 广州汽车集团股份有限公司 一种疲劳驾驶检测方法及其***、电子设备
JP7047821B2 (ja) * 2019-07-18 2022-04-05 トヨタ自動車株式会社 運転支援装置
US10991130B2 (en) * 2019-07-29 2021-04-27 Verizon Patent And Licensing Inc. Systems and methods for implementing a sensor based real time tracking system
FR3100640B1 (fr) * 2019-09-10 2021-08-06 Faurecia Interieur Ind Procédé et dispositif de détection de bâillements d’un conducteur d’un véhicule
CN112758098B (zh) * 2019-11-01 2022-07-22 广州汽车集团股份有限公司 基于驾驶员状态等级的车辆驾驶权限接管控制方法及装置
CN110942591B (zh) * 2019-11-12 2022-06-24 博泰车联网科技(上海)股份有限公司 驾驶安全提醒***以及方法
CN110826521A (zh) * 2019-11-15 2020-02-21 爱驰汽车有限公司 驾驶员疲劳状态识别方法、***、电子设备和存储介质
CN110837815A (zh) * 2019-11-15 2020-02-25 济宁学院 一种基于卷积神经网络的驾驶员状态监测方法
CN110968718B (zh) * 2019-11-19 2023-07-14 北京百度网讯科技有限公司 目标检测模型负样本挖掘方法、装置及电子设备
CN110909715B (zh) * 2019-12-06 2023-08-04 重庆商勤科技有限公司 基于视频图像识别吸烟的方法、装置、服务器及存储介质
CN111160126B (zh) * 2019-12-11 2023-12-19 深圳市锐明技术股份有限公司 驾驶状态确定方法、装置、车辆及存储介质
JP2021096530A (ja) * 2019-12-13 2021-06-24 トヨタ自動車株式会社 運転支援装置、運転支援プログラムおよび運転支援システム
CN111191573A (zh) * 2019-12-27 2020-05-22 中国电子科技集团公司第十五研究所 一种基于眨眼规律识别的驾驶员疲劳检测方法
CN111160237A (zh) * 2019-12-27 2020-05-15 智车优行科技(北京)有限公司 头部姿态估计方法和装置、电子设备和存储介质
CN113128295A (zh) * 2019-12-31 2021-07-16 湖北亿咖通科技有限公司 一种车辆驾驶员危险驾驶状态识别方法及装置
CN113126296B (zh) * 2020-01-15 2023-04-07 未来(北京)黑科技有限公司 一种提高光利用率的抬头显示设备
CN111243236A (zh) * 2020-01-17 2020-06-05 南京邮电大学 一种基于深度学习的疲劳驾驶预警方法及***
US11873000B2 (en) 2020-02-18 2024-01-16 Toyota Motor North America, Inc. Gesture detection for transport control
CN115188183A (zh) * 2020-02-25 2022-10-14 华为技术有限公司 特殊路况的识别方法、装置、电子设备和存储介质
JP7402084B2 (ja) * 2020-03-05 2023-12-20 本田技研工業株式会社 乗員行動判定装置
CN111783515A (zh) * 2020-03-18 2020-10-16 北京沃东天骏信息技术有限公司 行为动作识别的方法和装置
US11912307B2 (en) * 2020-03-18 2024-02-27 Waymo Llc Monitoring head movements of drivers tasked with monitoring a vehicle operating in an autonomous driving mode
CN111460950B (zh) * 2020-03-25 2023-04-18 西安工业大学 自然驾驶通话行为中基于头-眼证据融合的认知分心方法
JP7380380B2 (ja) * 2020-03-26 2023-11-15 いすゞ自動車株式会社 運転支援装置
CN111626101A (zh) * 2020-04-13 2020-09-04 惠州市德赛西威汽车电子股份有限公司 一种基于adas的吸烟监测方法及***
WO2021240668A1 (ja) * 2020-05-27 2021-12-02 三菱電機株式会社 ジェスチャ検出装置およびジェスチャ検出方法
JP7289406B2 (ja) * 2020-05-27 2023-06-09 三菱電機株式会社 ジェスチャ検出装置およびジェスチャ検出方法
CN111611970B (zh) * 2020-06-01 2023-08-22 城云科技(中国)有限公司 一种基于城管监控视频的乱扔垃圾行为检测方法
CN111652128B (zh) * 2020-06-02 2023-09-01 浙江大华技术股份有限公司 一种高空电力作业安全监测方法、***和存储装置
CN111767823A (zh) * 2020-06-23 2020-10-13 京东数字科技控股有限公司 一种睡岗检测方法、装置、***及存储介质
JP7359087B2 (ja) * 2020-07-02 2023-10-11 トヨタ自動車株式会社 ドライバモニタ装置及びドライバモニタ方法
CN111785008A (zh) * 2020-07-04 2020-10-16 苏州信泰中运物流有限公司 一种基于gps和北斗定位的物流监控管理方法、装置及计算机可读存储介质
CN113920576A (zh) * 2020-07-07 2022-01-11 奥迪股份公司 车上人员的丢物行为识别方法、装置、设备及存储介质
US20220414796A1 (en) * 2020-07-08 2022-12-29 Pilot Travel Centers, LLC Computer implemented oil field logistics
CN111797784B (zh) * 2020-07-09 2024-03-05 斑马网络技术有限公司 驾驶行为监测方法、装置、电子设备及存储介质
US11776319B2 (en) * 2020-07-14 2023-10-03 Fotonation Limited Methods and systems to predict activity in a sequence of images
CN111860280A (zh) * 2020-07-15 2020-10-30 南通大学 一种基于深度学习的驾驶员违章行为识别***
CN111860292B (zh) * 2020-07-16 2024-06-07 科大讯飞股份有限公司 基于单目相机的人眼定位方法、装置以及设备
CN111832526B (zh) * 2020-07-23 2024-06-11 浙江蓝卓工业互联网信息技术有限公司 一种行为检测方法及装置
CN112061065B (zh) * 2020-07-27 2022-05-10 大众问问(北京)信息科技有限公司 一种车内行为识别报警方法、设备、电子设备及存储介质
CN111931653B (zh) * 2020-08-11 2024-06-11 沈阳帝信人工智能产业研究院有限公司 安全监测方法、装置、电子设备和可读存储介质
US11651599B2 (en) * 2020-08-17 2023-05-16 Verizon Patent And Licensing Inc. Systems and methods for identifying distracted driver behavior from video
CN112069931A (zh) * 2020-08-20 2020-12-11 深圳数联天下智能科技有限公司 一种状态报告的生成方法及状态监控***
CN112016457A (zh) * 2020-08-27 2020-12-01 青岛慕容信息科技有限公司 驾驶员分神以及危险驾驶行为识别方法、设备和存储介质
CN114201985A (zh) * 2020-08-31 2022-03-18 魔门塔(苏州)科技有限公司 一种人体关键点的检测方法及装置
CN112084919A (zh) * 2020-08-31 2020-12-15 广州小鹏汽车科技有限公司 目标物检测方法、装置、车辆及存储介质
CN112163470A (zh) * 2020-09-11 2021-01-01 高新兴科技集团股份有限公司 基于深度学习的疲劳状态识别方法、***、存储介质
CN112307920B (zh) * 2020-10-22 2024-03-22 东云睿连(武汉)计算技术有限公司 一种高风险工种作业人员行为预警装置及方法
CN112149641A (zh) * 2020-10-23 2020-12-29 北京百度网讯科技有限公司 监控驾驶状态的方法、装置、设备和存储介质
CN112347891B (zh) * 2020-10-30 2022-02-22 南京佑驾科技有限公司 基于视觉的舱内喝水状态检测方法
CN112270283A (zh) * 2020-11-04 2021-01-26 北京百度网讯科技有限公司 异常驾驶行为确定方法、装置、设备、车辆和介质
CN112356839A (zh) * 2020-11-06 2021-02-12 广州小鹏自动驾驶科技有限公司 一种驾驶状态监测方法、***及汽车
KR102443980B1 (ko) * 2020-11-17 2022-09-21 주식회사 아르비존 차량 제어 방법
TWI739675B (zh) * 2020-11-25 2021-09-11 友達光電股份有限公司 影像辨識方法及裝置
CN112455452A (zh) * 2020-11-30 2021-03-09 恒大新能源汽车投资控股集团有限公司 驾驶状态的检测方法、装置及设备
CN112528792B (zh) * 2020-12-03 2024-05-31 深圳地平线机器人科技有限公司 疲劳状态检测方法、装置、介质及电子设备
CN112766050B (zh) * 2020-12-29 2024-04-16 富泰华工业(深圳)有限公司 着装及作业检查方法、计算机装置及存储介质
CN112660141A (zh) * 2020-12-29 2021-04-16 长安大学 一种通过驾驶行为数据的驾驶员驾驶分心行为识别方法
CN112754498B (zh) * 2021-01-11 2023-05-26 一汽解放汽车有限公司 驾驶员的疲劳检测方法、装置、设备及存储介质
CN112668548A (zh) * 2021-01-15 2021-04-16 重庆大学 一种驾驶员发呆检测方法及***
JP2022130086A (ja) * 2021-02-25 2022-09-06 トヨタ自動車株式会社 タクシー車両およびタクシーシステム
CN114005104A (zh) * 2021-03-23 2022-02-01 深圳市创乐慧科技有限公司 一种基于人工智能的智能驾驶方法、装置及相关产品
CN113313019A (zh) * 2021-05-27 2021-08-27 展讯通信(天津)有限公司 一种分神驾驶检测方法、***及相关设备
CN113139531A (zh) * 2021-06-21 2021-07-20 博泰车联网(南京)有限公司 困倦状态检测方法及装置、电子设备、可读存储介质
CN113486759B (zh) * 2021-06-30 2023-04-28 上海商汤临港智能科技有限公司 危险动作的识别方法及装置、电子设备和存储介质
CN113537135A (zh) * 2021-07-30 2021-10-22 三一重机有限公司 一种驾驶监测方法、装置、***及可读存储介质
CN113734173B (zh) * 2021-09-09 2023-06-20 东风汽车集团股份有限公司 车辆智能监控方法、设备及存储介质
KR102542683B1 (ko) * 2021-09-16 2023-06-14 국민대학교산학협력단 손 추적 기반 행위 분류 방법 및 장치
FR3127355B1 (fr) * 2021-09-20 2023-09-29 Renault Sas procédé de sélection d’un mode de fonctionnement d’un dispositif de capture d’images pour reconnaissance faciale
KR102634012B1 (ko) * 2021-10-12 2024-02-07 경북대학교 산학협력단 딥러닝 기반 객체 분류를 이용한 운전자 행동 검출 장치
CN114162130B (zh) * 2021-10-26 2023-06-20 东风柳州汽车有限公司 驾驶辅助模式切换方法、装置、设备及存储介质
CN114187581B (zh) * 2021-12-14 2024-04-09 安徽大学 一种基于无监督学习的驾驶员分心细粒度检测方法
CN114005105B (zh) * 2021-12-30 2022-04-12 青岛以萨数据技术有限公司 驾驶行为检测方法、装置以及电子设备
US11999233B2 (en) * 2022-01-18 2024-06-04 Toyota Jidosha Kabushiki Kaisha Driver monitoring device, storage medium storing computer program for driver monitoring, and driver monitoring method
CN114582090A (zh) * 2022-02-27 2022-06-03 武汉铁路职业技术学院 一种轨道车辆驾驶监测预警***
CN114666378A (zh) * 2022-03-03 2022-06-24 武汉科技大学 一种重型柴油车车载远程监控***
KR20230145614A (ko) 2022-04-07 2023-10-18 한국기술교육대학교 산학협력단 운전자 안전 모니터링 시스템 및 방법
CN115035502A (zh) * 2022-07-08 2022-09-09 北京百度网讯科技有限公司 驾驶员的行为监测方法、装置、电子设备及存储介质
CN114898341B (zh) * 2022-07-14 2022-12-20 苏州魔视智能科技有限公司 疲劳驾驶预警方法、装置、电子设备及存储介质
CN115601709B (zh) * 2022-11-07 2023-10-27 北京万理软件开发有限公司 煤矿员工违规统计***、方法、装置以及存储介质
CN116311181B (zh) * 2023-03-21 2023-09-12 重庆利龙中宝智能技术有限公司 一种异常驾驶的快速检测方法及***
CN116052136B (zh) * 2023-03-27 2023-09-05 中国科学技术大学 分心检测方法、车载控制器和计算机存储介质
CN116645732B (zh) * 2023-07-19 2023-10-10 厦门工学院 一种基于计算机视觉的场地危险活动预警方法及***

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106585624A (zh) * 2016-12-07 2017-04-26 深圳市元征科技股份有限公司 驾驶员状态监控方法及装置
CN106709420A (zh) * 2016-11-21 2017-05-24 厦门瑞为信息技术有限公司 一种监测营运车辆驾驶人员驾驶行为的方法
US20180029612A1 (en) * 2016-08-01 2018-02-01 Fujitsu Ten Limited Safe driving behavior notification system and safe driving behavior notification method
CN107933471A (zh) * 2017-12-04 2018-04-20 惠州市德赛西威汽车电子股份有限公司 事故主动呼叫救援的方法及车载自动求救***
CN110399767A (zh) * 2017-08-10 2019-11-01 北京市商汤科技开发有限公司 车内人员危险动作识别方法和装置、电子设备、存储介质

Family Cites Families (108)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2546415B2 (ja) * 1990-07-09 1996-10-23 トヨタ自動車株式会社 車両運転者監視装置
US7738678B2 (en) * 1995-06-07 2010-06-15 Automotive Technologies International, Inc. Light modulation techniques for imaging objects in or around a vehicle
JP3843513B2 (ja) * 1996-12-24 2006-11-08 トヨタ自動車株式会社 車両用警報装置
JPH11161798A (ja) * 1997-12-01 1999-06-18 Toyota Motor Corp 車両運転者監視装置
JP3383594B2 (ja) * 1998-09-29 2003-03-04 沖電気工業株式会社 眼の開度測定装置
JP3495934B2 (ja) * 1999-01-08 2004-02-09 矢崎総業株式会社 事故防止システム
US20120231773A1 (en) * 1999-08-27 2012-09-13 Lipovski Gerald John Jack Cuboid-based systems and methods for safe mobile texting.
US20030163233A1 (en) * 2000-05-04 2003-08-28 Jin-Ho Song Automatic vehicle management apparatus and method using wire and wireless communication network
JP2003131785A (ja) * 2001-10-22 2003-05-09 Toshiba Corp インタフェース装置および操作制御方法およびプログラム製品
US6926429B2 (en) * 2002-01-30 2005-08-09 Delphi Technologies, Inc. Eye tracking/HUD system
EP2314207A1 (en) * 2002-02-19 2011-04-27 Volvo Technology Corporation Method for monitoring and managing driver attention loads
US6873714B2 (en) * 2002-02-19 2005-03-29 Delphi Technologies, Inc. Auto calibration and personalization of eye tracking system using larger field of view imager with higher resolution
JP2004017939A (ja) * 2002-06-20 2004-01-22 Denso Corp 車両用情報報知装置及びプログラム
JP3951231B2 (ja) * 2002-12-03 2007-08-01 オムロン株式会社 安全走行情報仲介システムおよびそれに用いる安全走行情報仲介装置と安全走行情報の確認方法
US7639148B2 (en) * 2003-06-06 2009-12-29 Volvo Technology Corporation Method and arrangement for controlling vehicular subsystems based on interpreted driver activity
KR100494848B1 (ko) 2004-04-16 2005-06-13 에이치케이이카 주식회사 차량 탑승자가 차량 내부에서 수면을 취하는지 여부를감지하는 방법 및 장치
DE102005018697A1 (de) * 2004-06-02 2005-12-29 Daimlerchrysler Ag Verfahren und Vorrichtung zur Warnung eines Fahrers im Falle eines Verlassens der Fahrspur
JP4564320B2 (ja) 2004-09-29 2010-10-20 アイシン精機株式会社 ドライバモニタシステム
CN1680779A (zh) * 2005-02-04 2005-10-12 江苏大学 驾驶员疲劳监测方法及装置
US7253739B2 (en) * 2005-03-10 2007-08-07 Delphi Technologies, Inc. System and method for determining eye closure state
CA2611408A1 (en) * 2005-06-09 2006-12-14 Drive Diagnostics Ltd. System and method for displaying a driving profile
US20070041552A1 (en) * 2005-06-13 2007-02-22 Moscato Jonathan D Driver-attentive notification system
JP2007237919A (ja) * 2006-03-08 2007-09-20 Toyota Motor Corp 車両用入力操作装置
CN101489467B (zh) * 2006-07-14 2011-05-04 松下电器产业株式会社 视线方向检测装置和视线方向检测方法
US20130150004A1 (en) * 2006-08-11 2013-06-13 Michael Rosen Method and apparatus for reducing mobile phone usage while driving
CN100462047C (zh) * 2007-03-21 2009-02-18 汤一平 基于全方位计算机视觉的安全驾驶辅助装置
CN101030316B (zh) * 2007-04-17 2010-04-21 北京中星微电子有限公司 一种汽车安全驾驶监控***和方法
JP2008302741A (ja) * 2007-06-05 2008-12-18 Toyota Motor Corp 運転支援装置
US20130275899A1 (en) * 2010-01-18 2013-10-17 Apple Inc. Application Gateway for Providing Different User Interfaces for Limited Distraction and Non-Limited Distraction Contexts
JP5208711B2 (ja) 2008-12-17 2013-06-12 アイシン精機株式会社 眼開閉判別装置及びプログラム
CN101540090B (zh) * 2009-04-14 2011-06-15 华南理工大学 基于多元信息融合的驾驶员疲劳监测方法
US10019634B2 (en) * 2010-06-04 2018-07-10 Masoud Vaziri Method and apparatus for an eye tracking wearable computer
US9460601B2 (en) * 2009-09-20 2016-10-04 Tibet MIMAR Driver distraction and drowsiness warning and sleepiness reduction for accident avoidance
CN101877051A (zh) * 2009-10-30 2010-11-03 江苏大学 驾驶人注意力状态监测方法和装置
CN101692980B (zh) * 2009-10-30 2011-06-08 深圳市汉华安道科技有限责任公司 疲劳驾驶检测方法及装置
US20110224875A1 (en) * 2010-03-10 2011-09-15 Cuddihy Mark A Biometric Application of a Polymer-based Pressure Sensor
US10592757B2 (en) * 2010-06-07 2020-03-17 Affectiva, Inc. Vehicular cognitive data collection using multiple devices
US10074024B2 (en) * 2010-06-07 2018-09-11 Affectiva, Inc. Mental state analysis using blink rate for vehicles
CN101950355B (zh) * 2010-09-08 2012-09-05 中国人民解放军国防科学技术大学 基于数字视频的驾驶员疲劳状态检测方法
JP5755012B2 (ja) 2011-04-21 2015-07-29 キヤノン株式会社 情報処理装置、その処理方法、プログラム及び撮像装置
US11270699B2 (en) * 2011-04-22 2022-03-08 Emerging Automotive, Llc Methods and vehicles for capturing emotion of a human driver and customizing vehicle response
CN102985302A (zh) * 2011-07-11 2013-03-20 丰田自动车株式会社 车辆的紧急避险装置
US8744642B2 (en) * 2011-09-16 2014-06-03 Lytx, Inc. Driver identification based on face data
CN102436715B (zh) * 2011-11-25 2013-12-11 大连海创高科信息技术有限公司 疲劳驾驶检测方法
KR20140025812A (ko) * 2012-08-22 2014-03-05 삼성전기주식회사 졸음 운전 감지 장치 및 방법
JP2014048760A (ja) * 2012-08-29 2014-03-17 Denso Corp 車両の運転者に情報を提示する情報提示システム、情報提示装置、および情報センター
JP6036065B2 (ja) * 2012-09-14 2016-11-30 富士通株式会社 注視位置検出装置及び注視位置検出方法
US9405982B2 (en) * 2013-01-18 2016-08-02 GM Global Technology Operations LLC Driver gaze detection system
US20140272811A1 (en) * 2013-03-13 2014-09-18 Mighty Carma, Inc. System and method for providing driving and vehicle related assistance to a driver
US10210761B2 (en) * 2013-09-30 2019-02-19 Sackett Solutions & Innovations, LLC Driving assistance systems and methods
JP5939226B2 (ja) * 2013-10-16 2016-06-22 トヨタ自動車株式会社 運転支援装置
KR101537936B1 (ko) * 2013-11-08 2015-07-21 현대자동차주식회사 차량 및 그 제어방법
US10417486B2 (en) * 2013-12-30 2019-09-17 Alcatel Lucent Driver behavior monitoring systems and methods for driver behavior monitoring
JP6150258B2 (ja) * 2014-01-15 2017-06-21 みこらった株式会社 自動運転車
JP6213282B2 (ja) 2014-02-12 2017-10-18 株式会社デンソー 運転支援装置
US20150310758A1 (en) * 2014-04-26 2015-10-29 The Travelers Indemnity Company Systems, methods, and apparatus for generating customized virtual reality experiences
US20160001785A1 (en) * 2014-07-07 2016-01-07 Chin-Jung Hsu Motion sensing system and method
US9714037B2 (en) * 2014-08-18 2017-07-25 Trimble Navigation Limited Detection of driver behaviors using in-vehicle systems and methods
US9796391B2 (en) * 2014-10-13 2017-10-24 Verizon Patent And Licensing Inc. Distracted driver prevention systems and methods
TW201615457A (zh) * 2014-10-30 2016-05-01 鴻海精密工業股份有限公司 Vehicle safety identification and response system and method
CN104408879B (zh) * 2014-11-19 2017-02-01 湖南工学院 Fatigue driving early-warning processing method, device, and system
US10614726B2 (en) * 2014-12-08 2020-04-07 Life Long Driver, Llc Behaviorally-based crash avoidance system
CN104574817A (zh) * 2014-12-25 2015-04-29 清华大学苏州汽车研究院(吴江) Machine vision-based fatigue driving early-warning system suitable for smartphones
JP2016124364A (ja) * 2014-12-26 2016-07-11 本田技研工業株式会社 Awakening device
US10705521B2 (en) * 2014-12-30 2020-07-07 Visteon Global Technologies, Inc. Autonomous driving interface
DE102015200697A1 (de) * 2015-01-19 2016-07-21 Robert Bosch Gmbh Method and device for detecting microsleep in the driver of a vehicle
CN104688251A (zh) * 2015-03-02 2015-06-10 西安邦威电子科技有限公司 Method for detecting fatigue driving and abnormal-posture driving under multiple postures
FR3033303B1 (fr) * 2015-03-03 2017-02-24 Renault Sas Device and method for predicting the vigilance level of a motor vehicle driver
WO2016169582A1 (en) * 2015-04-20 2016-10-27 Bayerische Motoren Werke Aktiengesellschaft Apparatus and method for controlling a user situation awareness modification of a user of a vehicle, and a user situation awareness modification processing system
CN105139583A (zh) * 2015-06-23 2015-12-09 南京理工大学 Vehicle danger reminding method based on a portable smart device
CN106327801B (zh) * 2015-07-07 2019-07-26 北京易车互联信息技术有限公司 Fatigue driving detection method and device
CN204915314U (zh) * 2015-07-21 2015-12-30 戴井之 Automobile safe-driving device
CN105096528B (zh) * 2015-08-05 2017-07-11 广州云从信息科技有限公司 Fatigue driving detection method and system
WO2017040519A1 (en) * 2015-08-31 2017-03-09 Sri International Method and system for monitoring driving behaviors
CN105261153A (zh) * 2015-11-03 2016-01-20 北京奇虎科技有限公司 Vehicle driving monitoring method and device
CN105354985B (zh) * 2015-11-04 2018-01-12 中国科学院上海高等研究院 Fatigue driving monitoring device and method
JP6641916B2 (ja) * 2015-11-20 2020-02-05 オムロン株式会社 Automatic driving support device, automatic driving support system, automatic driving support method, and automatic driving support program
CN105574487A (zh) * 2015-11-26 2016-05-11 中国第一汽车股份有限公司 Driver attention state detection method based on facial features
CN105654753A (zh) * 2016-01-08 2016-06-08 北京乐驾科技有限公司 Intelligent in-vehicle safe-driving assistance method and system
CN105769120B (zh) * 2016-01-27 2019-01-22 深圳地平线机器人科技有限公司 Fatigue driving detection method and device
FR3048544B1 (fr) * 2016-03-01 2021-04-02 Valeo Comfort & Driving Assistance Device and method for monitoring a motor vehicle driver
US10108260B2 (en) * 2016-04-01 2018-10-23 Lg Electronics Inc. Vehicle control apparatus and method thereof
WO2017208529A1 (ja) * 2016-06-02 2017-12-07 オムロン株式会社 Driver state estimation device, driver state estimation system, driver state estimation method, driver state estimation program, subject state estimation device, subject state estimation method, subject state estimation program, and recording medium
US20180012090A1 (en) * 2016-07-07 2018-01-11 Jungo Connectivity Ltd. Visual learning system and method for determining a driver's state
CN106218405A (zh) * 2016-08-12 2016-12-14 深圳市元征科技股份有限公司 Fatigue driving monitoring method and cloud server
CN106446811A (zh) * 2016-09-12 2017-02-22 北京智芯原动科技有限公司 Deep learning-based driver fatigue detection method and device
JP6940612B2 (ja) * 2016-09-14 2021-09-29 Nauto, Inc. Near-crash determination system and method
CN106355838A (zh) * 2016-10-28 2017-01-25 深圳市美通视讯科技有限公司 Fatigue driving detection method and system
EP3535646A4 (en) * 2016-11-07 2020-08-12 Nauto, Inc. SYSTEM AND METHOD FOR DETERMINING DRIVER DISTRACTION
US10467488B2 (en) * 2016-11-21 2019-11-05 TeleLingo Method to analyze attention margin and to prevent inattentive and unsafe driving
CN106585629B (zh) * 2016-12-06 2019-07-12 广东泓睿科技有限公司 Vehicle control method and device
CN106781282A (zh) * 2016-12-29 2017-05-31 天津中科智能识别产业技术研究院有限公司 Intelligent driving driver fatigue early-warning system
CN106909879A (zh) * 2017-01-11 2017-06-30 开易(北京)科技有限公司 Fatigue driving detection method and system
CN106985750A (zh) * 2017-01-17 2017-07-28 戴姆勒股份公司 In-vehicle safety monitoring system for a vehicle, and automobile
FR3063557B1 (fr) * 2017-03-03 2022-01-14 Valeo Comfort & Driving Assistance Device for determining the attention state of a vehicle driver, on-board system comprising such a device, and associated method
WO2018167991A1 (ja) * 2017-03-14 2018-09-20 オムロン株式会社 Driver monitoring device, driver monitoring method, learning device, and learning method
US10922566B2 (en) * 2017-05-09 2021-02-16 Affectiva, Inc. Cognitive state evaluation for vehicle navigation
US10289938B1 (en) * 2017-05-16 2019-05-14 State Farm Mutual Automobile Insurance Company Systems and methods regarding image distification and prediction models
US10402687B2 (en) * 2017-07-05 2019-09-03 Perceptive Automata, Inc. System and method of predicting human interaction with vehicles
US10592785B2 (en) * 2017-07-12 2020-03-17 Futurewei Technologies, Inc. Integrated system for detection of driver condition
JP6666892B2 (ja) * 2017-11-16 2020-03-18 株式会社Subaru Driving support device and driving support method
CN108407813A (zh) * 2018-01-25 2018-08-17 惠州市德赛西威汽车电子股份有限公司 Big data-based anti-fatigue safe driving method for vehicles
US10322728B1 (en) * 2018-02-22 2019-06-18 Futurewei Technologies, Inc. Method for distress and road rage detection
US10776644B1 (en) * 2018-03-07 2020-09-15 State Farm Mutual Automobile Insurance Company Image analysis technologies for assessing safety of vehicle operation
US10915769B2 (en) * 2018-06-04 2021-02-09 Shanghai Sensetime Intelligent Technology Co., Ltd Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium
US10970571B2 (en) * 2018-06-04 2021-04-06 Shanghai Sensetime Intelligent Technology Co., Ltd. Vehicle control method and system, vehicle-mounted intelligent system, electronic device, and medium
JP6870660B2 (ja) * 2018-06-08 2021-05-12 トヨタ自動車株式会社 Driver monitoring device
CN108961669A (zh) * 2018-07-19 2018-12-07 上海小蚁科技有限公司 Safety early-warning method and device for ride-hailing vehicles, storage medium, and server

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180029612A1 (en) * 2016-08-01 2018-02-01 Fujitsu Ten Limited Safe driving behavior notification system and safe driving behavior notification method
CN106709420A (zh) * 2016-11-21 2017-05-24 厦门瑞为信息技术有限公司 Method for monitoring the driving behavior of commercial vehicle drivers
CN106585624A (zh) * 2016-12-07 2017-04-26 深圳市元征科技股份有限公司 Driver state monitoring method and device
CN110399767A (zh) * 2017-08-10 2019-11-01 北京市商汤科技开发有限公司 Method and device for recognizing dangerous actions of persons in a vehicle, electronic device, and storage medium
CN107933471A (zh) * 2017-12-04 2018-04-20 惠州市德赛西威汽车电子股份有限公司 Method for actively calling for rescue in an accident, and vehicle-mounted automatic SOS system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022077282A (ja) * 2020-11-11 2022-05-23 株式会社コムテック Alarm system

Also Published As

Publication number Publication date
US10853675B2 (en) 2020-12-01
US20190065873A1 (en) 2019-02-28
EP3666577A1 (en) 2020-06-17
SG11202009720QA (en) 2020-10-29
SG11202002549WA (en) 2020-04-29
CN110399767A (zh) 2019-11-01
US20210009150A1 (en) 2021-01-14
KR20200124278A (ko) 2020-11-02
KR20200051632A (ko) 2020-05-13
US20210049386A1 (en) 2021-02-18
JP2021517313A (ja) 2021-07-15
KR102391279B1 (ko) 2022-04-26
CN109937152A (zh) 2019-06-25
TW202033395A (zh) 2020-09-16
US20210049387A1 (en) 2021-02-18
US20210049388A1 (en) 2021-02-18
TWI758689B (zh) 2022-03-21
WO2019028798A1 (zh) 2019-02-14
EP3666577A4 (en) 2020-08-19
WO2019029195A1 (zh) 2019-02-14
JP6933668B2 (ja) 2021-09-08
JP2019536673A (ja) 2019-12-19
CN109937152B (zh) 2022-03-25
CN109803583A (zh) 2019-05-24

Similar Documents

Publication Publication Date Title
WO2020173213A1 (zh) Method and device for recognizing dangerous actions of persons in a vehicle, electronic device, and storage medium
KR102446686B1 (ko) Passenger state analysis method and device, vehicle, electronic device, and storage medium
KR102469234B1 (ko) Driving state analysis method and device, driver monitoring system, and vehicle
JP7146959B2 (ja) Driving state detection method and device, driver monitoring system, and vehicle
KR102305914B1 (ko) Driving management method and system, vehicle-mounted intelligent system, electronic device, and medium
TWI741512B (zh) Driver attention monitoring method and device, and electronic device
US10915769B2 (en) Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium
KR101276770B1 (ko) Safe driving assistance system based on user-adaptive abnormal behavior detection
US11783600B2 (en) Adaptive monitoring of a vehicle using a camera
US11403879B2 (en) Method and apparatus for child state analysis, vehicle, electronic device, and storage medium
US20200247422A1 (en) Inattentive driving suppression system
CN104149620A (zh) Biometric recognition-based automobile safety system and use method thereof
KR20120074820A (ko) Vehicle control system using a face recognition function
KR102367399B1 (ko) Device and method for notifying the status of infants and toddlers
JP2022149287A (ja) Driver monitoring device, driver monitoring method, and computer program for driver monitoring
Ujir et al. Real-time driver’s monitoring mobile application through head pose, drowsiness and angry detection
US20240051465A1 (en) Adaptive monitoring of a vehicle using a camera
CN115359466A (zh) Abnormal driving behavior recognition method, device, equipment, and storage medium

Legal Events

Date Code Title Description
ENP Entry into the national phase
Ref document number: 2020551547; Country of ref document: JP; Kind code of ref document: A
ENP Entry into the national phase
Ref document number: 20207027781; Country of ref document: KR; Kind code of ref document: A
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 19917460; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 19917460; Country of ref document: EP; Kind code of ref document: A1