WO2021159269A1 - Vision-based safe driving early warning method, system and storage medium - Google Patents

Vision-based safe driving early warning method, system and storage medium

Info

Publication number
WO2021159269A1
Authority
WO
WIPO (PCT)
Prior art keywords
driver
safe driving
early warning
vision
gaze direction
Prior art date
Application number
PCT/CN2020/074703
Other languages
English (en)
French (fr)
Inventor
张锐
唐虹刚
李升林
孙立林
Original Assignee
云图技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 云图技术有限公司
Priority to PCT/CN2020/074703
Publication of WO2021159269A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention

Definitions

  • This specification relates to the technical field of assisted driving of vehicles, in particular to a vision-based safe driving early warning method, system and storage medium.
  • the behavior state of the driver is one of the important factors affecting driving safety.
  • technical solutions such as early warning of safe driving by judging the degree of eye fatigue have emerged.
  • statistical studies show that, in traffic accidents related to driver behavior, physiological factors such as fatigue driving are not the major causes of accidents.
  • in many cases, inattention is one of the important factors leading to traffic accidents. Therefore, how to determine whether the driver's gaze is concentrated, so that a corresponding safe driving warning can be given, has become a technical problem that urgently needs to be solved.
  • the purpose of the embodiments of this specification is to provide a vision-based safe driving early warning method, system and storage medium to improve driving safety.
  • the embodiments of this specification provide a vision-based safe driving early warning method, including:
  • the embodiments of this specification also provide a vision-based safe driving early warning system, including:
  • Image acquisition device used to track the driver's facial image
  • An image processing device for acquiring a three-dimensional map of the cockpit, and determining the projection position of the driver's gaze direction on the three-dimensional map according to the eye features contained in the facial image; when the projection position falls outside a designated position range, an early warning instruction is issued;
  • the early warning execution device outputs safe driving warning information according to the early warning instruction.
  • the embodiment of this specification also provides a computer storage medium on which a computer program is stored, and the computer program is executed by a processor to execute the above safe driving early warning method.
  • the embodiments of this specification can determine the projection position of the driver's gaze direction on the three-dimensional cockpit map based on the eye features tracked in the driver's facial image; when the projection position falls outside the designated position range, it can be confirmed that the driver's attention is not focused on the correct line-of-sight area, and a safe driving warning can be given accordingly. That is, when the driver is inattentive, a timely safe driving warning is given, which improves driving safety and helps effectively reduce traffic accidents caused by inattention while driving.
  • Fig. 1 is a structural block diagram of a vision-based safe driving early warning system in some embodiments of this specification;
  • Fig. 2 is a schematic diagram of the driver's gaze direction in an embodiment of this specification;
  • Fig. 3a is a schematic diagram of the gaze direction after weighting the feature values of the primary and secondary eyes in an embodiment of this specification;
  • Fig. 3b is a schematic diagram of the gaze direction after weighting the feature values of the primary and secondary eyes in another embodiment of this specification;
  • Fig. 4 is a schematic diagram of a designated position range in an embodiment of this specification;
  • Fig. 5 is a schematic diagram of judging whether the projection position of the gaze direction on the three-dimensional map exceeds the designated position range in an embodiment of this specification;
  • Fig. 6 is a flowchart of a vision-based safe driving early warning method in some embodiments of this specification.
  • some embodiments of this specification may take a car as an example for description.
  • the embodiments of this specification can be adapted to any ground vehicle with a cockpit, for example, it can include, but is not limited to, various automobiles and the like.
  • the ground vehicle can be controlled by the driver.
  • although some ground vehicles may have automatic driving functions, in driving scenarios where the automatic driving mode is not turned on they still need to be operated by the driver. Therefore, the embodiments of this specification can also be applied to ground vehicles with autonomous driving functions.
  • a vision-based safe driving early warning system may include an image acquisition device 11, an image processing device 12 and an early warning execution device 13.
  • the image acquisition device 11 can be used to track the facial image of the driver.
  • the image processing device 12 may be used to acquire a three-dimensional map of the cockpit and to determine the projection position of the driver's gaze direction on the three-dimensional map according to the eye features contained in the facial image; when the projection position falls outside the designated position range, an early warning instruction is issued.
  • the early warning execution device 13 can output safe driving warning information according to the early warning instruction.
  • the embodiments of this specification can determine the projection position of the driver's gaze direction on the three-dimensional cockpit map based on the eye features tracked in the driver's facial image; when the projection position falls outside the designated position range, it can be confirmed that the driver's attention is not focused on the correct line-of-sight area, and a safe driving warning can be given accordingly, so that a timely warning is given whenever the driver is inattentive. This improves driving safety and helps effectively reduce traffic accidents caused by inattention while driving.
  • the image acquisition device 11 can realize real-time acquisition of cockpit images from different shooting angles, so as to obtain the three-dimensional spatial distribution in the cockpit.
  • the image acquisition device 11 may be a camera that can freely rotate 360 degrees to shoot, which may help reduce the implementation cost.
  • the camera can have a face tracking and shooting function to realize real-time collection of cockpit images including faces from different shooting angles.
  • the image acquisition device 11 may also consist of multiple cameras, whose shooting angles may be fixed and may differ from one another, so that they cooperate to complete real-time acquisition of the three-dimensional spatial distribution of the cockpit.
  • the image acquisition device 11 may be an infrared camera, which can reduce the influence on the system of changes in the lighting environment (for example, dimming of the light at night or on cloudy days), and thus helps improve the stability of the system. Moreover, compared with ordinary white light, infrared light does not dazzle the driver and therefore does not visually interfere with the driver.
  • the image processing device 12 may be a processor.
  • the processor may include, but is not limited to, a central processing unit (CPU), a single-chip microcomputer, a micro control unit (MCU), a digital signal processor (DSP), a programmable logic controller (PLC), and so on.
  • the image processing device 12 can control the image acquisition device 11 to collect cockpit images, and can build a three-dimensional map of the cockpit based on the cockpit images, so that the driver's real-time facial image can be displayed on the three-dimensional map.
  • the image processing device 12 can calculate the relative spatial position of the driver's gaze direction in the three-dimensional map based on the eye features contained in the facial image.
  • the projection position of the driver's gaze direction on the three-dimensional map is also determined.
  • the image processing device 12 can process the cockpit image based on any suitable real-time positioning and map construction algorithm (Simultaneous Localization And Mapping, SLAM for short) to create and maintain a three-dimensional map of the cockpit.
  • the SLAM algorithm may be ORB-SLAM, LSD-SLAM, or the like.
  • the SLAM algorithm (especially the open source SLAM algorithm) is easy to implement and helps reduce the cost of implementation.
  • the SLAM algorithm is taken as an example for illustration, and it should not be understood as a limitation of the specification.
  • any three-dimensional mapping method that can construct and maintain a three-dimensional map of the cockpit in real time can be applied in the embodiments of this specification.
  • the determining the projection position of the driver's gaze direction on the three-dimensional map according to the eye features contained in the facial image may include the following steps:
  • the image processing device 12 can extract the feature values of the eyes from the facial image.
  • the characteristics of different positions of the face are different, for example, the eyes, nose, mouth, etc. each have their own shape characteristics.
  • after the image processing device 12 locks onto the driver's face through the image acquisition device 11, the eye positions can easily be identified from the feature values of the facial image collected by the image acquisition device 11, so that the feature values of both eyes can be extracted from the image.
  • when the human eye gazes in different directions, the orientation of the eyeball in three-dimensional space differs. For example, when the eye looks upward the eyeball points upward; when the eye looks downward the eyeball points downward; and so on. Therefore, the pitch angle and yaw angle of the eyeball center relative to a reference axis (or the front-back, left-right and up-down offset distances of the eyeball center relative to the reference axis) can be used to judge the gaze direction of the eyeball.
  • because there is an interpupillary distance between the two eyes, one eyeball's projection may fall inside the designated position range while the other's falls outside it, which confuses recognition of the gaze direction; since the gaze directions of a person's two eyes are usually consistent, the ray starting at the midpoint of the interpupillary distance and parallel to the orientation of both eyes can be regarded as the driver's gaze direction (for example, as shown in Fig. 2).
  • the reference axis may refer to the gaze direction of the driver when the driver is looking straight ahead, that is, the straight line that passes through the center of the eyeball and is parallel to the ground when the driver is looking straight ahead.
  • taking the above ray, which passes through the midpoint of the interpupillary distance and is parallel to the orientation of both eyes, as the driver's gaze direction is only an exemplary illustration.
  • the driver's gaze direction may be any suitable direction provided that the above recognition confusion can be overcome.
  • the gaze direction of the driver can also be determined based on the weighted sum of the characteristics of the primary eyeball and the characteristics of the secondary eyeball in the eyes.
  • research shows that almost no one uses both eyes equally: the brain prefers to use one eye for imaging analysis and for locating objects; this eye is called the dominant eye (primary eye for short), and the other eye is called the secondary eye. Therefore, determining the driver's gaze direction from a weighted sum of the features of the primary eyeball and the features of the secondary eyeball can help identify the gaze direction more accurately.
  • which eye of the driver is the dominant eye can be preset through configuration parameters, or it can be automatically determined by the system.
  • when the driver's dominant eye is the left eye, the weight of the left eye's feature values can be increased and the weight of the right eye's feature values reduced accordingly, so that the starting point of the driver's gaze direction moves closer to the center of the left eyeball rather than the midpoint of the interpupillary distance, as shown for example in Fig. 3a, where 31 denotes the eyeball center of the left eye, 32 the eyeball center of the right eye, 33 the midpoint of the interpupillary distance, and the arrow indicates the gaze direction.
  • similarly, when the driver's dominant eye is the right eye, the weight of the right eye's feature values can be increased and the weight of the left eye's feature values reduced accordingly, so that the starting point of the driver's gaze direction moves closer to the center of the right eyeball rather than the midpoint of the interpupillary distance, as shown for example in Fig. 3b, where 31 denotes the eyeball center of the left eye, 32 the eyeball center of the right eye, 33 the midpoint of the interpupillary distance, and the arrow indicates the gaze direction.
  • the adjustment of the weights of the characteristic values of the primary and secondary eyes can be set as required.
  • the projection position of the driver's gaze direction on the three-dimensional map refers to the intersection of the driver's gaze direction and the surface of the cockpit.
  • for example, when the driver gazes at the roof, the projection position is the intersection of the gaze direction with the roof; when the driver gazes at the front windshield, it is the intersection with the front windshield; and so on.
  • the correct line of sight area can generally include the area covered by the driver when looking ahead.
  • the area covered by the front windshield of the vehicle may be taken as the designated position range (for example, as shown by the area enclosed by the dashed line in FIG. 4).
  • with reference to Fig. 5, again taking the area covered by the vehicle's front windshield as the designated position range, when the projection position of the driver's gaze direction on the three-dimensional map falls outside the designated position range (for example, point B in Fig. 5),
  • the driver can be warned of safe driving to remind the driver to drive safely.
  • the projection position of the driver's gaze direction on the three-dimensional map is within the designated position range (for example, as shown at point A in FIG. 5)
  • the driver may not be given a safe driving warning and the detection may continue.
  • the designated position range may also include a certain tolerance margin to reduce the probability of misjudgment.
  • occasionally observing the road conditions of the adjacent lane through a side-view mirror is generally correct driving behavior.
  • if the area covered by the front windshield is taken as the designated position range, a gaze direction falling on a side-view mirror would be judged as not focused on the correct line-of-sight area, which is obviously not what the user expects. Therefore, at least the area covered by the side-view mirrors can be used as a tolerance margin; that is, when the driver looks at a side-view mirror, the gaze direction can still be regarded as falling within the correct line-of-sight area.
  • it is unrealistic to expect the driver to keep the gaze direction within the correct line-of-sight area throughout the entire drive, and an extremely short excursion (for example, no more than 0.01 seconds) does not significantly increase the accident probability; therefore, timing can start when the projection position moves outside the designated position range, and when the duration outside the range exceeds a duration threshold, a safe driving warning can be given to remind the driver to drive safely.
  • the duration of the projection position outside the designated position range does not exceed the duration threshold, the driver may not be given a safe driving warning to reduce the frequency of safe driving warnings, reduce the driving interference to the driver, and improve the user experience.
  • in some special cases, for example when the driver's line of sight is blocked, the driver turns to look backward, or the driver closes the eyes due to drowsiness, the driver's eyes, or even the driver's face, may not be captured; in such cases the image processing device 12 needs to give the driver a safe driving warning in time to avoid causing a traffic accident.
  • for example, in an exemplary embodiment, when no facial image has been tracked for longer than a duration threshold, or a tracked facial image has contained no eyes for longer than a duration threshold, the image processing device 12 can give the driver a safe driving warning.
  • the early warning execution device 13 may be any suitable voice prompt device (such as a voice alarm or a buzzer), light prompt device (such as a flashing indicator light or a color indicator light), and/or graphic prompt device (such as a display screen).
  • the safe driving warning information output by the early warning execution device 13 may be associated with the severity of the driver's failure to focus on the correct line-of-sight area. For example, when the frequency with which the driver's gaze leaves the correct line-of-sight area reaches a set frequency threshold (or the duration reaches another duration threshold), the early warning execution device 13 can output stronger safe driving warning information, so as to draw the driver's full attention to it. Taking voice prompts as an example, when the failure to focus on the correct line-of-sight area is relatively severe, the early warning execution device 13 can output warning information with a firmer tone, greater urgency, and/or greater loudness.
  • in correspondence with the above system, this specification also provides a vision-based safe driving early warning method.
  • the vision-based safe driving early warning method can be applied to the image processing device 12 side. As shown in FIG. 6, in some embodiments of this specification, the vision-based safe driving early warning method may include the following steps:
  • S601: Acquire a three-dimensional map of the cockpit.
  • S602: Track the driver's facial image.
  • S603: Determine, according to the eye features contained in the facial image, the projection position of the driver's gaze direction on the three-dimensional map.
  • S604: Judge whether the projection position falls outside the designated position range.
  • S605: When the projection position falls outside the designated position range, give the driver a safe driving warning.
  • the acquiring a three-dimensional map of the cockpit may include:
  • acquiring cockpit images collected from different angles, and processing the cockpit images based on a preset SLAM algorithm to create the three-dimensional map of the cockpit.
  • the determining the projection position of the driver's gaze direction on the three-dimensional map according to the eye features contained in the facial image may include:
  • extracting the feature values of both eyes from the facial image, and determining the driver's gaze direction according to the feature values of both eyes.
  • the determining the gaze direction of the driver according to the characteristic values of the eyes includes:
  • the gaze direction of the driver is determined according to the weighted sum of the features of the primary eyeballs and the features of the secondary eyeballs in the two eyes.
  • the gaze direction may include: a ray direction starting from the midpoint of the interpupillary distance and parallel to the orientation of both eyes.
  • giving a safe driving early warning to the driver when the projection position falls outside the designated position range may include: giving the warning when the duration for which the projection position stays outside the range exceeds a duration threshold.
  • the projection position is outside the designated position range, which may include:
  • the projection position is outside the designated position range and exceeds the preset offset margin.
  • the vision-based safe driving early warning method may further include:
  • when no facial image has been tracked for longer than a duration threshold, or a tracked facial image has contained no eyes for longer than a duration threshold, the driver is given a safe driving warning.
  • the facial image may include an infrared image.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing, and the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • the computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-permanent memory in a computer readable medium, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of computer readable media.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
  • this specification can be provided as a method, a system or a computer program product. Therefore, this specification may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, this specification can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.
  • program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
  • This specification can also be practiced in distributed computing environments where tasks are performed by remote processing devices connected through a communication network.
  • program modules can be located in local and remote computer storage media including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)

Abstract

A vision-based safe driving early warning method, system and storage medium are provided. The method includes: acquiring a three-dimensional map of a cockpit; tracking a facial image of a driver; determining, according to eye features contained in the facial image, a projection position of the driver's gaze direction on the three-dimensional map; and giving the driver a safe driving warning when the projection position falls outside a designated position range. A timely safe driving warning is thus given when the driver is inattentive, which improves driving safety and helps effectively reduce traffic accidents caused by inattention while driving.

Description

Vision-based safe driving early warning method, system and storage medium
Technical Field
This specification relates to the technical field of vehicle assisted driving, and in particular to a vision-based safe driving early warning method, system and storage medium.
Background Art
The driver's behavioral state is one of the important factors affecting driving safety. To improve driving safety, technical solutions have emerged that give safe driving warnings by, for example, judging the degree of eye fatigue. However, statistical studies show that, among traffic accidents related to driver behavior, physiological factors such as fatigue driving are not the major causes; in many cases, inattention is one of the important factors leading to traffic accidents. Therefore, how to determine whether the driver's gaze is concentrated, so that a corresponding safe driving warning can be given, has become a technical problem urgently needing a solution.
Summary of the Invention
The purpose of the embodiments of this specification is to provide a vision-based safe driving early warning method, system and storage medium, so as to improve driving safety.
To achieve the above purpose, in one aspect, the embodiments of this specification provide a vision-based safe driving early warning method, including:
acquiring a three-dimensional map of a cockpit;
tracking a facial image of a driver;
determining, according to eye features contained in the facial image, a projection position of the driver's gaze direction on the three-dimensional map; and
giving the driver a safe driving warning when the projection position falls outside a designated position range.
In another aspect, the embodiments of this specification further provide a vision-based safe driving early warning system, including:
an image acquisition device, configured to track a facial image of a driver;
an image processing device, configured to acquire a three-dimensional map of a cockpit, determine, according to eye features contained in the facial image, a projection position of the driver's gaze direction on the three-dimensional map, and issue an early warning instruction when the projection position falls outside a designated position range; and
an early warning execution device, configured to output safe driving warning information according to the early warning instruction.
In another aspect, the embodiments of this specification further provide a computer storage medium on which a computer program is stored, the computer program, when executed by a processor, performing the above safe driving early warning method.
As can be seen from the technical solutions provided by the above embodiments, the embodiments of this specification can determine the projection position of the driver's gaze direction on the three-dimensional cockpit map based on the eye features tracked in the driver's facial image. When the projection position falls outside the designated position range, it can be confirmed that the driver's attention is not focused on the correct line-of-sight area, and a safe driving warning can be given accordingly. In other words, when the driver is inattentive, a timely safe driving warning is given, which improves driving safety and helps effectively reduce traffic accidents caused by inattention while driving.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of this specification or of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in this specification; those of ordinary skill in the art can obtain other drawings from them without creative effort. In the drawings:
Fig. 1 is a structural block diagram of a vision-based safe driving early warning system in some embodiments of this specification;
Fig. 2 is a schematic diagram of the driver's gaze direction in an embodiment of this specification;
Fig. 3a is a schematic diagram of the gaze direction after weighting the feature values of the primary and secondary eyes in an embodiment of this specification;
Fig. 3b is a schematic diagram of the gaze direction after weighting the feature values of the primary and secondary eyes in another embodiment of this specification;
Fig. 4 is a schematic diagram of a designated position range in an embodiment of this specification;
Fig. 5 is a schematic diagram of judging whether the projection position of the gaze direction on the three-dimensional map exceeds the designated position range in an embodiment of this specification;
Fig. 6 is a flowchart of a vision-based safe driving early warning method in some embodiments of this specification.
Detailed Description
To enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of this specification, not all of them. Based on the embodiments in this specification, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the scope of protection of this specification.
For ease of understanding and explanation, some embodiments of this specification may take a car as an example. However, those skilled in the art will understand that the embodiments of this specification can be applied to any ground vehicle with a cockpit, which may include, but is not limited to, various automobiles. Such a ground vehicle can be operated by a driver. In some cases, although certain ground vehicles may have automatic driving functions, in driving scenarios where the automatic driving mode is not turned on they still need to be operated by the driver; therefore, the embodiments of this specification can also be applied to ground vehicles with autonomous driving functions.
Referring to Fig. 1, in some embodiments of this specification, the vision-based safe driving early warning system may include an image acquisition device 11, an image processing device 12, and an early warning execution device 13. The image acquisition device 11 can be used to track the driver's facial image. The image processing device 12 can be used to acquire a three-dimensional map of the cockpit and determine, according to the eye features contained in the facial image, the projection position of the driver's gaze direction on the three-dimensional map; when the projection position falls outside a designated position range, it issues an early warning instruction. The early warning execution device 13 can output safe driving warning information according to the early warning instruction.
It can thus be seen that the embodiments of this specification can determine the projection position of the driver's gaze direction on the three-dimensional cockpit map based on the eye features tracked in the driver's facial image. When the projection position falls outside the designated position range, it can be confirmed that the driver's attention is not focused on the correct line-of-sight area, and a safe driving warning can be given accordingly; that is, when the driver is inattentive, a timely warning is given, which improves driving safety and helps effectively reduce traffic accidents caused by inattention while driving.
In some embodiments of this specification, the image acquisition device 11 can collect cockpit images in real time from different shooting angles, so as to obtain the three-dimensional spatial distribution of the cockpit. For example, in an exemplary embodiment, the image acquisition device 11 may be a single camera that can rotate freely through 360 degrees, which helps reduce the implementation cost. The camera may have a face tracking and shooting function, so that cockpit images including the face can be collected in real time from different shooting angles. Of course, this is only an example, and this specification is not limited thereto; in other embodiments, the image acquisition device 11 may also consist of multiple cameras whose shooting angles may be fixed and may differ from one another, so that they cooperate to complete real-time acquisition of the three-dimensional spatial distribution of the cockpit.
Since infrared light is robust to different lighting conditions, in some embodiments of this specification the image acquisition device 11 is preferably an infrared camera. This reduces the influence on the system of changes in the lighting environment (for example, dimming of the light at night or on cloudy days), which helps improve the stability of the system. Moreover, compared with ordinary white light, infrared light does not dazzle the driver and therefore does not visually interfere with the driver.
In some embodiments of this specification, the image processing device 12 may be a processor. For example, in an exemplary embodiment of this specification, the processor may include, but is not limited to, a central processing unit (CPU), a single-chip microcomputer, a micro control unit (MCU), a digital signal processor (DSP), a programmable logic controller (PLC), and so on.
In some embodiments of this specification, after the system is enabled, the image processing device 12 can control the image acquisition device 11 to collect cockpit images and build a three-dimensional map of the cockpit from them, so that the driver's real-time facial image can be displayed on the three-dimensional map. When the facial image collected in real time by the image acquisition device 11 is updated, the three-dimensional map is updated accordingly. In this way, the image processing device 12 can calculate, from the eye features contained in the facial image, the relative spatial position of the driver's gaze direction in the three-dimensional map. Once this relative spatial position is determined, the projection position of the driver's gaze direction on the three-dimensional map is also determined.
In an exemplary embodiment of this specification, the image processing device 12 can process the cockpit images based on any suitable simultaneous localization and mapping (SLAM) algorithm to create and maintain the three-dimensional map of the cockpit; this specification does not limit the choice, which can be made as required. For example, in an exemplary embodiment, the SLAM algorithm may be ORB-SLAM, LSD-SLAM, or the like. SLAM algorithms (especially open-source SLAM algorithms) are easy to implement, which helps reduce the implementation cost. It should be pointed out that the SLAM algorithm is used here only as an example and should not be understood as limiting this specification; in other embodiments, any three-dimensional mapping method that can construct and maintain a three-dimensional map of the cockpit in real time can be applied to the embodiments of this specification.
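To make the mapping step concrete, the following is a minimal, illustrative Python sketch (not part of the original disclosure) of how sparse three-dimensional cockpit structure can be recovered from two views by feature matching and triangulation, in the spirit of feature-based SLAM such as ORB-SLAM. It assumes OpenCV and NumPy, a calibrated camera with intrinsic matrix K, and sufficient texture in the scene; a real SLAM system would add frame-to-frame tracking, loop closure, and incremental map maintenance.

```python
import cv2
import numpy as np

def triangulate_pair(img1, img2, K):
    """Recover relative camera pose and a sparse 3D point cloud from two
    cockpit views (grayscale images); K is the 3x3 intrinsic matrix."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Essential matrix with RANSAC, then relative pose of view 2 w.r.t. view 1
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    # Projection matrices for the two views, then linear triangulation
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T  # N x 3 map points
```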
In some embodiments of this specification, determining the projection position of the driver's gaze direction on the three-dimensional map according to the eye features contained in the facial image may include the following steps:
1) The image processing device 12 extracts the feature values of both eyes from the facial image.
In general, different parts of the face have different features; for example, the eyes, nose, and mouth each have their own shape features. After the image processing device 12 locks onto the driver's face through the image acquisition device 11, the eye positions can easily be identified from the feature values of the facial image collected by the image acquisition device 11, so that the feature values of both eyes can be extracted from the image.
2) The driver's gaze direction is determined according to the feature values of both eyes.
When the eyes gaze in different directions, the orientation of the eyeballs in three-dimensional space differs; for example, when the eye looks upward the eyeball points upward, when the eye looks downward the eyeball points downward, and so on. Therefore, the pitch angle and yaw angle of the eyeball center relative to a reference axis (or the front-back, left-right and up-down offset distances of the eyeball center relative to the reference axis) can be used to judge the gaze direction of the eyeball. Because there is an interpupillary distance between the two eyes, in some cases the projection of one eyeball on the three-dimensional map may fall inside the designated position range while the projection of the other falls outside it, which confuses recognition of the driver's gaze direction. Considering that the gaze directions of a person's two eyes are usually consistent, in an embodiment of this specification, to overcome this recognition confusion and to simplify the calculation, the ray starting at the midpoint of the interpupillary distance and parallel to the orientation of both eyes can be regarded as the driver's gaze direction (for example, as shown in Fig. 2).
Here, the reference axis may refer to the driver's gaze direction when looking straight ahead, that is, the straight line that passes through the eyeball center and is parallel to the ground when the driver looks straight ahead. It should be pointed out that taking the ray that passes through the midpoint of the interpupillary distance and is parallel to the orientation of both eyes as the driver's gaze direction is only an exemplary illustration; in other embodiments of this specification, the driver's gaze direction may be any suitable direction, provided the above recognition confusion can be overcome. For example, the driver's gaze direction may also be determined from a weighted sum of the features of the primary eyeball and the features of the secondary eyeball. Research shows that almost no one uses both eyes equally: the brain prefers to use one eye for imaging analysis and for locating objects; this eye is called the dominant eye (primary eye for short), and correspondingly the other eye is called the secondary eye. Therefore, determining the driver's gaze direction from a weighted sum of the features of the primary and secondary eyeballs can help identify the gaze direction more accurately. In the embodiments of this specification, which of the driver's eyes is the primary eye can be preset through configuration parameters or determined automatically by the system.
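As an illustration only, the gaze ray described above can be represented as a starting point plus a unit direction vector. The sketch below converts estimated pitch and yaw angles into such a ray; the coordinate convention (x to the driver's right, y up, z straight ahead along the reference axis) and all names are illustrative assumptions, not the patent's API.

```python
import numpy as np

def gaze_ray(left_center, right_center, pitch, yaw):
    """Gaze ray with origin at the midpoint of the interpupillary
    distance; pitch and yaw (radians) are measured relative to the
    straight-ahead reference axis."""
    origin = (np.asarray(left_center, float) +
              np.asarray(right_center, float)) / 2.0
    direction = np.array([
        np.cos(pitch) * np.sin(yaw),  # left/right component
        np.sin(pitch),                # up/down component
        np.cos(pitch) * np.cos(yaw),  # forward component
    ])
    return origin, direction / np.linalg.norm(direction)
```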
For example, when the driver's primary eye is the left eye, the weight of the left eye's feature values can be increased and the weight of the right eye's feature values reduced accordingly, so that the starting point of the driver's gaze direction moves closer to the center of the left eyeball instead of the midpoint of the interpupillary distance, as shown for example in Fig. 3a. In Fig. 3a, 31 denotes the eyeball center of the left eye, 32 the eyeball center of the right eye, and 33 the midpoint of the interpupillary distance; the arrow indicates the gaze direction. Similarly, when the driver's primary eye is the right eye, the weight of the right eye's feature values can be increased and the weight of the left eye's feature values reduced accordingly, so that the starting point of the driver's gaze direction moves closer to the center of the right eyeball, as shown for example in Fig. 3b, where the same reference numerals apply. In the embodiments of this specification, the adjustment of the feature-value weights of the primary and secondary eyes can be set as required.
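The primary/secondary-eye weighting can be illustrated as a weighted mean of the two eyeball centers. A minimal sketch follows; the default weight of 0.7 is an arbitrary assumption for demonstration, since the patent leaves the weights to be set as required (w = 0.5 reproduces the interpupillary midpoint of Fig. 2).

```python
import numpy as np

def weighted_gaze_origin(left_center, right_center, primary="left", w=0.7):
    """Shift the gaze-ray origin toward the primary (dominant) eye by
    weighting the two eyeball centers, as in Figs. 3a/3b."""
    l = np.asarray(left_center, float)
    r = np.asarray(right_center, float)
    if primary == "left":
        return w * l + (1.0 - w) * r
    return (1.0 - w) * l + w * r
```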
In some embodiments of this specification, the projection position of the driver's gaze direction on the three-dimensional map refers to the intersection of the driver's gaze direction with a surface of the cockpit. For example, when the driver gazes at the roof, the projection position is the intersection of the gaze direction with the roof; when the driver gazes at the front windshield, it is the intersection with the front windshield, and so on. Experimental studies show that when the driver habitually keeps the gaze direction within the correct line-of-sight area, the probability of a traffic accident caused by inattention is greatly reduced; conversely, when the driver habitually gazes outside the correct line-of-sight area, that probability greatly increases. The correct line-of-sight area can generally include the area covered when the driver looks ahead. For example, in an exemplary embodiment, the area covered by the vehicle's front windshield may be taken as the designated position range (for example, the area enclosed by the dashed line in Fig. 4). With reference to Fig. 5, again taking the area covered by the front windshield as the designated position range, when the projection position of the driver's gaze direction on the three-dimensional map falls outside the designated position range (for example, point B in Fig. 5), a safe driving warning can be given to remind the driver to drive safely. When the projection position falls within the designated position range (for example, point A in Fig. 5), no warning need be given, and detection can continue.
In other embodiments of this specification, the designated position range may also include a certain tolerance margin to reduce the probability of misjudgment. For example, occasionally observing the road conditions of the adjacent lane through a side-view mirror is generally correct driving behavior. If the area covered by the front windshield is taken as the designated position range, then when the driver's gaze direction falls on a side-view mirror the driver would be judged as not focusing on the correct line-of-sight area, which is obviously not what the user expects. Therefore, at least the area covered by the side-view mirrors can be used as a tolerance margin; that is, when the driver looks at a side-view mirror, the gaze direction can still be regarded as falling within the correct line-of-sight area.
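For illustration, if the front windshield is approximated by a plane and the designated range and mirror margins by coarse axis-aligned boxes (simplifying assumptions; the patent works on the full three-dimensional map), the projection position and the range check might be sketched as follows:

```python
import numpy as np

def project_onto_surface(origin, direction, plane_point, plane_normal):
    """Intersect the gaze ray with a cockpit surface approximated by a
    plane (e.g. the front windshield). Returns the 3D intersection, or
    None if the ray is parallel to the plane or points away from it."""
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    normal = np.asarray(plane_normal, float)
    denom = float(direction @ normal)
    if abs(denom) < 1e-9:
        return None
    t = float((np.asarray(plane_point, float) - origin) @ normal) / denom
    return origin + t * direction if t > 0 else None

def in_designated_range(point, windshield_box, mirror_boxes):
    """True if the projection falls in the windshield region or in a
    tolerated region such as a side-view mirror. Each region is a
    (min_corner, max_corner) pair of 3D points."""
    def inside(p, box):
        lo, hi = box
        return bool(np.all(p >= np.asarray(lo)) and np.all(p <= np.asarray(hi)))
    return inside(point, windshield_box) or any(
        inside(point, box) for box in mirror_boxes)
```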
It is unrealistic to expect the driver to keep the gaze direction within the correct line-of-sight area throughout the entire drive, and when the gaze leaves the correct area for an extremely short time (for example, no more than 0.01 seconds) the probability of a traffic accident does not increase significantly. Therefore, in other embodiments of this specification, timing can start when the projection position of the driver's gaze direction on the three-dimensional map moves outside the designated position range; when the duration outside the range exceeds a duration threshold, a safe driving warning can be given to remind the driver to drive safely. When the duration does not exceed the threshold, no warning need be given, which reduces the frequency of warnings, reduces interference with the driver, and improves the user experience.
In some special cases, for example when the driver's line of sight is blocked, the driver turns to look backward, or the driver closes the eyes due to drowsiness, the driver's eyes, or even the driver's face, may not be captured. In such cases the image processing device 12 needs to warn the driver in time to avoid a traffic accident. For example, in an exemplary embodiment of this specification, when no facial image has been tracked for longer than a duration threshold, or a tracked facial image has contained no eyes for longer than a duration threshold, the image processing device 12 can give the driver a safe driving warning.
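The duration-threshold logic, including the lost-face case, is essentially a debounce timer. A minimal sketch follows, with illustrative threshold values that are not taken from the patent:

```python
import time

class GazeWarningMonitor:
    """Warn only when the projection stays outside the designated range,
    or the face/eyes stay untracked, for longer than a threshold."""

    def __init__(self, off_region_s=2.0, face_lost_s=1.0):
        self.off_region_s = off_region_s  # gaze-off-area threshold (s)
        self.face_lost_s = face_lost_s    # face/eyes-lost threshold (s)
        self._off_since = None
        self._lost_since = None

    def update(self, face_found, eyes_found, in_region, now=None):
        """Feed one detection result per frame; returns True when a
        safe driving warning should be issued."""
        now = time.monotonic() if now is None else now
        if not (face_found and eyes_found):
            if self._lost_since is None:
                self._lost_since = now
            self._off_since = None
            return now - self._lost_since > self.face_lost_s
        self._lost_since = None
        if in_region:
            self._off_since = None
            return False
        if self._off_since is None:
            self._off_since = now
        return now - self._off_since > self.off_region_s
```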
In some embodiments of this specification, the early warning execution device 13 may be any suitable voice prompt device (such as a voice alarm or a buzzer), light prompt device (such as a flashing indicator light or a color indicator light), and/or graphic prompt device (such as a display screen).
In addition, to improve the effectiveness of the safe driving warnings, the safe driving warning information output by the early warning execution device 13 can be associated with the severity of the driver's failure to focus on the correct line-of-sight area. For example, when the frequency with which the driver's gaze leaves the correct line-of-sight area reaches a set frequency threshold (or the duration reaches another duration threshold), the early warning execution device 13 can output stronger warning information, so as to draw the driver's full attention to it. Taking voice prompts as an example, when the driver's failure to focus on the correct line-of-sight area is relatively severe, the early warning execution device 13 can output warning information with a firmer tone, greater urgency, and/or greater loudness.
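The severity escalation can be sketched as a simple mapping from off-area frequency and duration to an alert strength. The two-level scheme and the threshold values below are illustrative assumptions; the patent only requires that stronger warnings follow higher frequency or longer duration off the correct area.

```python
def warning_level(off_events_per_min, off_duration_s,
                  freq_threshold=3.0, long_duration_s=5.0):
    """Map the severity of gaze wandering to an alert strength."""
    if (off_events_per_min >= freq_threshold
            or off_duration_s >= long_duration_s):
        return "strong"  # e.g. firmer tone, more urgent, louder prompt
    return "normal"      # e.g. a gentle chime or an indicator light
```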
For convenience, the above devices are described in terms of separate functional units. Of course, when implementing this specification, the functions of the units may be realized in one or more pieces of software and/or hardware.
Corresponding to the above vision-based safe driving early warning system, this specification also provides a vision-based safe driving early warning method, which can be applied on the image processing device 12 side. Referring to Fig. 6, in some embodiments of this specification, the vision-based safe driving early warning method may include the following steps:
S601: Acquire a three-dimensional map of the cockpit.
S602: Track the driver's facial image.
S603: Determine, according to the eye features contained in the facial image, the projection position of the driver's gaze direction on the three-dimensional map.
S604: Judge whether the projection position falls outside the designated position range.
S605: When the projection position falls outside the designated position range, give the driver a safe driving warning.
When the projection position falls within the designated position range, no safe driving warning is given, and the driver's facial image can continue to be tracked for further detection; that is, the process returns to step S602.
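Putting steps S601 to S605 together, the method runs as a per-frame loop. In the sketch below every callable is injected so the control flow stays self-contained; all names are illustrative stand-ins, not the patent's API, and the three-dimensional map (S601) is assumed to have been built before the loop starts.

```python
def early_warning_loop(read_frame, find_eyes, estimate_gaze, project,
                       in_range, monitor, alarm):
    """One pass of steps S602-S605 per frame (sketch only)."""
    while True:
        frame = read_frame()                 # S602: track the facial image
        eyes = find_eyes(frame)              # extract eye features
        if eyes is None:                     # face/eyes not captured
            warn = monitor.update(False, False, False)
        else:
            origin, direction = estimate_gaze(eyes)  # S603: gaze ray
            hit = project(origin, direction)         # projection position
            ok = hit is not None and in_range(hit)   # S604: range check
            warn = monitor.update(True, True, ok)
        if warn:
            alarm()                          # S605: safe driving warning
```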
In the vision-based safe driving early warning method of an embodiment of this specification, the acquiring a three-dimensional map of the cockpit may include:
acquiring cockpit images collected from different angles;
processing the cockpit images based on a preset SLAM algorithm to create the three-dimensional map of the cockpit.
In the vision-based safe driving early warning method of an embodiment of this specification, the determining, according to the eye features contained in the facial image, the projection position of the driver's gaze direction on the three-dimensional map may include:
extracting the feature values of both eyes from the facial image;
determining the driver's gaze direction according to the feature values of both eyes.
In the vision-based safe driving early warning method of an embodiment of this specification, the determining the driver's gaze direction according to the feature values of both eyes includes:
determining the driver's gaze direction according to a weighted sum of the features of the primary eyeball and the features of the secondary eyeball of the two eyes.
In the vision-based safe driving early warning method of an embodiment of this specification, the gaze direction may include:
a ray direction starting from the midpoint of the interpupillary distance and parallel to the orientation of both eyes.
In the vision-based safe driving early warning method of an embodiment of this specification, the giving the driver a safe driving warning when the projection position falls outside the designated position range may include:
giving the driver a safe driving warning when the duration for which the projection position stays outside the designated position range exceeds a duration threshold.
In the vision-based safe driving early warning method of an embodiment of this specification, the projection position falling outside the designated position range may include:
the projection position being outside the designated position range and beyond a preset offset margin.
The vision-based safe driving early warning method of an embodiment of this specification may further include:
giving the driver a safe driving warning when no facial image has been tracked for longer than a duration threshold, or a tracked facial image containing no eyes has persisted for longer than a duration threshold.
In the vision-based safe driving early warning method of an embodiment of this specification, the facial image may include an infrared image.
Although the process flows described above include multiple operations appearing in a specific order, it should be clearly understood that these processes may include more or fewer operations, which may be executed sequentially or in parallel (for example, using parallel processors or a multi-threaded environment).
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing equipment to produce a machine, so that the instructions executed by the processor produce a device for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing, and the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent memory, random access memory (RAM) and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include persistent and non-persistent, removable and non-removable media, and can store information by any method or technology. The information can be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method or device that includes the element.
Those skilled in the art should understand that the embodiments of this specification can be provided as a method, a system or a computer program product. Therefore, this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
This specification can be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures and the like that perform specific tasks or implement specific abstract data types. This specification can also be practiced in distributed computing environments, where tasks are performed by remote processing devices connected through a communication network; in such environments, program modules can be located in local and remote computer storage media, including storage devices.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments can be referred to one another, and each embodiment focuses on its differences from the others. In particular, since the system embodiments are basically similar to the method embodiments, their description is relatively simple, and the relevant parts can be found in the description of the method embodiments.
The above are only embodiments of this specification and are not intended to limit it. Various modifications and changes will occur to those skilled in the art; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of this specification shall be included within the scope of its claims.

Claims (19)

  1. A vision-based safe driving early warning method, characterized by comprising:
    acquiring a three-dimensional map of a cockpit;
    tracking a facial image of a driver;
    determining, according to eye features contained in the facial image, a projection position of the driver's gaze direction on the three-dimensional map; and
    giving the driver a safe driving warning when the projection position falls outside a designated position range.
  2. The vision-based safe driving early warning method of claim 1, characterized in that the acquiring a three-dimensional map of the cockpit comprises:
    acquiring cockpit images collected from different angles; and
    processing the cockpit images based on a preset SLAM algorithm to create the three-dimensional map of the cockpit.
  3. The vision-based safe driving early warning method of claim 1, characterized in that the determining, according to eye features contained in the facial image, a projection position of the driver's gaze direction on the three-dimensional map comprises:
    extracting feature values of both eyes from the facial image; and
    determining the driver's gaze direction according to the feature values of both eyes.
  4. The vision-based safe driving early warning method of claim 3, characterized in that the determining the driver's gaze direction according to the feature values of both eyes comprises:
    determining the driver's gaze direction according to a weighted sum of features of the primary eyeball and features of the secondary eyeball of the two eyes.
  5. The vision-based safe driving early warning method of claim 3, characterized in that the gaze direction comprises:
    a ray direction starting from the midpoint of the interpupillary distance and parallel to the orientation of both eyes.
  6. The vision-based safe driving early warning method of claim 1, characterized in that the giving the driver a safe driving warning when the projection position falls outside a designated position range comprises:
    giving the driver a safe driving warning when the duration for which the projection position stays outside the designated position range exceeds a duration threshold.
  7. The vision-based safe driving early warning method of claim 1 or 6, characterized in that the projection position falling outside the designated position range comprises:
    the projection position being outside the designated position range and beyond a preset offset margin.
  8. The vision-based safe driving early warning method of claim 1, characterized by further comprising:
    giving the driver a safe driving warning when no facial image has been tracked for longer than a duration threshold, or a tracked facial image containing no eyes has persisted for longer than a duration threshold.
  9. The vision-based safe driving early warning method of claim 1, characterized in that the facial image comprises an infrared image.
  10. A vision-based safe driving early warning system, characterized by comprising:
    an image acquisition device, configured to track a facial image of a driver;
    an image processing device, configured to acquire a three-dimensional map of a cockpit, determine, according to eye features contained in the facial image, a projection position of the driver's gaze direction on the three-dimensional map, and issue an early warning instruction when the projection position falls outside a designated position range; and
    an early warning execution device, configured to output safe driving warning information according to the early warning instruction.
  11. The vision-based safe driving early warning system of claim 10, characterized in that the acquiring a three-dimensional map of the cockpit comprises:
    acquiring cockpit images collected from different angles; and
    processing the cockpit images based on a preset SLAM algorithm to create the three-dimensional map of the cockpit.
  12. The vision-based safe driving early warning system of claim 10, characterized in that the determining, according to eye features contained in the facial image, a projection position of the driver's gaze direction on the three-dimensional map comprises:
    extracting feature values of both eyes from the facial image; and
    determining the driver's gaze direction according to the feature values of both eyes.
  13. The vision-based safe driving early warning system of claim 12, characterized in that the determining the driver's gaze direction according to the feature values of both eyes comprises:
    determining the driver's gaze direction according to a weighted sum of features of the primary eyeball and features of the secondary eyeball of the two eyes.
  14. The vision-based safe driving early warning system of claim 12, characterized in that the gaze direction comprises:
    a ray direction starting from the midpoint of the interpupillary distance and parallel to the orientation of both eyes.
  15. The vision-based safe driving early warning system of claim 10, characterized in that the giving the driver a safe driving warning when the projection position falls outside a designated position range comprises:
    giving the driver a safe driving warning when the duration for which the projection position stays outside the designated position range exceeds a duration threshold.
  16. The vision-based safe driving early warning system of claim 10 or 15, characterized in that the projection position falling outside the designated position range comprises:
    the projection position being outside the designated position range and beyond a preset offset margin.
  17. The vision-based safe driving early warning system of claim 10, characterized in that the image processing device is further configured to:
    give the driver a safe driving warning when no facial image has been tracked for longer than a duration threshold, or a tracked facial image containing no eyes has persisted for longer than a duration threshold.
  18. The vision-based safe driving early warning system of claim 10, characterized in that the image acquisition device comprises an infrared image acquisition device.
  19. A computer storage medium on which a computer program is stored, characterized in that, when executed by a processor, the computer program performs the safe driving early warning method of any one of claims 1 to 9.
PCT/CN2020/074703 2020-02-11 2020-02-11 Vision-based safe driving early warning method, system and storage medium WO2021159269A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/074703 WO2021159269A1 (zh) 2020-02-11 2020-02-11 Vision-based safe driving early warning method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/074703 WO2021159269A1 (zh) 2020-02-11 2020-02-11 Vision-based safe driving early warning method, system and storage medium

Publications (1)

Publication Number Publication Date
WO2021159269A1 (zh)

Family

ID=77293126

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/074703 WO2021159269A1 (zh) 2020-02-11 2020-02-11 基于视觉的安全驾驶预警方法、***及存储介质

Country Status (1)

Country Link
WO (1) WO2021159269A1 (zh)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100462047C (zh) * 2007-03-21 2009-02-18 汤一平 Safe driving assistance device based on omnidirectional computer vision
CN102073857A (zh) * 2011-01-24 2011-05-25 沈阳工业大学 Multimodal driver fatigue detection method and dedicated device therefor
CN104574817A (zh) * 2014-12-25 2015-04-29 清华大学苏州汽车研究院(吴江) Machine-vision-based fatigue driving early warning system suitable for smartphones
CN104688251A (zh) * 2015-03-02 2015-06-10 西安邦威电子科技有限公司 Method for detecting fatigued and abnormal-posture driving under multiple postures
CN108764034A (zh) * 2018-04-18 2018-11-06 浙江零跑科技有限公司 Distracted driving behavior early warning method based on a near-infrared camera in the cab
US20190382003A1 (en) * 2018-06-13 2019-12-19 Toyota Jidosha Kabushiki Kaisha Collision avoidance for a connected vehicle based on a digital behavioral twin
CN110705416A (zh) * 2019-09-24 2020-01-17 武汉工程大学 Safe driving early warning method and system based on modeling of the driver's facial image
CN110638474A (zh) * 2019-09-25 2020-01-03 中控智慧科技股份有限公司 Driving state detection method, system, device and readable storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113938613A (zh) * 2021-11-26 2022-01-14 上海徐工智能科技有限公司 Construction machinery user data acquisition device and method

Similar Documents

Publication Publication Date Title
US11977675B2 (en) Primary preview region and gaze based driver distraction detection
US11182629B2 (en) Machine learning based driver assistance
WO2020029444A1 (zh) 一种驾驶员驾驶时注意力检测方法和***
CN110703904B (zh) 一种基于视线跟踪的增强虚拟现实投影方法及***
JP7099037B2 (ja) データ処理装置、モニタリングシステム、覚醒システム、データ処理方法、及びデータ処理プログラム
CN110544368B (zh) 一种疲劳驾驶增强现实预警装置及预警方法
US11685384B2 (en) Driver alertness detection method, device and system
JP2016210212A (ja) 情報提供装置、情報提供方法及び情報提供用制御プログラム
US20230356728A1 (en) Using gestures to control machines for autonomous systems and applications
JP7154959B2 (ja) 走行状況判断情報基盤の運転者状態認識装置及び方法
WO2021159269A1 (zh) 基于视觉的安全驾驶预警方法、***及存储介质
US11072324B2 (en) Vehicle and control method thereof
GB2547512B (en) Warning a vehicle occupant before an intense movement
CN107323343A (zh) 一种安全驾驶预警方法及***、汽车和可读存储介质
CN111267865B (zh) 基于视觉的安全驾驶预警方法、***及存储介质
JP7342637B2 (ja) 車両制御装置および運転者状態判定方法
JP7342636B2 (ja) 車両制御装置および運転者状態判定方法
JP2022047580A (ja) 情報処理装置
CN116572846A (zh) 一种车载电子后视镜的显示方法、***及存储介质
CN113525402B (zh) 高级辅助驾驶及无人驾驶视场智能响应方法及***
US11908208B2 (en) Interface sharpness distraction mitigation method and system
WO2021024905A1 (ja) 画像処理装置、モニタリング装置、制御システム、画像処理方法、コンピュータプログラム、及び記憶媒体
KR20160056189A (ko) 보행자 검출 경보 장치 및 방법
WO2021262166A1 (en) Operator evaluation and vehicle control based on eyewear data
CN112733572A (zh) 一种基于结构光的驾驶员监控方法及***

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20918635

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20918635

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12/07/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20918635

Country of ref document: EP

Kind code of ref document: A1