WO2020029444A1 - Method and system for detecting attention of driver while driving - Google Patents

Method and system for detecting attention of driver while driving

Info

Publication number
WO2020029444A1
WO2020029444A1 (PCT/CN2018/113670; priority: CN2018113670W)
Authority
WO
WIPO (PCT)
Prior art keywords
unit
time
real
driver
image acquisition
Prior art date
Application number
PCT/CN2018/113670
Other languages
French (fr)
Chinese (zh)
Inventor
侯喆
Original Assignee
初速度(苏州)科技有限公司
北京魔门塔科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 初速度(苏州)科技有限公司 and 北京魔门塔科技有限公司
Publication of WO2020029444A1 publication Critical patent/WO2020029444A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification

Definitions

  • In the comparison and evaluation unit, the gaze-focus ranges of most drivers under various traffic conditions are compared against the real-time driving-scene information; the driver image and the corresponding outside-vehicle image must carry the same timestamp. For example, when overtaking, the eyes should be on the left rearview-mirror area; during a turn, the eyes should be on the part of the front windshield corresponding to the turn direction.
  • A calculation unit compares the gaze position coordinates with the expected gaze area output by the real-time driving scene monitoring unit to determine whether the gaze actually falls within that area; finally, the comparison and evaluation unit may also grade the driver's attention into levels such as concentrated, slightly inattentive, fatigued driving, and drunk driving.
  • The area at which the eyes should currently be directed can also be obtained using big data.
  • A database is stored in the comparison and evaluation unit, holding the gaze areas of most drivers under various traffic conditions. For example, when a right turn is required, most drivers' gaze should fall on the right rearview mirror; when reversing, it should include the interior rearview mirror; when parking and getting out, it should include the left rearview mirror. The collected gaze positions of most drivers are classified by driving situation and stored in the database, for example as shown in Table 1:

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

A driving assistance method and system, relating to monitoring the attention of a driver in the field of driving assistance. The driving assistance system comprises a real-time human eye line of sight monitoring unit (1), a real-time driving scenario monitoring unit (2), a comparison and evaluation unit (3) and an alarm and emergency processing unit (4). An image acquisition device employs multiple cameras to take photos from different angles at the same time, and the obtained set of images acquired at the same time is used as a sample in a sample set so as to train a classifier, which improves output precision and finally enables the system to effectively detect the attention of a driver.

Description

Method and System for Detecting a Driver's Attention While Driving
Technical Field
The present invention relates to the field of driver assistance, and in particular to monitoring a driver's attention in assisted driving.
Background Art
At present, driver-attention monitoring in the driver-assistance field mostly relies on various sensors installed at the driving position: detecting whether the driver keeps the hands on the steering wheel for an extended period, whether the steering wheel is being turned, whether the vehicle registers an acceleration or braking signal from the driver, and so on. Such conventional sensing devices are often so sensitive that they cause the vehicle to overreact even though the driver is not actually inattentive. Conversely, a sensor may register that the driver is holding the steering wheel while the driver's attention is elsewhere, for example on a mobile phone or a call; in that case the prior art issues no attention warning. Existing driver-attention monitoring is therefore not adequate for complex driving situations.
The prior art also leaves many problems in image acquisition urgently unsolved.
For example: 1) Face images cannot be collected efficiently. Because multiple photos of the face from different angles must be captured for a single moment, a single camera has to make a two-dimensional back-and-forth sweep, which wastes a great deal of time. The subject must also hold still during shooting, which subjects usually cannot do for long while the camera sweeps, so many collected images contain errors. In addition, profile (large-angle) face images are hard to collect: the subject must turn the head by nearly 90 degrees, which first of all is an unpleasant experience, and secondly requires the fixation target to sit far off to the subject's side, making the fixation rig very long.
2) Prior-art attention detection focuses only on the face itself; the detected parameters are limited and are not combined with the current driving scenario. When a vehicle leaves its lane to overtake, for example, an accident can still occur if the driver, however attentive, lacks good driving habits such as checking the left and right rearview mirrors. Traditional detection methods cannot provide early warning in such cases.
Summary of the Invention
In view of the problems in the prior art, one objective of the present invention is to improve the accuracy of the images collected by the image acquisition device, so that the driver's attention can be detected accurately and in good time.
To this end, the present invention adopts the following technical solutions:
A driving assistance system, characterized in that the system comprises a real-time human-eye gaze monitoring unit, a real-time driving scene monitoring unit, a comparison and evaluation unit, and an alarm and emergency processing unit.
The real-time human-eye gaze monitoring unit comprises a DMS image acquisition unit and a classifier unit.
The DMS image acquisition unit feeds the collected face images into the already-trained classifier unit, which outputs the position coordinates of the current gaze.
These position coordinates are input, together with the information collected by the real-time driving scene monitoring unit, into the comparison and evaluation unit, which compares the neural-network-trained gaze-focus ranges for various traffic conditions against the real-time driving-scene information; the face image and the matching outside-vehicle image carry the same timestamp.
The alarm and emergency processing unit takes corresponding measures according to the evaluation result of the comparison and evaluation unit.
Preferably, the real-time driving scene monitoring unit comprises a foreground image acquisition unit, a vehicle driving state monitoring unit, and a map unit.
Preferably, the classifier unit is trained on images captured at the same instant from different angles by multiple cameras arranged in an array; the set of images captured at the same instant forms one sample.
Preferably, the sample set used to train the classifier covers gaze positions at the various locations of the front windshield and at the left, center, and right rearview mirrors.
Preferably, the DMS image acquisition unit is a single camera.
Preferably, the comparison and evaluation unit includes a database storing, for each driving scenario, the area at which the eyes should be directed; when the collected gaze coordinates fall outside that area, the alarm and emergency processing unit alerts the driver.
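The database lookup described in this clause can be sketched as a mapping from driving scenario to permitted gaze regions. The scenario names and rectangle coordinates below are illustrative assumptions for the sketch, not values from the patent:

```python
# Sketch of the gaze-region database: scenario -> list of axis-aligned
# regions (x_min, y_min, x_max, y_max) in a normalized cabin coordinate
# system. All names and coordinates here are hypothetical.
EXPECTED_GAZE_REGIONS = {
    "right_turn": [(0.70, 0.40, 1.00, 0.60)],       # right rearview mirror
    "reversing": [(0.40, 0.70, 0.60, 0.90)],        # interior rearview mirror
    "overtaking_left": [(0.00, 0.40, 0.30, 0.60)],  # left rearview mirror
}

def gaze_within_expected(scenario: str, gaze_xy) -> bool:
    """Return True if the gaze point falls inside any region stored for
    the given driving scenario; False would trigger the alarm unit."""
    x, y = gaze_xy
    regions = EXPECTED_GAZE_REGIONS.get(scenario, [])
    return any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in regions)
```

An unknown scenario has no stored regions, so any gaze is treated as out of area; a production system would need a more careful fallback.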
Preferably, the sample set also takes environmental factors into account, including lighting, rain, snow, and fog.
Preferably, the images collected by the DMS image acquisition unit and by the real-time driving scene monitoring unit all carry timestamps.
Preferably, the foreground image acquisition unit collects, in real time, images of the scene ahead of the vehicle within the driver's field of view.
Preferably, the map unit locates the current vehicle position on a pre-collected map, in order to derive the traffic conditions at that position.
Preferably, the comparison and evaluation unit records and monitors the driver's gaze latitude/longitude or position coordinates; when they are found not to change under a set threshold condition, the alarm and emergency processing unit alerts the driver.
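The stalled-gaze check in this clause can be sketched as follows. The duration threshold and coordinate tolerance are assumptions, since the patent leaves the concrete values open:

```python
GAZE_STALL_SECONDS = 3.0  # assumed threshold; the patent does not fix a value

class GazeStallMonitor:
    """Flags a driver whose gaze coordinates stop changing, which the
    comparison and evaluation unit treats as possible inattention."""

    def __init__(self, tolerance: float = 0.01):
        self.tolerance = tolerance  # max per-axis change still counted as "unchanged"
        self.last_xy = None
        self.last_change = None

    def update(self, xy, now: float) -> bool:
        """Feed one gaze sample with its time; return True when the gaze
        has been static for longer than the threshold."""
        moved = (
            self.last_xy is None
            or max(abs(a - b) for a, b in zip(xy, self.last_xy)) > self.tolerance
        )
        if moved:
            self.last_xy = xy
            self.last_change = now
            return False
        return (now - self.last_change) >= GAZE_STALL_SECONDS
```

Any gaze movement beyond the tolerance resets the timer, so only a genuinely frozen gaze accumulates toward the alert.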
Preferably, the comparison and evaluation unit grades the driver's attention into several levels.
A method for improving a driver's attention while driving, comprising the following steps:
Step S1. Monitor the driver's gaze position with a real-time human-eye gaze monitoring unit comprising a DMS image acquisition unit and a classifier unit. The DMS image acquisition unit feeds the collected face images into the already-trained classifier unit, which outputs the current gaze position coordinates. During training of the classifier unit, an image acquisition system with multiple cameras is used; the cameras photograph the face at the same instant, and each captured image is timestamped, the set forming one sample.
Step S2. Input the gaze position coordinates, together with the foreground image information and vehicle driving state information collected by the real-time driving scene monitoring unit, into the comparison and evaluation unit.
Step S3. The evaluation unit produces an evaluation result based on the input foreground image information, vehicle driving state information, and gaze position coordinates.
Step S4. An alarm and emergency processing unit takes corresponding measures according to the evaluation result of the comparison and evaluation unit.
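Steps S1 through S4 above can be sketched as a single pipeline. The classifier, evaluator, and alarm arguments are stand-ins for the units named in the patent; their interfaces here are assumptions for illustration only:

```python
def attention_pipeline(face_images, scene_info, classifier, evaluator, alarm):
    """Minimal sketch of steps S1-S4: classify gaze, combine with scene
    information, evaluate, and let the alarm unit react."""
    gaze_xy = classifier(face_images)           # S1: gaze position coordinates
    verdict = evaluator(gaze_xy, scene_info)    # S2/S3: compare and evaluate
    if not verdict["attentive"]:                # S4: take corresponding measures
        alarm(verdict)
    return verdict
```

With toy stand-ins, e.g. a classifier returning fixed coordinates and an evaluator checking a threshold, the pipeline runs end to end and only calls the alarm for an inattentive verdict.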
The above technical solutions make a substantive contribution over the prior art and produce unexpected technical effects. The main advantages are the following, although the inventive points of the present invention are not limited to them.
(1) The classifier is trained on photos taken by multiple cameras from different angles at the same instant, with each same-instant image set used as training data. Because multi-angle face images are collected, the neural network is trained more comprehensively and accurately. Accurate gaze tracking is the foundation of human-machine interaction during driving, and the gaze position coordinates output after neural-network training meet the accuracy that precise interaction requires of the gaze position. At the same time, large-angle photos, including 90-degree profile views, can be collected at a single instant without requiring excessive movement from the subject. Because the image acquisition device shoots with multiple cameras from different angles simultaneously and each same-instant image set serves as one sample in the training set, precision improves, and the system can ultimately detect the driver's attention efficiently. This is one inventive point of the present invention.
(2) A preferred embodiment of the present invention uses a database learned through an artificial-intelligence neural network. For each driving scenario it stores the gaze areas of most drivers, i.e. the areas the eyes should attend to for safe driving in that scenario. When the gaze coordinates monitored in real time fall outside the stored area, the driver is alerted. This is another inventive point of the present invention.
Brief Description of the Drawings
Figure 1 is a schematic diagram of the driving assistance system of the present invention.
Reference numerals in the figure: 1 - real-time human-eye gaze monitoring unit; 11 - DMS image acquisition unit; 12 - classifier unit; 2 - real-time driving scene monitoring unit; 21 - foreground image acquisition unit; 22 - vehicle driving state monitoring unit; 23 - map unit; 3 - comparison and evaluation unit; 4 - alarm and emergency processing unit.
The present invention is described in further detail below. The following examples are merely simple illustrations of the present invention and neither represent nor limit its scope of protection, which is defined by the claims.
Detailed Description
The technical solution of the present invention is further described below with reference to the accompanying drawings and specific embodiments.
To better illustrate the present invention and to aid understanding of its technical solution, typical but non-limiting examples are as follows:
The present invention uses artificial-intelligence gaze-position monitoring to obtain the driver's gaze position in real time, and compares that position with pre-stored gaze positions expected in various situations, in order to judge whether the driver is currently concentrating on driving.
The driving assistance system of the present invention comprises: a real-time human-eye gaze monitoring unit; a real-time driving scene monitoring unit; a comparison and evaluation unit; and an alarm and emergency processing unit.
The real-time human-eye gaze monitoring unit comprises: a DMS image acquisition unit for collecting real-time face images of the driver; and a classifier unit, into which the DMS image acquisition unit feeds the collected face images and which outputs the latitude/longitude or position coordinates of the current gaze.
In this application, during training of the classifier unit, the image acquisition system includes multiple cameras placed at specific positions so that images can be captured at the same instant.
The cameras can be arranged in an array in front of the subject so that multiple photos can be taken at the same moment. Each camera's shooting angle is specific: each is aimed at the face at a particular angle, ensuring that face images from different angles are captured simultaneously.
When the cameras in the array photograph the face at the same moment, at least two cameras in the horizontal row are set so that the angle between their optical axes, as aimed at the face, is 90 degrees. The angle between the optical axes may of course exceed 90 degrees, so as to capture more face-photo information.
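The angle constraint on paired optical axes can be checked with a simple direction-vector computation. This is a sketch; the axis vectors are assumed inputs (e.g. from the rig's calibration), not something the patent specifies:

```python
import math

def axis_angle_deg(a, b) -> float:
    """Angle in degrees between two cameras' optical-axis direction
    vectors; the constraint above asks for at least 90 degrees
    between paired cameras."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    cos = max(-1.0, min(1.0, dot / (na * nb)))  # clamp for float safety
    return math.degrees(math.acos(cos))
```

Perpendicular axes give 90 degrees, opposed axes 180; a rig check would assert `axis_angle_deg(a, b) >= 90.0` for each required pair.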
The image acquisition system further includes: multiple brackets, comprising several horizontal brackets and several vertical brackets, with cameras fixed at the intersections of the horizontal and vertical brackets; a track structure with a horizontal rail and a vertical rail, able to move freely across the brackets in the horizontal and vertical directions; a light source fixed at the intersection of the horizontal and vertical rails; and a camera fixed to the light source so that the camera moves with it.
Preferably, the fixed industrial cameras each point at the face from a different direction, i.e. their optical-axis directions all differ.
Preferably, the brackets take the form of slide rails, so the spacing between brackets can be adjusted; adjusting that spacing changes the relative positions of the cameras to suit different test setups.
Example 1
The classifier unit is a unit trained to determine the position of the gaze focus. Completing the classifier unit requires two steps: sample collection and classifier training.
Step S1, sample collection, specifically the collection of face images using the face-image acquisition device of this system. All cameras in the device must shoot simultaneously, and every image taken at the same moment must be timestamped; together they form one sample. Taking a device with 9 cameras as an example, the photos taken at time A1 can be numbered in order A1P11, A1P12, A1P13, A1P21, A1P22, A1P23, A1P31, A1P32, A1P33, representing the upper-left, upper-middle, upper-right, middle-left, center, middle-right, lower-left, lower-middle, and lower-right pictures respectively, and these 9 images are judged as one whole sample. All sampled images use a uniform format and size, which facilitates later comparison in the comparison and evaluation unit.
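The A1P11..A1P33 naming scheme for one 9-camera sample can be generated mechanically. This sketch reproduces the row-major 3x3 ordering described above; the position labels are direct translations of the patent's list:

```python
# Position labels in the order the patent lists them (row-major 3x3).
POSITIONS = [
    "upper-left", "upper-middle", "upper-right",
    "middle-left", "center", "middle-right",
    "lower-left", "lower-middle", "lower-right",
]

def sample_image_ids(timestamp_tag: str) -> dict:
    """For a capture time tag such as 'A1', return the nine image IDs
    A1P11..A1P33 mapped to their camera positions."""
    ids = {}
    for row in range(1, 4):
        for col in range(1, 4):
            ids[f"{timestamp_tag}P{row}{col}"] = POSITIONS[(row - 1) * 3 + (col - 1)]
    return ids
```

Keeping the nine images under one timestamp tag is what makes the set a single training sample rather than nine independent photos.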
Because the image acquisition device of this application shoots with multiple cameras simultaneously, the recognition accuracy of the gaze focus is greatly improved. Correspondingly, the sample collection stage must supply more data to support that accuracy: the driver must provide samples for every observation point, for example the various positions on the front windshield and the left, center, and right rearview mirrors. For the windshield positions, the area covered by one gaze-focus point is the minimum unit, and as many samples as possible are collected.
During sample collection, environmental factors must be considered, such as sunny (well-lit), cloudy, rainy, snowy, or foggy weather, tunnels, and other road conditions; the sample set must cover both attentive and inattentive drivers in each environment. Because facial features differ considerably between ethnic groups, for example in the distance between the eyes, people of different skin tones must also be distinguished and sampled separately. In addition, samples must distinguish gender, whether glasses are worn, and sitting posture. Gender may be judged from cues such as a beard for men, or long hair, lipstick, or earrings for women; sitting posture refers to whether the face looks straight ahead and to the distance between the face and the image acquisition device.
Step S2, classifier training.
Preferably, if a neural network is used for training, a face-keypoint annotation tool, such as face-keypoint labeling software, can be used to label the sample's facial keypoints. Facial keypoints are discriminative points on the face, including 17 keypoints on the face contour (covering features such as the eye corners, eye centers, nose tip, nose bridge, mouth corners, and lips). To these can be added 10 keypoints on the eyebrows, 12 on the eyes, 9 on the nose, and 20 on the mouth, for a total of 68 keypoints.
Selecting many keypoints makes it possible to obtain not only the gaze direction but also more detailed information about the face: the more keypoints selected, the more useful information can be extracted, enabling later image learning and recognition tasks beyond gaze direction alone. For example, if the full set of 68 keypoints is selected, facial poses such as blinking, staring, raising the eyebrows, frowning, nodding, and shaking the head can be obtained after neural-network training.
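The keypoint budget above can be written down as a small sanity check; the group names are paraphrases of the patent's categories:

```python
# Keypoint counts per facial region, as enumerated in the text above.
FACE_KEYPOINT_COUNTS = {
    "face_contour": 17,
    "eyebrows": 10,
    "eyes": 12,
    "nose": 9,
    "mouth": 20,
}

def total_keypoints() -> int:
    """Total number of labeled facial keypoints per sample."""
    return sum(FACE_KEYPOINT_COUNTS.values())
```

The total, 68, matches the widely used 68-point facial-landmark convention.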
Preferably, only 5 of the facial feature points are used to determine the gaze. These 5 points were chosen because, with a single-camera setup, the images of both eyes plus a feature point of another facial organ must be captured. For that other organ, the nose tip was chosen: it is roughly equidistant from the two eyes, and it protrudes clearly from the face, making it well suited as a feature point. The other 4 feature points are the two corners of each eye.
From the 4 eye points, preferably the outer and inner corners of the left and right eyes, together with the nose tip, the 3D position of the eyes in the camera coordinate system is obtained.
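As a partial sketch of how the five 2-D feature points are used, the per-eye centers can be taken as corner midpoints. Recovering the full 3-D position in the camera coordinate system, as the patent describes, would additionally require a perspective-n-point solve against a 3-D face model and the camera intrinsics (commonly done with a routine such as OpenCV's `solvePnP`; not shown here):

```python
def eye_centers(landmarks: dict) -> dict:
    """Given the five 2-D feature points (four eye corners plus the nose
    tip, under hypothetical key names), return per-eye centers as the
    midpoints of each eye's corner pair."""
    def mid(p, q):
        return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
    return {
        "left_eye": mid(landmarks["left_outer"], landmarks["left_inner"]),
        "right_eye": mid(landmarks["right_outer"], landmarks["right_inner"]),
    }
```

The landmark key names here are assumptions for the sketch; the patent only names the anatomical points, not an interface.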
The classifier can also be trained in other ways, for example with an adaptive-algorithm classifier, but different training methods yield different results; the present invention preferably uses deep learning with a neural network. This benefits from the accuracy and sheer volume of the training data, which makes it possible to obtain an accurate gaze position after training.
The neural network outputs the gaze latitude/longitude or position coordinates; combined with the previously obtained 3D eye position in the camera coordinate system and the camera-to-vehicle extrinsic parameters, the system can tell where inside the vehicle the gaze falls, for example the instrument panel or the front window.
Example 2
Steps and/or structures in this embodiment that are the same as in the above embodiment are not repeated; this embodiment only introduces certain changes or more specific steps and/or structures.
Because the image acquisition system used in this application includes multiple cameras, it improves the accuracy of determining the focus point of the driver's gaze; it can be applied in the field of intelligent driving and improves safety. A specific implementation is as follows, taking an image acquisition device with nine cameras as an example. In step S2, classifier training, step S21 (image processing) is performed first. Since the format and size of the images were already unified during acquisition, the processing here only involves gridding the images, for example inserting 100*100 grid lines into each picture. Steps S22 and S23 follow. Finally, in step S24, among the nine pictures of the same sample, the feature-value comparison uses data such as face height, face width, the line connecting the two pupils, and the sizes of the left and right eyes; a corresponding algorithm computes the direction of the driver's attention and selects the picture that the gaze faces most directly (in the X and Y dimensions). Suppose the selected picture is P21, the picture the gaze faces directly, i.e. the middle-left picture. The range of the gaze focus point is then judged from the picture's tilt angle relative to the XY plane, i.e. from the grid-line scale. The judgment is made mainly by comparing, with the classifier, against the middle-left pictures in the sample set, assisted by comparison with the other pictures, finally yielding the specific range of the gaze focus point, for example the left rearview mirror. It is precisely because this application uses an image acquisition device with multiple cameras that the range of the driver's gaze focus point can be accurately determined from every angle.
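The selection of the most nearly frontal picture among the nine views can be sketched as follows; the per-view head-pose angles and the helper name are assumptions for illustration, not part of the patent:

```python
import math

def most_frontal_view(views):
    """Pick the camera view in which the face is most nearly frontal.

    views : list of (camera_id, yaw_deg, pitch_deg) tuples, one per camera,
            where the angles describe the face's pose relative to that camera.
    Returns the camera_id with the smallest total angular deviation.
    """
    return min(views, key=lambda v: math.hypot(v[1], v[2]))[0]
```

With nine simultaneous views, the winning view (e.g. "P21") is the one compared most heavily against the corresponding pictures in the sample set.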
Example 3
Steps and/or structures in this embodiment that are the same as in the above embodiments are not repeated; this embodiment only introduces certain changes or more specific steps and/or structures.
The driving assistance system of the present invention further includes a real-time driving scene monitoring unit. This unit includes a foreground image acquisition unit for acquiring, in real time, images of the road ahead within the driver's field of view; every acquired exterior image must also carry a time stamp. The system further includes a vehicle driving state monitoring unit, which includes a vehicle trajectory monitoring unit; the trajectory unit includes wheel sensors for sensing and computing the vehicle's real-time trajectory, e.g. turning and lane-changing maneuvers. A map unit locates the current vehicle position on a pre-collected map to obtain the traffic conditions at that position, e.g. whether there is an intersection ahead, a traffic light, a ramp, etc.
The classifier unit obtains the range value of the gaze focus point through comparison.
Comparison and evaluation unit: the range values on which, under various traffic conditions and after neural network training, most drivers' gaze focuses are compared against the information from the real-time driving scene; note that the driver image and the exterior image paired with it must carry the same time stamp. For example, when preparing to overtake, the driver's eyes should be fixed on the area of the left rearview mirror; during a turn, on the area of the front windshield corresponding to the turning direction; and so on. A calculation unit compares the position coordinates of the driver's gaze with the area the driver should currently be watching, as output by the real-time driving scene monitoring unit, to determine whether the actual gaze position lies within that area. Finally, the comparison and evaluation unit may also grade the driver's attention into different levels, e.g. attentive, slightly inattentive, fatigued driving, drunk driving, etc.
Preferably, the area the driver should currently be watching can also be derived from big data. Specifically, a database is stored in the comparison and evaluation unit; it records, for various traffic situations, the areas most drivers watch. For example, when a right turn is required, most drivers' gaze should be at the position of the right rearview mirror; when reversing, most drivers' gaze also includes the position of the interior rearview mirror; when stopping to get out, it should include the position of the left rearview mirror. The collected gaze areas of most drivers are classified by driving situation and stored in the database, for example as shown in Table 1:
No.   Driving scenario       Expected gaze position
1     Right turn             Right rearview mirror
2     Left turn              Left rearview mirror
3     Reversing              Interior rearview mirror
4     Stopping to get out    Left rearview mirror
5     Overtaking             Left rearview mirror, right rearview mirror
…     …                      …
Table 1  Gaze positions the driver's eyes should watch in each driving scenario
During actual driving, the calculation unit compares the position coordinates of the driver's current gaze with the area the driver should currently be watching, as output by the real-time driving scene monitoring unit using the database above, to determine whether the driver's gaze is on the expected position.
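The database lookup of Table 1 and the comparison performed by the calculation unit can be sketched as follows; the scenario and zone labels are hypothetical identifiers chosen for illustration, not names from the patent:

```python
# Hypothetical scenario -> expected gaze zones, following Table 1.
EXPECTED_ZONES = {
    "right_turn":   {"right_mirror"},
    "left_turn":    {"left_mirror"},
    "reversing":    {"interior_mirror"},
    "exit_vehicle": {"left_mirror"},
    "overtaking":   {"left_mirror", "right_mirror"},
}

def gaze_on_expected_zone(scenario, observed_zone):
    """True if the observed gaze zone is among the expected zones
    for the current driving scenario; unknown scenarios match nothing."""
    return observed_zone in EXPECTED_ZONES.get(scenario, set())
```

A mismatch (e.g. gazing at the left mirror during a right turn) is what triggers the reminder described below.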
Alarm and emergency processing unit: according to the attention level graded by the comparison and evaluation unit, the alarm and emergency processing unit can take corresponding measures. For example, when the level is slightly inattentive, e.g. it is judged that the actual gaze position is not within the expected area, the unit issues a reminder such as a warning sound, and indicates by voice or on a display the area that should be watched, so that the driver redirects his or her gaze there. It can also be determined that, when the level is drunk driving or severe fatigued driving, if the vehicle has an intelligent driving function, it may turn on its hazard lights to alert surrounding vehicles and gradually decelerate until it stops, while raising the alarm volume inside the cabin to rouse the driver.
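One way to organize the level-to-measure mapping just described is a small dispatch function; the level names and action strings are assumptions for illustration only:

```python
def respond(level, has_autopilot=False):
    """Map an attention level to the measures described in the text.

    level        -- hypothetical label from the comparison/evaluation unit
    has_autopilot -- whether the vehicle has an intelligent driving function
    """
    if level == "slightly_inattentive":
        # Warn, and announce/display the area that should be watched.
        return ["warning_sound", "announce_expected_zone"]
    if level in ("drunk", "severe_fatigue"):
        actions = ["loud_alarm"]          # raise in-cabin alarm volume
        if has_autopilot:
            actions += ["hazard_lights", "gradual_stop"]
        return actions
    return []                             # attentive: no measures
```

The point of the sketch is that escalating levels map to escalating interventions, with vehicle-side actions gated on the intelligent driving capability.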
In addition, the driver's attention while driving can also be detected without using the database. The trained neural network outputs the driver's current gaze latitude/longitude or position coordinates. The comparison and evaluation unit records and monitors these coordinates; when it finds that they have not changed under a set threshold condition, this indicates that the driver may be in an inattentive state. The comparison and evaluation unit then sends a signal to the alarm and emergency processing unit, which issues a reminder, e.g. a warning sound, to alert the driver.
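The threshold-based "gaze has not changed" check can be sketched as a sliding-window monitor; the window length and movement threshold are illustrative values, not values from the patent:

```python
from collections import deque

class StaleGazeMonitor:
    """Flag possible inattention when gaze coordinates barely move
    over a full monitoring window."""

    def __init__(self, window_s=3.0, move_thresh=2.0):
        self.window_s = window_s
        self.move_thresh = move_thresh  # coordinate units (e.g. degrees)
        self.samples = deque()          # (timestamp, x, y)

    def update(self, t, x, y):
        """Feed one gaze sample; return True if an alert should fire."""
        self.samples.append((t, x, y))
        # Drop samples older than the monitoring window.
        while self.samples and t - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        xs = [s[1] for s in self.samples]
        ys = [s[2] for s in self.samples]
        span = max(max(xs) - min(xs), max(ys) - min(ys))
        # Alert only once the window is fully covered and motion is tiny.
        full = t - self.samples[0][0] >= self.window_s
        return full and span < self.move_thresh
```

A True result corresponds to the signal sent to the alarm and emergency processing unit.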
The applicant states that the present invention illustrates its detailed structural features through the above embodiments, but the present invention is not limited to those detailed structural features; that is, it does not mean that the present invention must rely on the above detailed structural features to be implemented. Those skilled in the art should understand that any improvement to the present invention, any equivalent replacement of the components used in the present invention, any addition of auxiliary components, any selection of specific methods, etc., all fall within the scope of protection and disclosure of the present invention.
The preferred embodiments of the present invention have been described in detail above. However, the present invention is not limited to the specific details of the above embodiments; within the scope of the technical concept of the present invention, various simple variations can be made to the technical solution of the present invention, and these simple variations all belong to the scope of protection of the present invention.
It should also be noted that the specific technical features described in the above embodiments can be combined in any suitable manner, provided there is no contradiction. To avoid unnecessary repetition, the present invention does not separately describe the various possible combinations.
In addition, the various embodiments of the present invention can also be combined arbitrarily; as long as a combination does not depart from the idea of the present invention, it should likewise be regarded as content disclosed by the present invention.

Claims (10)

  1. A driving assistance system, characterized in that the system comprises a real-time human eye gaze monitoring unit, a real-time driving scene monitoring unit, a comparison and evaluation unit, and an alarm and emergency processing unit;
    the real-time human eye gaze monitoring unit comprises a DMS image acquisition unit and a classifier unit;
    the DMS image acquisition unit inputs the acquired face images into the trained classifier unit, and the classifier unit outputs the latitude/longitude or position coordinates of the driver's current gaze;
    the latitude/longitude or position coordinates are input to the comparison and evaluation unit together with the information collected by the real-time driving scene monitoring unit; the comparison and evaluation unit compares the gaze focus range values, obtained through neural network training under various traffic conditions, with the information of the real-time driving scene, and the face image and the exterior image paired with it carry the same time stamp;
    the alarm and emergency processing unit takes corresponding measures according to the evaluation result of the comparison and evaluation unit.
  2. 根据权利要求1所述的***,其特征在于:所述实时驾驶场景监测单元包括前景图像采集单元,车辆驾驶状态监控单元,地图单元。The system according to claim 1, wherein the real-time driving scene monitoring unit comprises a foreground image acquisition unit, a vehicle driving state monitoring unit, and a map unit.
  3. The system according to any one of claims 1-2, characterized in that the classifier unit is trained using multiple cameras arranged in an array that capture images from different angles at the same moment, the set of images captured at the same moment from different angles constituting one sample.
  4. The system according to any one of claims 1-3, characterized in that the sample set used for training the classifier includes gaze positions at various locations on the vehicle's front windshield as well as at the left, interior, and right rearview mirrors.
  5. The system according to any one of claims 1-4, characterized in that the DMS image acquisition unit is a single camera.
  6. The system according to claim 1, wherein the comparison and evaluation unit comprises a database; the database stores the position areas the driver's eyes should watch in each driving scenario; when the acquired position coordinates of the driver's gaze are not within the corresponding position area, the alarm and emergency processing unit reminds the driver.
  7. The system according to claim 1, characterized in that the images acquired by the DMS image acquisition unit and by the real-time driving scene monitoring unit both carry time stamps.
  8. The system according to claim 1, characterized in that the foreground image acquisition unit is configured to acquire, in real time, images of the road ahead within the driver's field of view; and the map unit is configured to locate the current vehicle position on a pre-collected map so as to obtain the traffic conditions at the vehicle's current position.
  9. The system according to claim 1, characterized in that the comparison and evaluation unit records and monitors the driver's latitude/longitude or position coordinates; when, under a set threshold condition, the latitude/longitude or position coordinates have not changed, the alarm and emergency processing unit reminds the driver.
  10. A method for improving a driver's driving attention using the system according to any one of claims 1-9, the method comprising the following steps:
    Step S1. The driver's gaze position is monitored by a real-time human eye gaze monitoring unit, which comprises a DMS image acquisition unit and a classifier unit; the DMS image acquisition unit inputs the acquired face images into the trained classifier unit, and the classifier unit outputs the latitude/longitude or position coordinates of the driver's current gaze; during training of the classifier unit, an image acquisition system comprising a plurality of cameras is used, the cameras capturing face images at the same moment, each captured image carrying a time stamp, the images together constituting one sample;
    Step S2. The latitude/longitude or position coordinates of the driver's gaze are input to the comparison and evaluation unit together with the foreground image information and vehicle driving state information collected by the real-time driving scene monitoring unit;
    Step S3. The evaluation unit performs an evaluation based on the input foreground image information, vehicle driving state information, and position coordinates to obtain an evaluation result;
    Step S4. An alarm and emergency processing unit takes corresponding measures according to the evaluation result of the comparison and evaluation unit.
PCT/CN2018/113670 2018-08-10 2018-11-02 Method and system for detecting attention of driver while driving WO2020029444A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810905378.4A CN110826369A (en) 2018-08-10 2018-08-10 Driver attention detection method and system during driving
CN201810905378.4 2018-08-10

Publications (1)

Publication Number Publication Date
WO2020029444A1 true WO2020029444A1 (en) 2020-02-13

Family

ID=69414452

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/113670 WO2020029444A1 (en) 2018-08-10 2018-11-02 Method and system for detecting attention of driver while driving

Country Status (2)

Country Link
CN (1) CN110826369A (en)
WO (1) WO2020029444A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111985307A (en) * 2020-07-07 2020-11-24 深圳市自行科技有限公司 Driver specific action detection method, system and device
CN112289003A (en) * 2020-10-23 2021-01-29 江铃汽车股份有限公司 Method for monitoring end-of-life driving behavior of fatigue driving and active safe driving monitoring system
CN113317792A (en) * 2021-06-02 2021-08-31 樊天放 Attention detection system and method based on binocular eye vector analysis
CN113837027A (en) * 2021-09-03 2021-12-24 东风柳州汽车有限公司 Driving assistance sensing method, device, equipment and storage medium
CN114475426A (en) * 2022-01-29 2022-05-13 深圳智慧车联科技有限公司 Muck vehicle right-turning early warning reminding system and reminding method
CN114575685A (en) * 2022-03-14 2022-06-03 合众新能源汽车有限公司 Vision-based method and system for preventing mistaken touch of automobile door handle
WO2023103708A1 (en) * 2021-12-07 2023-06-15 虹软科技股份有限公司 Automatic calibration method and apparatus for distraction region, road vehicle, and electronic device
CN117133096A (en) * 2023-10-26 2023-11-28 中汽研汽车检验中心(宁波)有限公司 Test system and test method for driver attention monitoring system

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476185B (en) * 2020-04-13 2023-10-10 罗跃宸 Driver attention monitoring method, device and system
CN111626221A (en) * 2020-05-28 2020-09-04 四川大学 Driver gazing area estimation method based on human eye information enhancement
CN112883834A (en) * 2021-01-29 2021-06-01 重庆长安汽车股份有限公司 DMS system distraction detection method, DMS system distraction detection system, DMS vehicle, and storage medium
CN113989466B (en) * 2021-10-28 2022-09-20 江苏濠汉信息技术有限公司 Beyond-the-horizon assistant driving system based on situation cognition
CN115331513A (en) * 2022-07-27 2022-11-11 山东心法科技有限公司 Auxiliary training method, equipment and medium for automobile driving skills

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930278A (en) * 2012-10-16 2013-02-13 天津大学 Human eye sight estimation method and device
US20130162794A1 (en) * 2011-12-26 2013-06-27 Denso Corporation Driver monitoring apparatus
CN103767715A (en) * 2014-01-15 2014-05-07 中国人民解放军国防科学技术大学 Device for detecting safety driving states of driver
CN103770733A (en) * 2014-01-15 2014-05-07 中国人民解放军国防科学技术大学 Method and device for detecting safety driving states of driver
CN107527000A (en) * 2016-06-21 2017-12-29 现代自动车株式会社 The apparatus and method that the concentration degree of driver is monitored using eyes tracking
CN107818310A (en) * 2017-11-03 2018-03-20 电子科技大学 A kind of driver attention's detection method based on sight
CN108304745A (en) * 2017-01-10 2018-07-20 普天信息技术有限公司 A kind of driver's driving behavior detection method, device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7742623B1 (en) * 2008-08-04 2010-06-22 Videomining Corporation Method and system for estimating gaze target, gaze sequence, and gaze map from video
CN102307288A (en) * 2011-07-27 2012-01-04 中国计量学院 Projection system moving along with sightline of first person based on human face recognition
US20130096820A1 (en) * 2011-10-14 2013-04-18 Continental Automotive Systems, Inc. Virtual display system for a vehicle
CN102547123B (en) * 2012-01-05 2014-02-26 天津师范大学 Self-adapting sightline tracking system and method based on face recognition technology
US20170119298A1 (en) * 2014-09-02 2017-05-04 Hong Kong Baptist University Method and Apparatus for Eye Gaze Tracking and Detection of Fatigue

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111985307A (en) * 2020-07-07 2020-11-24 深圳市自行科技有限公司 Driver specific action detection method, system and device
CN112289003A (en) * 2020-10-23 2021-01-29 江铃汽车股份有限公司 Method for monitoring end-of-life driving behavior of fatigue driving and active safe driving monitoring system
CN112289003B (en) * 2020-10-23 2022-06-17 江铃汽车股份有限公司 Method for monitoring end-of-driving behavior of fatigue driving and active safety driving monitoring system
CN113317792A (en) * 2021-06-02 2021-08-31 樊天放 Attention detection system and method based on binocular eye vector analysis
CN113837027A (en) * 2021-09-03 2021-12-24 东风柳州汽车有限公司 Driving assistance sensing method, device, equipment and storage medium
WO2023103708A1 (en) * 2021-12-07 2023-06-15 虹软科技股份有限公司 Automatic calibration method and apparatus for distraction region, road vehicle, and electronic device
CN114475426A (en) * 2022-01-29 2022-05-13 深圳智慧车联科技有限公司 Muck vehicle right-turning early warning reminding system and reminding method
CN114575685A (en) * 2022-03-14 2022-06-03 合众新能源汽车有限公司 Vision-based method and system for preventing mistaken touch of automobile door handle
CN117133096A (en) * 2023-10-26 2023-11-28 中汽研汽车检验中心(宁波)有限公司 Test system and test method for driver attention monitoring system
CN117133096B (en) * 2023-10-26 2024-01-09 中汽研汽车检验中心(宁波)有限公司 Test system and test method for driver attention monitoring system

Also Published As

Publication number Publication date
CN110826369A (en) 2020-02-21

Similar Documents

Publication Publication Date Title
WO2020029444A1 (en) Method and system for detecting attention of driver while driving
US20210357670A1 (en) Driver Attention Detection Method
EP3109114B1 (en) Method and device for detecting safe driving state of driver
US9180887B2 (en) Driver identification based on face data
CN101593425B (en) Machine vision based fatigue driving monitoring method and system
CN100462047C (en) Safe driving auxiliary device based on omnidirectional computer vision
García et al. Driver monitoring based on low-cost 3-D sensors
CN109501807B (en) Automatic driving attention detection system and method
CN103714659B (en) Fatigue driving identification system based on double-spectrum fusion
CN104200192A (en) Driver gaze detection system
CN106156725A (en) A kind of method of work of the identification early warning system of pedestrian based on vehicle front and cyclist
CN110703904A (en) Augmented virtual reality projection method and system based on sight tracking
CN109664889B (en) Vehicle control method, device and system and storage medium
CN110490139A (en) Night fatigue driving judgment method based on recognition of face
Tawari et al. Attention estimation by simultaneous analysis of viewer and view
CN110825216A (en) Method and system for man-machine interaction of driver during driving
Ribeiro et al. Driver gaze zone dataset with depth data
WO2019176492A1 (en) Calculation system, information processing device, driving assistance system, index calculation method, computer program, and storage medium
JP7331728B2 (en) Driver state estimation device
Louie et al. Towards a driver monitoring system for estimating driver situational awareness
JP7331729B2 (en) Driver state estimation device
JP2004334786A (en) State detection device and state detection system
CN116012822B (en) Fatigue driving identification method and device and electronic equipment
CN114037979A (en) Lightweight driver fatigue state detection method
Riera et al. Detecting and tracking unsafe lane departure events for predicting driver safety in challenging naturalistic driving data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18929311

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18929311

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 06.08.2021)
