TW202222619A - In-vehicle driving monitoring system comprising an image capturing device and a driving monitoring device - Google Patents

In-vehicle driving monitoring system comprising an image capturing device and a driving monitoring device

Info

Publication number
TW202222619A
Authority
TW
Taiwan
Prior art keywords
image
feature point
feature points
driving
driver
Prior art date
Application number
TW109142224A
Other languages
Chinese (zh)
Other versions
TWI741892B (en)
Inventor
郭英偉
王文虎
Original Assignee
咸瑞科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 咸瑞科技股份有限公司
Priority to TW109142224A
Application granted
Publication of TWI741892B
Publication of TW202222619A

Landscapes

  • Image Processing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

An in-vehicle driving monitoring system comprises an image capturing device and a driving monitoring device. The image capturing device captures a driving image of the driver. The driving monitoring device is connected to the image capturing device and comprises an image processing unit, a feature point identification unit, and a status monitoring unit. The image processing unit performs an image processing procedure on the driving image and generates a face image. The feature point identification unit is connected to the image processing unit and generates, from the face image, a plurality of feature points corresponding to the facial features, expressing the positions of the feature points as coordinate values. The status monitoring unit is connected to the feature point identification unit and compares the coordinate values of the feature points with the coordinate values of a plurality of internally stored preset feature points, so as to judge whether the driving state is abnormal.

Description

In-vehicle driving monitoring system

A monitoring system, and more particularly an in-vehicle driving monitoring system.

Vehicles are among the most widely used means of transportation in modern society and are closely tied to everyday life; daily commuting and freight transport alike depend on them. As the number of vehicles grows, the probability of traffic accidents rises with it.

Safe driving depends heavily on the driver's concentration and control of the vehicle. The driver must pay attention to surrounding vehicles and the road environment at all times and maintain a good mental state in order to cope with emergencies that arise while driving. When fatigue or lack of energy sets in, for example during long-distance driving, the driver becomes drowsy, loses concentration, reacts more slowly, and may even fall asleep, neglecting control of the vehicle and causing it to run out of control. Fatigued driving also impairs vision, blurring and narrowing the driver's field of view and reducing the ability to observe the surroundings, so the driver cannot respond in time to sudden road conditions, leading to traffic accidents.

In view of this, to improve driving safety and reduce the probability of traffic accidents, one can start by detecting the driver's state so as to prevent fatigued driving. The present invention provides an in-vehicle driving monitoring system that captures an image of the driver's face, judges the driver's state from feature points corresponding to the facial features in the image, and alerts the driver immediately when an abnormal state is detected, reminding the driver to pay attention to the vehicle and to his or her own condition, thereby improving driving safety.

To achieve the foregoing objective, the in-vehicle driving monitoring system of the present invention comprises: an image capturing device, which captures a driving image of the driver; and a driving monitoring device connected to the image capturing device, comprising: an image processing unit, which performs an image processing procedure on the driving image captured by the image capturing device and generates a face image; a feature point identification unit connected to the image processing unit, which generates, from the face image, a plurality of feature points corresponding to the facial features and expresses the positions of the feature points as coordinate values; and a status monitoring unit connected to the feature point identification unit, which internally stores a plurality of preset feature points corresponding to the plurality of feature points and compares the coordinate values of the feature points with the coordinate values of the preset feature points. When the distance between the coordinate value of a feature point and the coordinate value of its corresponding preset feature point exceeds a feature point offset reference value, or when the difference between the distance between two feature points and the distance between the two corresponding preset feature points exceeds a feature point variation reference value, the status monitoring unit judges that the driving state is abnormal and outputs a warning signal to alert the user.

The in-vehicle driving monitoring system of the present invention captures the driving image of the driver through the image capturing device. The image processing unit processes the driving image and passes the resulting face image to the feature point identification unit, which generates, from the facial features in the face image, the plurality of feature points corresponding to the positions of the different facial features. The status monitoring unit then compares the feature points with the preset feature points and computes the coordinate differences between them, thereby judging whether the driver's current state deviates from the normal state. When the status monitoring unit judges that the driver's state is abnormal, it alerts the driver with the warning signal, improving driving safety.

Referring to FIG. 1, the in-vehicle driving monitoring system 1 of the present invention monitors the driver's state and warns the driver when that state becomes abnormal. The in-vehicle driving monitoring system 1 comprises an image capturing device 10 and a driving monitoring device 20. The image capturing device 10 may be an infrared lens; it captures a driving image of the driver and transmits the driving image outward. The image capturing device 10 may be installed on the vehicle's dashboard, windshield, air-conditioning vent, rear-view mirror, or a similar location, so that its shooting direction faces the driver's area. When the driver sits in the vehicle and operates it, the image capturing device 10 can be aimed at the driver's face to capture the driving image while the driver operates the vehicle. The installation position of the image capturing device 10 is chosen so that a frontal driving image of the driver can be captured when the driver drives in a normal state. The image capturing device 10 may compress the driving image with H.264 video coding (MPEG-4 AVC) to reduce the amount of data transmitted, and may transmit the driving image over the Wi-Fi peer-to-peer (P2P) protocol.
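As a rough illustration of this capture-and-compress stage, the sketch below grabs frames from a camera with OpenCV and writes them out H.264-encoded for later transfer. The camera index, frame rate, resolution, and availability of the avc1 codec are assumptions about a local setup, not details from the patent.

```python
# Minimal capture-and-compress sketch (illustrative only, not the patent's implementation).
import cv2

cap = cv2.VideoCapture(0)                       # assumed device index for the in-cab camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

fourcc = cv2.VideoWriter_fourcc(*"avc1")        # H.264, if the codec is available locally
writer = cv2.VideoWriter("driving.mp4", fourcc, 15.0, (320, 240))

for _ in range(150):                            # roughly 10 seconds at 15 fps
    ok, frame = cap.read()
    if not ok:
        break
    writer.write(frame)                         # compressed frames, ready for Wi-Fi P2P transfer

cap.release()
writer.release()
```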

The driving monitoring device 20 is connected to the image capturing device 10 and comprises an image processing unit 21, a feature point identification unit 22, and a status monitoring unit 23. The driving monitoring device 20 may be a mobile phone, a tablet, or a similar device that runs a driving monitoring program. The driving monitoring device 20 can receive the driving image transmitted by the image capturing device 10 over the Wi-Fi peer-to-peer (P2P) protocol, and the image processing unit 21 decodes the driving image with H.264 video coding (MPEG-4 AVC). The image processing unit 21 then performs an image processing procedure on the driving image to generate a face image.

The feature point identification unit 22 is connected to the image processing unit 21. Based on the pixels of the face image produced by the image processing procedure, the feature point identification unit 22 takes a boundary point of the face image as the coordinate origin, establishes planar coordinate axes on the face image, generates a plurality of feature points corresponding to the state of the facial features on the face image, and expresses the positions of the feature points as coordinate values. The feature points may include eye feature points, mouth feature points, nose feature points, and so on.

As shown in FIG. 2, taking a face image of the driver's frontal face in a normal state as an example, if the face image has a resolution of 320*240 pixels, the feature point identification unit 22 can set up a coordinate system with the lower-left corner of the face image as the origin (0,0). If the first axis is the X axis and the second axis is the Y axis, the feature point identification unit 22 sets the first-axis range of the face image to 0-320 and the second-axis range to 0-240, and generates the plurality of feature points corresponding to the facial features in the face image. The feature points may include a first feature point P1 and a second feature point P2 corresponding to the upper edges of the driver's two eyebrows, a third feature point P3 and a fourth feature point P4 corresponding to the upper and lower edges of one of the driver's eyes, and a fifth feature point P5 and a sixth feature point P6 corresponding to the two corners of the driver's mouth. However, the coordinate origin set by the feature point identification unit 22 may also be another boundary point of the face image, such as the upper-right or upper-left corner; the coordinate origin and axes of the face image are not limited to this embodiment.
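To make the coordinate convention concrete, the following minimal sketch stores six landmarks as (x, y) pairs in a 320*240 image with the origin at the lower-left corner. The point names and pixel values are hypothetical, chosen only to match the example above.

```python
# Hypothetical landmark coordinates in a 320x240 face image, origin at the
# lower-left corner, X to the right (0-320) and Y upward (0-240).
FRAME_W, FRAME_H = 320, 240

feature_points = {
    "P1_left_brow_top":  (118, 185),   # upper edge of left eyebrow
    "P2_right_brow_top": (202, 185),   # upper edge of right eyebrow
    "P3_eye_top":        (152, 170),   # upper edge of one eye
    "P4_eye_bottom":     (152, 160),   # lower edge of the same eye
    "P5_mouth_left":     (130,  95),   # left corner of the mouth
    "P6_mouth_right":    (190,  95),   # right corner of the mouth
}

# Sanity check: every point must lie inside the image bounds.
assert all(0 <= x <= FRAME_W and 0 <= y <= FRAME_H
           for x, y in feature_points.values())
```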

The status monitoring unit 23 is connected to the feature point identification unit 22 and internally stores a plurality of preset feature points corresponding to the plurality of feature points. Each preset feature point corresponds to one feature point, and a preset feature point and its corresponding feature point represent the position of the same part of the facial features. The status monitoring unit 23 compares the coordinate values of the feature points generated by the feature point identification unit 22 with the coordinate values of the preset feature points. When the distance between the coordinate value of a feature point and that of its corresponding preset feature point exceeds a feature point offset reference value, or when the difference between the distance between two feature points and the distance between the two corresponding preset feature points exceeds a feature point variation reference value, the status monitoring unit 23 judges that the driver's state is abnormal and outputs a warning signal to alert the user. The preset feature points can be set before the driver uses the in-vehicle driving monitoring system 1 for driving state monitoring: the image capturing device 10 first captures a driving image of the driver in a normal state and transmits it to the driving monitoring device 20; the image processing unit 21 of the driving monitoring device 20 performs the image processing procedure on that driving image; the feature point identification unit 22 then generates the preset feature points from the facial features of the processed face image; finally, the status monitoring unit 23 stores the preset feature points, completing the setup procedure for the preset feature points.
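The setup of the preset feature points can be pictured as a one-time calibration pass. The sketch below assumes a `detect_feature_points` callable standing in for the feature point identification unit; it is an illustration of the flow described above, not the patent's implementation.

```python
def enroll(normal_face_image, detect_feature_points):
    """One-time calibration: detect the driver's normal-state landmarks and
    return them as the preset feature points stored by the status monitor.

    `detect_feature_points` is a placeholder for the feature point
    identification unit; it is assumed to return a dict of name -> (x, y).
    """
    return dict(detect_feature_points(normal_face_image))
```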

Referring to FIG. 3, the driving state monitoring procedure performed by the in-vehicle driving monitoring system 1 comprises the following steps:

S10: The image capturing device 10 captures a driving image of the driver.

S11: The image processing unit 21 performs an image processing procedure on the driving image. Referring further to FIG. 4, the image processing procedure comprises:

S111: The image processing unit 21 extracts a face image of the driver from the driving image.

S112: The image processing unit 21 adjusts the brightness and contrast of the face image, increasing both to bring out the facial contour and emphasize the facial features.

S113: The image processing unit 21 converts the face image to grayscale, so that each pixel of the face image has a grayscale value.

S114: The image processing unit 21 binarizes the grayscaled face image. The image processing unit 21 compares the grayscale value of each pixel in the face image with a threshold: when a pixel's grayscale value exceeds the threshold, the image processing unit 21 converts that pixel to a black point; when it does not exceed the threshold, the image processing unit 21 converts the pixel to a white point. In other words, the face image is converted into a binary image containing only black and white, for use in the subsequent image analysis.

S115: The image processing unit 21 identifies and confirms the positions of the facial features from the binarized face image.
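A compact sketch of steps S112 through S114 using OpenCV follows; the contrast gain, brightness offset, and binarization threshold are illustrative values rather than the patent's parameters.

```python
import cv2

def preprocess_face(face_bgr, alpha=1.5, beta=30, threshold=128):
    """Brightness/contrast boost, grayscale conversion, and binarization.

    alpha/beta (contrast gain and brightness offset) and the binarization
    threshold are assumed values chosen for illustration.
    """
    boosted = cv2.convertScaleAbs(face_bgr, alpha=alpha, beta=beta)   # step S112
    gray = cv2.cvtColor(boosted, cv2.COLOR_BGR2GRAY)                  # step S113
    # Step S114: pixels above the threshold become black (0), the rest white (255).
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY_INV)
    return binary
```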

S12: The feature point identification unit 22 generates, on the face image, a plurality of feature points corresponding to the state of the facial features and expresses their positions as coordinates. Techniques for identifying facial feature points are well established in the field of image recognition and are not described in detail here.

S13: The status monitoring unit 23 compares the coordinate values of the feature points with the coordinate values of the preset feature points. When the difference between the coordinate values of each feature point and its corresponding preset feature point lies within an offset tolerance, the gap between them is small and may simply result from vehicle vibration or minor movements of the driver, meaning the driver is still in a normal state; the in-vehicle driving monitoring system 1 then re-executes step S10. When the difference between the coordinate values of a feature point and its corresponding preset feature point exceeds the offset tolerance, the status monitoring unit 23 judges that the driving state has changed and proceeds to the judgment of step S14.
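Step S13 can be read as a simple per-point gate: coordinate differences within the offset tolerance are attributed to vibration or small movements and ignored, while anything larger moves the flow on to step S14. A minimal sketch, with the tolerance value assumed:

```python
import math

def exceeds_tolerance(points, preset_points, offset_tolerance=5.0):
    """Return True if any feature point has drifted from its preset position
    by more than the offset tolerance (Euclidean distance in pixels).

    The 5-pixel tolerance is an assumed value for illustration only.
    """
    for name, (x, y) in points.items():
        px, py = preset_points[name]
        if math.hypot(x - px, y - py) > offset_tolerance:
            return True
    return False
```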

S14: The status monitoring unit 23 compares, for each feature point, the coordinate value along the axis corresponding to the vertical direction of the driver's face with the corresponding coordinate value of the preset feature point, to determine whether the coordinate values of the preset feature points need to be adjusted. When the feature point identification unit 22 uses the lower-left or lower-right boundary point of the face image as the coordinate origin: if any feature point's coordinate value along the vertical axis of the driver's face is greater than that of its corresponding preset feature point, step S15 is executed; if no feature point's vertical-axis coordinate value exceeds that of its corresponding preset feature point, step S16 is executed. Taking FIG. 2 as an example, the axis corresponding to the vertical direction of the driver's face is the second axis (Y axis). When the image capturing device 10 captures the driving image, if the driver is far from the image capturing device 10, or the driver's face is near the lower boundary of its capture range, the second-axis coordinate values of the preset feature points generated by the feature point identification unit 22 tend to be small. When the driver later moves the face closer to the image capturing device 10, or adjusts the sitting posture so that the face is nearer the upper boundary of the capture range, the new feature points obtained from the new driving image have larger second-axis coordinate values. Comparing the new feature points against the old preset feature points would therefore easily lead to misjudged states, so the new feature points must replace the originally preset feature points for the subsequent driving state monitoring.

Likewise, when the feature point identification unit 22 uses the upper-left or upper-right boundary point of the face image as the coordinate origin: if any feature point's coordinate value along the vertical axis of the driver's face is smaller than that of its corresponding preset feature point, step S15 is executed; if no feature point's vertical-axis coordinate value is smaller than that of its corresponding preset feature point, step S16 is executed. When the image capturing device 10 captures the driving image, if the driver's face is near the lower boundary of the capture range of the image capturing device 10, for example because the driver sits far from it, the vertical-axis coordinate values of the preset feature points generated by the feature point identification unit 22 tend to be large. When the driver later moves the face closer to the image capturing device 10, or adjusts the sitting posture so that the face is nearer the upper boundary of the capture range, the new feature points obtained from the new driving image have smaller vertical-axis coordinate values. Comparing the new feature points against the old preset feature points would therefore easily lead to misjudged states, so the new feature points must replace the originally preset feature points for the subsequent driving state monitoring.

S15: The status monitoring unit 23 judges that the coordinate values of the preset feature points need to be adjusted, sets the feature points corresponding to those preset feature points as the new preset feature points, and executes step S16.
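Steps S14 and S15 together act as a recalibration rule. The sketch below assumes a lower-corner origin with the Y value measured upward (and the reversed comparison for an upper-corner origin); it is an interpretation of the text above, not the patent's code.

```python
def maybe_recalibrate(points, preset_points, origin_at_bottom=True):
    """Replace the presets with the current points when the face appears to
    have shifted along the vertical axis of the image (steps S14/S15).

    Assumes a lower-corner origin, so a larger Y means the point sits higher
    in the frame; for an upper-corner origin the comparison is reversed.
    """
    if origin_at_bottom:
        shifted = any(y > preset_points[name][1] for name, (_, y) in points.items())
    else:
        shifted = any(y < preset_points[name][1] for name, (_, y) in points.items())

    if shifted:
        preset_points = dict(points)   # step S15: adopt the current points as presets
    return preset_points
```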

S16: The status monitoring unit 23 compares the coordinate values of the feature points with the coordinate values of the preset feature points. When the distance between the coordinate value of a feature point and that of its corresponding preset feature point exceeds a feature point offset reference value, or when the difference between the distance between two feature points and the distance between the two corresponding preset feature points exceeds a feature point variation reference value, the driver may be in an abnormal state that increases driving risk, such as lowering the head, closing the eyes, or turning the head; the status monitoring unit 23 then judges that the driver's state is abnormal and outputs a warning signal to alert the user.
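Step S16 combines two criteria: per-point drift against the feature point offset reference value, and the change of a pairwise distance against the feature point variation reference value. A minimal sketch with assumed threshold values and point names:

```python
import math
from itertools import combinations

def is_abnormal(points, preset_points, offset_ref=15.0, variation_ref=10.0):
    """Return True when either abnormality criterion of step S16 is met.

    offset_ref and variation_ref are illustrative thresholds; real reference
    values would be tuned for the camera setup.
    """
    # Criterion 1: a point has drifted too far from its preset position.
    for name, (x, y) in points.items():
        px, py = preset_points[name]
        if math.hypot(x - px, y - py) > offset_ref:
            return True

    # Criterion 2: the distance between two points differs too much from the
    # distance between the two corresponding preset points.
    for a, b in combinations(points, 2):
        d_now = math.dist(points[a], points[b])
        d_ref = math.dist(preset_points[a], preset_points[b])
        if abs(d_now - d_ref) > variation_ref:
            return True

    return False
```

With the eye points of FIG. 5, for instance, a closed eye shortens the P1-P2 distance enough to trip the second criterion even when each point taken alone stays within the offset reference value.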

The following explains in detail how the status monitoring unit 23 performs step S16, using feature points and preset feature points corresponding to different facial features.

As shown in FIGS. 5A and 5B, take as an example a first preset feature point P1' and a second preset feature point P2' corresponding to the upper and lower edges of the driver's eye in the face image of the normal state in FIG. 5A, and a first feature point P1 and a second feature point P2 corresponding to the upper and lower edges of the eye in the face image of the closed-eye state in FIG. 5B. As FIG. 5B shows, because the driver's eye is closed, the coordinate position of the first feature point P1 is offset from the first preset feature point P1'. The status monitoring unit 23 can compare the first feature point P1 with the first preset feature point P1': on the one hand it can judge whether the distance between the coordinate values of P1 and P1' exceeds the feature point offset reference value, and on the other hand it can judge whether the difference between the distance from P1 to P2 and the distance from P1' to P2' exceeds the feature point variation reference value, thereby determining whether the driving state is abnormal.

As shown in FIGS. 6A and 6B, take as an example a first preset feature point P1' and a second preset feature point P2' corresponding to the upper and lower edges of the driver's eye, a third preset feature point P3' corresponding to the upper edge of the driver's eyebrow, and a fourth preset feature point P4' corresponding to the corner of the driver's mouth in the face image of the normal state in FIG. 6A, and a first feature point P1 and a second feature point P2 corresponding to the upper and lower edges of the eye, a third feature point P3 corresponding to the upper edge of the eyebrow, and a fourth feature point P4 corresponding to the corner of the mouth in the face image of the head-lowered state in FIG. 6B. As FIG. 6B shows, because the driver lowers the head, the positions of the first feature point P1, the second feature point P2, the third feature point P3, and the fourth feature point P4 are all offset from the corresponding first preset feature point P1', second preset feature point P2', third preset feature point P3', and fourth preset feature point P4'. The status monitoring unit 23 can, on the one hand, judge whether the distances between the coordinate values of P1, P2, P3, and P4 and those of the corresponding preset feature points P1', P2', P3', and P4' exceed the feature point offset reference value, and on the other hand judge whether the difference between the distance between any two feature points and the distance between the two corresponding preset feature points exceeds the feature point variation reference value, thereby determining whether the driving state is abnormal.

As shown in FIGS. 7A and 7B, in addition to the feature points and preset feature points corresponding to the facial features in FIGS. 5A to 6B, when the driver wears glasses the feature point identification unit 22 can also generate preset feature points and feature points corresponding to the positions of the glasses frame. Take as an example a fifth preset feature point P5' and a sixth preset feature point P6' corresponding to the upper and lower edges of the driver's glasses in the face image of the normal state in FIG. 7A, and a fifth feature point P5 and a sixth feature point P6 corresponding to the upper and lower edges of the glasses in the face image of the head-lowered state in FIG. 7B. As FIG. 7B shows, because the driver lowers the head, the coordinate positions of the fifth feature point P5 and the sixth feature point P6 are offset from the corresponding fifth preset feature point P5' and sixth preset feature point P6'. The status monitoring unit 23 can, on the one hand, judge whether the distances between the coordinate values of P5 and P6 and those of the corresponding P5' and P6' exceed the feature point offset reference value, and on the other hand judge whether the difference between the distance from P5 to P6 and the distance from P5' to P6' exceeds the feature point variation reference value, thereby determining whether the driver's state is abnormal.

As shown in FIGS. 8A and 8B, take as an example a third preset feature point P3' corresponding to the upper edge of the driver's eyebrow, a fourth preset feature point P4' corresponding to the corner of the driver's mouth, and a fifth preset feature point P5' and a sixth preset feature point P6' corresponding to the upper and lower edges of the driver's glasses in the face image of the normal state in FIG. 8A, and a third feature point P3 corresponding to the upper edge of the eyebrow, a fourth feature point P4 corresponding to the corner of the mouth, and a fifth feature point P5 and a sixth feature point P6 corresponding to the upper and lower edges of the glasses in the face image of the head-turned state in FIG. 8B. As FIG. 8B shows, because the driver turns the head, the coordinate positions of the third feature point P3, the fourth feature point P4, the fifth feature point P5, and the sixth feature point P6 are all offset from the corresponding third preset feature point P3', fourth preset feature point P4', fifth preset feature point P5', and sixth preset feature point P6'. The status monitoring unit 23 can, on the one hand, judge whether the distances between the coordinate values of P3, P4, P5, and P6 and those of the corresponding P3', P4', P5', and P6' exceed the feature point offset reference value, and on the other hand judge whether the difference between the distance between any two feature points and the distance between the two corresponding preset feature points exceeds the feature point variation reference value, thereby determining whether the driver's state is abnormal.

As shown in FIGS. 9A and 9B, if the driver wears sunglasses so that the captured image provides no information about the eye state, the status monitoring unit 23 can still judge the driver's state from the feature points corresponding to the upper and lower edges of the sunglasses and the feature points corresponding to the eyebrows or the mouth. Take as an example a third preset feature point P3' corresponding to the upper edge of the driver's eyebrow, a fourth preset feature point P4' corresponding to the corner of the driver's mouth, and a fifth preset feature point P5' and a sixth preset feature point P6' corresponding to the upper and lower edges of the driver's glasses in the face image of the normal state in FIG. 9A, and a third feature point P3 corresponding to the upper edge of the eyebrow, a fourth feature point P4 corresponding to the corner of the mouth, and a fifth feature point P5 and a sixth feature point P6 corresponding to the upper and lower edges of the glasses in the face image of the head-lowered state in FIG. 9B. As FIG. 9B shows, because the driver lowers the head, the coordinate positions of the third feature point P3, the fourth feature point P4, the fifth feature point P5, and the sixth feature point P6 are all offset from the corresponding third preset feature point P3', fourth preset feature point P4', fifth preset feature point P5', and sixth preset feature point P6'. The status monitoring unit 23 can, on the one hand, judge whether the distances between the coordinate values of P3, P4, P5, and P6 and those of the corresponding P3', P4', P5', and P6' exceed the feature point offset reference value, and on the other hand judge whether the difference between the distance between any two feature points and the distance between the two corresponding preset feature points exceeds the feature point variation reference value, thereby determining whether the driver's state is abnormal.

Besides judging the driver's state from a single face image, the in-vehicle driving monitoring system 1 of the present invention can also perform shake detection on a plurality of face images obtained within a preset time, to judge whether the driver's head is shaking or trembling. The status monitoring unit 23 calculates, over the plurality of face images, the average of the coordinate values of a feature point corresponding to the same facial feature along the same axis, and compares that average with the coordinate value of the corresponding preset feature point. When the numerical difference between the average and the coordinate value of the preset feature point exceeds a shake threshold, the status monitoring unit 23 judges that the driver's state is abnormal and outputs the warning signal to alert the driver.

As shown in FIG. 2, suppose five face images like the one in FIG. 2 are captured, and the coordinate values of the first feature point in the five face images are (152,210), (155,207), (152,225), (150,201), and (153,211). If the judgment is based on the coordinate values of the second axis (Y axis), the status monitoring unit 23 calculates the average of the second-axis coordinates of the first feature point across the face images, that is (210+207+225+201+211)/5 = 210.8, and compares that average with the second-axis coordinate value of the corresponding first preset feature point. When the numerical difference between the average and the coordinate value of the preset feature point exceeds the shake threshold, the status monitoring unit 23 judges that the driver is in an abnormal state such as shaking or swaying.
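The shake detection reduces to averaging one axis of one landmark over the buffered frames and comparing that mean with the preset coordinate. A sketch, where the preset value and the shake threshold are assumed for illustration:

```python
def jitter_detected(samples, preset_value, jitter_threshold=8.0):
    """Shake detection on one axis of one landmark.

    `samples` holds that landmark's coordinate values on the chosen axis,
    collected from several face images within the preset time window; the
    preset value and threshold passed in here are assumed, not the patent's.
    """
    mean_value = sum(samples) / len(samples)
    return abs(mean_value - preset_value) > jitter_threshold

# With the five Y values from the example above: (210+207+225+201+211)/5 = 210.8
print(jitter_detected([210, 207, 225, 201, 211], preset_value=205))
```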

Further, to improve the accuracy of the shake detection, the status monitoring unit 23 can first discard the two samples with the largest and smallest coordinate values and compute the average over the remaining ones. Again taking five face images as in FIG. 2, with the first feature point at (152,210), (155,207), (152,225), (150,201), and (153,211), and judging by the second-axis (Y-axis) coordinates, the status monitoring unit 23 finds that the first feature point with coordinates (150,201) has the smallest second-axis value and the one with coordinates (152,225) has the largest. It therefore discards the samples at (150,201) and (152,225) and computes the average of the remaining second-axis coordinates, that is (210+207+211)/3 ≈ 209.3, and compares that average with the second-axis coordinate value of the corresponding first preset feature point. When the numerical difference between the average and the coordinate value of the preset feature point exceeds the shake threshold, the status monitoring unit 23 judges that the driver is in an abnormal state such as shaking or swaying.
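The refinement in this paragraph is a simple trimmed mean: one minimum and one maximum sample are discarded before averaging. A sketch reusing the hypothetical preset value and threshold from the previous example:

```python
def trimmed_jitter_detected(samples, preset_value, jitter_threshold=8.0):
    """Shake detection after discarding the extreme samples.

    Drops one minimum and one maximum value on the chosen axis, then compares
    the mean of the remaining samples with the preset coordinate value.
    """
    trimmed = sorted(samples)[1:-1]
    mean_value = sum(trimmed) / len(trimmed)
    return abs(mean_value - preset_value) > jitter_threshold

# With the example values: dropping 201 and 225 leaves 207, 210, 211,
# whose mean is (210+207+211)/3 ≈ 209.3.
print(trimmed_jitter_detected([210, 207, 225, 201, 211], preset_value=205))
```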

The warning signal generated by the status monitoring unit 23 can control the electronic device on which the driving monitoring device 20 runs, so that the electronic device displays a warning message on its screen or emits a warning sound and light according to the warning signal, thereby alerting the driver.

In summary, the in-vehicle driving monitoring system 1 of the present invention captures, through the image capturing device 10, the driving image of the driver operating the vehicle, and the image capturing device 10 transmits the driving image to the driving monitoring device 20. The image processing unit 21 performs the image processing procedure on the driving image and transmits the processed face image to the feature point identification unit 22, which generates, from the facial features in the face image, the plurality of feature points corresponding to the positions of the different facial features. The status monitoring unit 23 then compares these feature points with the preset feature points of the driver's normal state and computes the coordinate differences between them, thereby judging whether the driver's current state deviates from the normal state. When the status monitoring unit 23 judges that the driver's state is abnormal, it alerts the driver with the warning signal, improving the driver's safety and reducing the possibility of traffic accidents caused by mental fatigue or drowsy driving.

1: In-vehicle driving monitoring system
10: Image capturing device
20: Driving monitoring device
21: Image processing unit
22: Feature point identification unit
23: Status monitoring unit
P1: First feature point
P1': First preset feature point
P2: Second feature point
P2': Second preset feature point
P3: Third feature point
P3': Third preset feature point
P4: Fourth feature point
P4': Fourth preset feature point
P5: Fifth feature point
P5': Fifth preset feature point
P6: Sixth feature point
P6': Sixth preset feature point

FIG. 1 is a block diagram of the in-vehicle driving monitoring system of the present invention.
FIG. 2 is a schematic view of a face image of the frontal face in a normal driving state.
FIG. 3 is a flow chart of the driving state monitoring steps performed by the in-vehicle driving monitoring system of the present invention.
FIG. 4 is a flow chart of the image processing procedure performed by the image processing unit.
FIG. 5A is a schematic view of a face image of the driver in a normal state.
FIG. 5B is a schematic view of a face image of the driver with the eyes closed.
FIG. 6A is a schematic view of a face image of the driver in a normal state.
FIG. 6B is a schematic view of a face image of the driver with the head lowered.
FIG. 7A is a schematic view of a face image of the driver in a normal state while wearing glasses.
FIG. 7B is a schematic view of a face image of the driver with the head lowered while wearing glasses.
FIG. 8A is a schematic view of a face image of the driver in a normal state while wearing glasses.
FIG. 8B is a schematic view of a face image of the driver turning the head while wearing glasses.
FIG. 9A is a schematic view of a face image of the driver in a normal state while wearing sunglasses.
FIG. 9B is a schematic view of a face image of the driver with the head lowered while wearing sunglasses.

1: In-vehicle driving monitoring system

10: Image capturing device

20: Driving monitoring device

21: Image processing unit

22: Feature point identification unit

23: Status monitoring unit

Claims (10)

1. An in-vehicle driving monitoring system, comprising: an image capturing device, which captures a driving image of the driver; and a driving monitoring device connected to the image capturing device, comprising: an image processing unit, which performs an image processing procedure on the driving image captured by the image capturing device and generates a face image; a feature point identification unit connected to the image processing unit, which generates, from the face image, a plurality of feature points corresponding to the facial features and expresses the positions of the plurality of feature points as coordinate values; and a status monitoring unit connected to the feature point identification unit, which internally stores a plurality of preset feature points corresponding to the plurality of feature points and compares the coordinate values of the plurality of feature points with the coordinate values of the plurality of preset feature points; when the distance between the coordinate value of a feature point and the coordinate value of its corresponding preset feature point exceeds a feature point offset reference value, or when the difference between the distance between two feature points and the distance between the two corresponding preset feature points exceeds a feature point variation reference value, the status monitoring unit judges that the driving state is abnormal and outputs a warning signal to alert the user.

2. The in-vehicle driving monitoring system as claimed in claim 1, wherein the image processing procedure comprises: the image processing unit extracting a face image from the driving image; the image processing unit performing brightness and contrast processing on the face image; the image processing unit converting the face image to grayscale; the image processing unit binarizing the grayscaled face image; and the image processing unit identifying the positions of the facial features from the binarized face image.

3. The in-vehicle driving monitoring system as claimed in claim 1, wherein the feature point identification unit takes a boundary point of the face image as the coordinate origin, establishes planar coordinate axes on the face image, and then generates on the face image the plurality of feature points corresponding to the state of the facial features.

4. The in-vehicle driving monitoring system as claimed in claim 1, wherein the plurality of preset feature points are obtained by the image capturing device capturing a driving image of the driver in a normal state, the image processing unit of the driving monitoring device performing the image processing procedure on that driving image to generate a face image, and the feature point identification unit generating, from that face image, the plurality of preset feature points corresponding to the positions of the facial features of the driver in the normal state, the plurality of preset feature points being stored in the status monitoring unit.

5. The in-vehicle driving monitoring system as claimed in claim 1, wherein the plurality of feature points comprise feature points respectively corresponding to the positions of the upper and lower edges of an eye, the corners of the mouth on both sides, or the eyebrows.

6. The in-vehicle driving monitoring system as claimed in claim 1, wherein the plurality of feature points comprise feature points corresponding to the position of a glasses frame.

7. The in-vehicle driving monitoring system as claimed in claim 1, wherein the image processing unit obtains a plurality of face images within a preset time, and the status monitoring unit performs shake detection based on the plurality of face images: the status monitoring unit calculates the average of the coordinate values of a feature point corresponding to the same facial feature across the plurality of face images and compares that average with the coordinate value of the corresponding preset feature point; when the numerical difference between the average and the coordinate value of the preset feature point exceeds a shake threshold, the status monitoring unit judges that the driving state is abnormal and outputs a warning signal.

8. The in-vehicle driving monitoring system as claimed in claim 1, wherein the image capturing device and the driving monitoring device perform image processing with H.264 video coding.

9. The in-vehicle driving monitoring system as claimed in claim 1, wherein the image capturing device and the driving monitoring device transmit the driving image over a Wi-Fi peer-to-peer (P2P) protocol.

10. The in-vehicle driving monitoring system as claimed in claim 1, wherein the image capturing device is an infrared lens.
TW109142224A 2020-12-01 2020-12-01 In-car driving monitoring system TWI741892B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW109142224A TWI741892B (en) 2020-12-01 2020-12-01 In-car driving monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW109142224A TWI741892B (en) 2020-12-01 2020-12-01 In-car driving monitoring system

Publications (2)

Publication Number Publication Date
TWI741892B (en) 2021-10-01
TW202222619A (en) 2022-06-16

Family

ID=80782380

Family Applications (1)

Application Number Title Priority Date Filing Date
TW109142224A TWI741892B (en) 2020-12-01 2020-12-01 In-car driving monitoring system

Country Status (1)

Country Link
TW (1) TWI741892B (en)


Also Published As

Publication number Publication date
TWI741892B (en) 2021-10-01
