TWI520076B - Method and apparatus for detecting a person using a handheld device - Google Patents

Info

Publication number
TWI520076B
Authority
TW
Taiwan
Prior art keywords
mouth
image
processing unit
area
sequence
Prior art date
Application number
TW103143288A
Other languages
Chinese (zh)
Other versions
TW201621756A (en)
Inventor
林伯聰
許佳微
Original Assignee
由田新技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 由田新技股份有限公司
Priority to TW103143288A
Priority to CN201510054941.8A
Application granted
Publication of TWI520076B
Publication of TW201621756A

Description

Method and device for detecting a person using a handheld device

The present invention relates to image recognition technology, and more particularly to a method and device that use image recognition to detect whether a person is using a handheld device.

With the rapid development of mobile communication and related technologies, people use handheld devices such as feature phones and smartphones to make calls, send messages, and even browse the Internet. Advances in semiconductor processes, materials, and mechanical design have also made handheld devices thin and light enough for convenient one-handed operation. The convenience of handheld devices has gradually made them inseparable from daily life.

On the other hand, while advances in transportation have promoted regional development, traffic accidents caused by improper driving have become a major threat to public safety. People often use handheld devices while driving, and the resulting distraction frequently leads to accidents. How to monitor, effectively and in real time, driving behavior and other situations in which a handheld device should not be used, so as to prevent accidents, is therefore a pressing problem in this field.

The present invention provides a method and device for detecting a person using a handheld device. Through image recognition, they determine the movement trajectory of the handheld device and the person's mouth motion, so as to judge accurately and quickly whether the person is using the handheld device.

The present invention provides a method for detecting a person using a handheld device, adapted for an electronic device and comprising the following steps. Capture an image sequence of the person. Analyze each image of the sequence to obtain a face object. Determine an ear-side region and a mouth region according to the face object. Detect a target object in each image of the sequence to compute the target object's movement trajectory. After detecting that the trajectory shows the target object moving toward the ear-side region, compare whether the mouth motion information detected in the mouth region matches preset mouth information, to determine whether the person is using a handheld device.

In an embodiment of the invention, detecting the target object in each image of the sequence to compute its movement trajectory comprises the following steps. Compute the vertical and horizontal projections of the target object to obtain its size range. Take a reference point within the size range. Obtain the trajectory from the position of the reference point in each image of the sequence.

In an embodiment of the invention, determining the ear-side region and the mouth region according to the face object comprises the following steps. Obtain the face object with a face detection algorithm. Search the face object for a nostril object. Based on the position of the nostril object, search horizontally for the ear-side region.

In an embodiment of the invention, after searching the face object for the nostril object, the method further comprises the following steps. Identify a nostril anchor point from the nostril object. Set a mouth area based on the nostril anchor point. Apply image processing to the mouth area to identify the person's mouth object. Determine the mouth region within the mouth area according to the mouth object.

In an embodiment of the invention, detecting that the trajectory shows the target object moving toward the ear-side region comprises the following steps. Obtain a region of interest according to the ear-side region. Apply an image subtraction algorithm to the respective regions of interest of the current image and a reference image in the sequence to obtain a target-area image. Filter the noise of the target-area image using the region of interest of the reference image to obtain the target object.

In an embodiment of the invention, after detecting that the trajectory shows the target object moving toward the ear-side region, comparing whether the mouth motion information detected in the mouth region matches the preset mouth information to determine whether the person is using the handheld device comprises the following steps. Obtain a mouth image and extract mouth features from it. Judge from the mouth features whether the mouth image is an open-mouth image or a closed-mouth image. Within a mouth recording period, sequentially record all closed-mouth and open-mouth images detected in the mouth region and convert them into a code sequence. Store the code sequence as the mouth motion information.

In an embodiment of the invention, after detecting that the trajectory shows the target object moving toward the ear-side region, comparing whether the mouth motion information detected in the mouth region matches the preset mouth information to determine whether the person is using the handheld device comprises the following steps. Within a mouth comparison period, compare the images of the mouth region against template images to generate mouth-shape codes. Store the mouth-shape codes in a code sequence. Store the code sequence as the mouth motion information.

The present invention provides a device for detecting a person using a handheld device. The device includes an image capture unit, a storage unit, and a processing unit. The image capture unit captures an image sequence of the person. The storage unit stores the image sequence and preset mouth information. The processing unit is coupled to the storage unit to obtain the image sequence. The processing unit analyzes each image of the sequence to obtain a face object, determines an ear-side region and a mouth region according to the face object, and detects a target object in each image of the sequence to compute the target object's movement trajectory. After the processing unit detects that the trajectory shows the target object moving toward the ear-side region, it compares whether the mouth motion information detected in the mouth region matches the preset mouth information, to determine whether the person is using a handheld device.

In an embodiment of the invention, the processing unit computes the vertical and horizontal projections of the target object to obtain its size range, takes a reference point within the size range, and obtains the movement trajectory from the position of the reference point in each image of the sequence.

In an embodiment of the invention, the processing unit obtains the face object with a face detection algorithm, searches the face object for a nostril object, and searches horizontally for the ear-side region based on the position of the nostril object.

In an embodiment of the invention, the processing unit identifies a nostril anchor point from the nostril object, sets a mouth area based on the nostril anchor point, applies image processing to the mouth area to identify the person's mouth object, and determines the mouth region within the mouth area according to the mouth object.

In an embodiment of the invention, the processing unit obtains a region of interest according to the ear-side region and applies an image subtraction algorithm to the respective regions of interest of the current image and a reference image in the sequence to obtain a target-area image. The processing unit filters the noise of the target-area image using the region of interest of the reference image to obtain the target object.

In an embodiment of the invention, the processing unit obtains a mouth image, extracts mouth features from it, and judges from those features whether the mouth image is an open-mouth or closed-mouth image. Within a mouth recording period, the processing unit sequentially records all closed-mouth and open-mouth images detected in the mouth region, converts them into a code sequence, and stores the code sequence as the mouth motion information.

In an embodiment of the invention, within a mouth comparison period, the processing unit compares the images of the mouth region against template images to generate mouth-shape codes. The processing unit stores the mouth-shape codes in a code sequence and stores the code sequence as the mouth motion information.

In an embodiment of the invention, the device further includes an alert module coupled to the processing unit. When the processing unit determines that the person is using a handheld device, it activates an alert procedure through the alert module.

Based on the above, embodiments of the invention use image recognition to monitor whether the movement trajectory of a target object heads toward the person's ear-side region, and then check whether the person's mouth motion matches the preset mouth information, in order to determine whether the person is using a handheld device. In this way, whether a person is using a handheld device can be judged effectively and accurately.

To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

100‧‧‧device

110‧‧‧image capture unit

130‧‧‧storage unit

150‧‧‧processing unit

S210~S290, S410~S490, S610~S690, S710~S770‧‧‧steps

300, 510, 520, 530, 540, 550‧‧‧images

310‧‧‧face object

320‧‧‧nostril object

511, 521, 551, R‧‧‧regions of interest

B‧‧‧reference point

C1, C2‧‧‧boundaries

E‧‧‧ear-side region

O‧‧‧target object

FIG. 1 is a block diagram of a device according to an embodiment of the invention.

FIG. 2 is a flowchart of a method for detecting a person using a handheld device according to an embodiment of the invention.

FIG. 3 is a schematic diagram of an image according to an embodiment of the invention.

FIG. 4 is an example flow for determining the mouth region according to an embodiment of the invention.

FIGS. 5A–5E are schematic diagrams of detecting a target object according to an embodiment of the invention.

FIG. 6 is an example flow for recording mouth motion information according to an embodiment of the invention.

FIG. 7 is an example flow for recording mouth motion information according to another embodiment of the invention.

When a person picks up a mobile phone to answer a call, the phone usually moves toward the side of the head so that the receiver faces the ear and the microphone sits close to the mouth. Accordingly, embodiments of the present invention monitor a person with a camera and use image recognition to judge whether the person is moving a handheld device toward the ear. The embodiments further judge whether the person's mouth motion matches preset mouth information. In this way, whether the person is using a handheld device can be judged effectively and accurately. Several embodiments consistent with the spirit of the invention are presented below; practitioners may adapt them to their needs and are not limited to the following description.

FIG. 1 is a block diagram of a device for detecting a person using a handheld device according to an embodiment of the invention. Referring to FIG. 1, the device 100 includes an image capture unit 110, a storage unit 130, and a processing unit 150. In one embodiment, the device 100 is installed in a vehicle to monitor the driver. In other embodiments, the device 100 may be used in an automated transaction device such as an Automated Teller Machine (ATM), for example to judge whether an operator is answering a handheld device while making a transfer. Note that those applying the embodiments may install the device 100 in any electronic device or place where it is necessary to monitor whether a person is using a handheld device; the embodiments impose no limitation.

The image capture unit 110 may be a video or still camera with a charge-coupled device (CCD) lens, a complementary metal-oxide-semiconductor (CMOS) lens, or an infrared lens. The image capture unit 110 captures images of the person and stores them in the storage unit 130.

The storage unit 130 may be any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk drive (HDD), a similar component, or a combination thereof.

The processing unit 150 is coupled to the image capture unit 110 and the storage unit 130. The processing unit 150 may be a central processing unit (CPU), or a chipset, microprocessor, or microcontroller (MCU) with computing capability. The processing unit 150 handles all operations of the device 100 in this embodiment: it obtains images through the image capture unit 110, stores them in the storage unit 130, and runs image processing procedures on them.

Note that in other embodiments the image capture unit 110 further includes an illumination component that provides fill light when ambient light is insufficient, ensuring the sharpness of the captured images.

To aid understanding, the following scenario illustrates an application of the invention. Suppose the device 100 is installed in a car and a driver sits in the driver's seat (for convenience, "the person" hereafter). The image capture unit 110 on the device 100 photographs the person, and the captured images may include the person's face, shoulders, or even upper body. Assume the handheld device may be placed anywhere, such as near the gear lever or on the dashboard. The embodiments below are described in detail against this scenario.

FIG. 2 is a flowchart of a method for detecting a person using a handheld device (for example, a feature phone, a smartphone, or another type of mobile phone) according to an embodiment of the invention. Referring to FIG. 2, the method of this embodiment applies to the device 100 of FIG. 1 and is described below with reference to the components of the device 100. The steps of the method may be adjusted to suit the implementation and are not limited to the order given here.

In step S210, the processing unit 150 captures an image sequence of the person through the image capture unit 110. For example, the image capture unit 110 may be set to a capture rate of 30 or 45 frames per second to photograph the person, and the image sequence of consecutively captured frames is stored in the storage unit 130.

In other embodiments, the processing unit 150 may also define a start condition in advance and enable the image capture unit 110 to capture images only when the condition is met. For example, a sensor (such as an infrared sensor) may be placed near the image capture unit 110 to detect whether a person is within capture range. When the infrared sensor detects a person in front of the image capture unit 110 (that is, the start condition is met), the processing unit 150 enables the image capture unit 110 to start capturing images. Alternatively, a start button may be provided on the device 100, and the processing unit 150 activates the image capture unit 110 only when the button is pressed. These are merely examples; the invention is not limited to them.

In addition, the processing unit 150 may perform background removal on the captured image sequence, for example by taking the difference between the I-th image and the (I+1)-th image, where I is a positive integer. The processing unit 150 may then convert the background-removed image to grayscale for subsequent processing.

Next, the processing unit 150 runs an image processing procedure on every image of the sequence. In step S230, the processing unit 150 analyzes each image of the image sequence to obtain a face object. Specifically, the processing unit 150 analyzes the image sequence to extract facial features (for example, eyes, nose, lips) and then matches those features to locate the face object in the image. For example, the storage unit 130 stores a feature database containing facial feature patterns, and the processing unit 150 obtains the face object by matching against the samples in that database. For the face detection itself, embodiments of the invention may use the AdaBoost algorithm or other face detection algorithms (for example, Principal Component Analysis (PCA), Independent Component Analysis (ICA), or detection based on Haar-like features) to obtain the face object in each image.
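
As a concrete illustration of the Haar-like feature option mentioned above, here is a minimal sketch in Python with OpenCV; the library choice, cascade file, and detection parameters are assumptions for illustration, not taken from the patent:

```python
import cv2

# OpenCV's bundled frontal-face Haar cascade, standing in for the patent's
# feature database of facial patterns.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

def detect_face_object(frame):
    """Return the largest detected face as (x, y, w, h), or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Keep the largest face, assuming the monitored person is closest to the camera.
    return max(faces, key=lambda f: f[2] * f[3])
```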

In other embodiments, before detecting facial features, the processing unit 150 may first perform background removal. For example, the processing unit 150 may capture in advance, through the image capture unit 110, at least one background image containing no person; after obtaining an image of the person, the processing unit 150 subtracts the background image from it, thereby removing the background. The processing unit 150 may then convert the background-removed image to grayscale and binarize it, and detect the facial features in the binarized image.

In step S250, the processing unit 150 determines at least one ear-side region and a mouth region according to the face object.

In one embodiment, to locate the ear-side region more precisely, after obtaining the face object the processing unit 150 searches it for a nostril object and then searches horizontally for the ear-side region based on the nostril object's position. For example, it searches to the left and right of the nostril object for the boundaries of the cheeks. The processing unit 150 then uses the found boundaries as references, together with the relative position of the face and the ears, to obtain the ear-side regions on both sides. Next, the processing unit 150 can derive a region of interest (ROI) from each ear-side region.

For example, FIG. 3 is a schematic diagram of an image according to an embodiment of the invention. After detecting the face object 310 in the image 300, the processing unit 150 obtains the nostril object 320, finds the left and right boundaries C1 and C2 from the nostril object 320, and derives the ear-side regions with the boundaries C1 and C2 as references. For brevity, only the boundary C1 of one cheek is described; the same applies to the boundary C2 of the other cheek. Taking the coordinates of the boundary C1 as a reference, the processing unit 150 obtains the ear-side region E using a preset size range, and then derives the region of interest R from the ear-side region E using another preset size range. The size, position, and shape of the ear-side region E and the region of interest R in FIG. 3 may differ in other embodiments; the embodiments impose no limitation.
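
A minimal sketch of the geometry just described, deriving the ear-side region E from a cheek boundary and expanding it into the region of interest R; every offset and size range below is an illustrative assumption, since the patent leaves the preset ranges to the implementer:

```python
def ear_side_region(boundary_x, nostril_y, face_h, left_side=True):
    """Derive ear-side region E as (x, y, w, h) from a cheek boundary x-coordinate.

    The preset size range (a square of 0.25 * face height) is an assumed value.
    """
    size = int(0.25 * face_h)
    x = boundary_x - size if left_side else boundary_x
    y = nostril_y - size // 2            # ears sit roughly level with the nose
    return (x, y, size, size)

def region_of_interest(ear_region, margin=20):
    """Expand E by an assumed margin (in pixels) to obtain the ROI R."""
    x, y, w, h = ear_region
    return (x - margin, y - margin, w + 2 * margin, h + 2 * margin)
```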

In another embodiment, the processing unit 150 identifies a nostril anchor point from the nostril object, sets a mouth area based on the nostril anchor point, applies image processing to the mouth area to identify the person's mouth object, and determines the mouth region within the mouth area according to the mouth object.

For example, FIG. 4 illustrates an example flow for determining the mouth region according to an embodiment of the invention. The processing unit 150 sets the mouth area based on the nostril position information. Exploiting differences such as the color contrast between the lips and the skin and teeth, it enhances the contrast within the mouth area to obtain an enhanced image (step S410), and then removes noise from the enhanced image, for example by filtering speckles through a pixel matrix, to obtain a denoised image that is cleaner than the enhanced one (step S430). Next, the processing unit 150 performs edge sharpening according to the contrast between colors in the image to delineate the edges of the denoised image, obtaining a sharpened image (step S450). Because image complexity determines the memory an image occupies, the processing unit 150 binarizes the sharpened image to improve matching performance: it sets a threshold and splits the pixels into two values, above or below the threshold, to obtain a binarized image (step S470). Finally, the processing unit 150 applies edge sharpening to the binarized image again. At this point the person's lips stand out clearly in the binarized image, and the processing unit 150 extracts the mouth region from the mouth area (step S490).
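
The pipeline of FIG. 4 maps naturally onto standard image operations. A sketch in Python/OpenCV, with assumed choices for each step (histogram equalization for contrast, a median filter for speckle removal, a 3x3 sharpening kernel, Otsu's method for the threshold):

```python
import cv2
import numpy as np

def extract_mouth_region(mouth_area_gray):
    """Steps S410-S490 on a grayscale mouth-area image."""
    # S410: contrast enhancement.
    enhanced = cv2.equalizeHist(mouth_area_gray)
    # S430: speckle removal over a pixel neighborhood.
    denoised = cv2.medianBlur(enhanced, 5)
    # S450: edge sharpening.
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    sharpened = cv2.filter2D(denoised, -1, kernel)
    # S470: binarization against a threshold.
    _, binary = cv2.threshold(sharpened, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # S490: second sharpening pass; the lips now stand out for extraction.
    return cv2.filter2D(binary, -1, kernel)
```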

Note that those applying the embodiments may determine the ear-side region and the mouth region according to design requirements, for example adjusting for the facial features of different people (face width, ear size, lip width, and so on); the invention is not limited in this respect.

In step S270, the processing unit 150 detects a target object in each image of the image sequence to compute the target object's movement trajectory. The target object is, for example, the handheld device, which the processing unit 150 detects in every image; a watch worn on the wrist or a finger may also serve as the target object, depending on design requirements. In one embodiment, the processing unit 150 derives a region of interest from the ear-side region and applies an image subtraction algorithm to the respective regions of interest (for example, the region of interest R of FIG. 3) of the current image and a reference image (which may be a previous image, such as the immediately preceding image or the one N frames earlier, or any preset image) to obtain a target-area image. The processing unit 150 then filters the noise of the target-area image using the region of interest of the reference image to obtain the target object.

For example, FIGS. 5A–5E are schematic diagrams of detecting a target object according to an embodiment of the invention. For clarity, the gray levels of FIGS. 5A–5E are omitted and only the edges of the gray regions are drawn. FIG. 5A shows a reference image 510 and its region of interest 511. FIG. 5B shows the current image 520 captured by the image capture unit 110, its region of interest 521, and the ear-side region E. FIG. 5C shows the target-area image 530. FIG. 5D shows the filter-area image 540. FIG. 5E shows the area image 550 containing the target object O.

Specifically, after the processing unit 150 applies the image subtraction algorithm to the region of interest 511 of the reference image 510 and the region of interest 521 of the current image 520, it obtains the target-area image 530 containing the differences between the two images. That is, the target-area image 530 is the result of the image subtraction algorithm applied to the regions of interest 511 and 521. In the target-area image 530, noise from non-target objects is drawn with dashed lines. Next, to filter out the noise and isolate the target object, the processing unit 150 applies an edge detection algorithm and a dilation algorithm to the region of interest 511 of the reference image 510 to obtain the filter-area image 540. The processing unit 150 then applies the image subtraction algorithm to the target-area image 530 and the filter-area image 540, obtaining the area image 550 containing the target object O as shown in FIG. 5E.
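
A sketch of this subtraction-and-mask procedure, assuming grayscale ROIs of equal size; the difference threshold, Canny limits, and dilation kernel are illustrative values:

```python
import cv2
import numpy as np

def detect_target_object(ref_roi, cur_roi):
    """Isolate the moving target object within the region of interest."""
    # Image subtraction between reference and current ROI (target-area image, FIG. 5C).
    diff = cv2.absdiff(ref_roi, cur_roi)
    _, target_area = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    # Edge detection + dilation on the reference ROI (filter-area image, FIG. 5D).
    edges = cv2.Canny(ref_roi, 50, 150)
    filter_area = cv2.dilate(edges, np.ones((5, 5), np.uint8))
    # Removing the filter-area image leaves the target object O (FIG. 5E).
    return cv2.subtract(target_area, filter_area)
```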

After obtaining the target object, the processing unit 150 computes its movement trajectory from the image sequence. In one embodiment, the processing unit 150 computes the vertical and horizontal projections of the target object to obtain its size range. For example, the processing unit 150 computes the vertical projection to obtain the target object's length along the vertical axis, and the horizontal projection to obtain its width along the horizontal axis. From this length and width, the processing unit 150 obtains the target object's size range.

Next, the processing unit 150 takes a reference point within the size range and obtains the movement trajectory from the position of that reference point in each image of the sequence. The processing unit 150 may take one point of the target object as the reference point and collect the reference point's position in every image to obtain the target object's trajectory. Taking FIG. 5E as an example, the processing unit 150 computes the length and width of the target object O to obtain the size range 551 and takes its upper-left vertex B as the reference point. The target objects in subsequent images likewise use the upper-left vertex of their size ranges as the reference point, so the target object's trajectory can be read off the reference points of the successive images. Using the upper-left vertex of the size range as the reference point is only an example; the embodiments are not limited to it.
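
A sketch of the projection-based size range and the reference-point trajectory, assuming each frame yields a binary mask of the target object:

```python
import numpy as np

def reference_point(target_mask):
    """Upper-left vertex B of the target object's size range, or None if empty."""
    cols = target_mask.sum(axis=0)    # horizontal extent from per-column sums
    rows = target_mask.sum(axis=1)    # vertical extent from per-row sums
    xs, ys = np.flatnonzero(cols), np.flatnonzero(rows)
    if xs.size == 0 or ys.size == 0:
        return None
    return (int(xs[0]), int(ys[0]))

def trajectory(target_masks):
    """Collect the reference point across the image sequence."""
    return [p for p in (reference_point(m) for m in target_masks) if p is not None]
```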

In step S290, after the processing unit 150 detects that the trajectory shows the target object moving toward an ear-side region (for example, the left or right ear-side region), it compares whether the mouth motion information detected in the mouth region matches the preset mouth information, to determine whether the person is using a handheld device.

For example, the storage unit 130 may store a preset trajectory in advance, and the processing unit 150 compares the detected trajectory against the preset trajectory to judge whether the detected trajectory heads toward the ear-side region. The preset trajectory may be a straight line from any point of the region of interest, at any angle, to the ear-side region, or any irregular path. In addition, the device 100 may record in advance, through the image capture unit 110, several instances of a person picking up the handheld device, analyze the recorded trajectories, and store the result as the preset trajectory.
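
One simple way to realize this comparison is to test whether the reference points approach the ear-side region over the sequence. This monotone-distance heuristic is an assumption, since the patent allows any preset trajectory, straight or irregular:

```python
def moves_toward(traj, ear_region, min_steps=3):
    """True if the trajectory's reference points approach the ear-side region's centre."""
    ex, ey, ew, eh = ear_region
    cx, cy = ex + ew / 2.0, ey + eh / 2.0
    dists = [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in traj]
    if len(dists) < min_steps:
        return False
    # Require a mostly decreasing distance sequence (80% of steps move closer).
    closer = sum(1 for a, b in zip(dists, dists[1:]) if b < a)
    return closer >= 0.8 * (len(dists) - 1)
```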

Then, after the processing unit 150 detects that the trajectory heads toward the ear-side region, it keeps comparing, based on the subsequent image sequence captured by the image capture unit 110, whether the mouth motion information detected in the mouth region matches the preset mouth information. The preset mouth information is, for example, a code sequence, an image change rate, or a pixel change rate. Within a comparison period (for example, 2 or 5 seconds), the processing unit 150 checks whether the mouth motion information obtained from the image sequence matches the preset code sequence, image change rate, or pixel change rate. If the mouth motion information matches the preset mouth information, the processing unit 150 determines that the person is using a handheld device; otherwise, it determines that the person is not.

Note that before comparing the mouth motion information detected in the mouth region against the preset mouth information, the processing unit 150 first records the mouth motion information for the later comparison. The following embodiments explain how.

In one embodiment, the processing unit 150 obtains a mouth image, extracts mouth features from it, and judges from the features whether the mouth image is an open-mouth or closed-mouth image. Within a mouth recording period, the processing unit 150 sequentially records all closed-mouth and open-mouth images detected in the mouth region, converts them into a code sequence, and stores the code sequence as the mouth motion information.

For example, FIG. 6 illustrates an example flow for recording mouth motion information according to an embodiment of the invention. Referring to FIG. 6, the processing unit 150 extracts several mouth features from the mouth region (step S610); the mouth features include, for example, the upper lip and the lower lip. Specifically, the processing unit 150 extracts the mouth features by finding the left and right boundaries of the mouth region to define the left and right mouth corners. Likewise, by finding the contour lines of the upper and lower sides of the mouth region and using the line connecting the left and right mouth corners, it identifies the upper lip and the lower lip. Next, the processing unit 150 compares the gap between the upper and lower lips against a gap threshold (for example, 0.5 cm or 1 cm) (step S620) and judges whether the gap exceeds the threshold (step S630). If it does, the user's mouth is open and an open-mouth image is obtained (step S640); otherwise, the processing unit 150 obtains a closed-mouth image (step S650).

The processing unit 150 generates a code from the closed-mouth or open-mouth image and stores the code in the code sequence (for example, in the N-th slot, where N is a positive integer) (step S660). The code sequence may be binary, or encoded in the manner of Morse code. For example, define an open-mouth image as 1 and a closed-mouth image as 0; if the person opens the mouth for two time units and then closes it for two time units, the code sequence is (1, 1, 0, 0). Next, the processing unit 150 judges whether the mouth recording period has elapsed (step S670); for example, it starts a timer at step S610 and checks at step S670 whether the timer has reached the recording period. If the timer has not yet reached the recording period, the processing unit 150 sets N = N + 1 (step S680) and returns to step S610 to keep classifying the mouth's open or closed state, and the next code is stored in the next slot of the code sequence (for example, slot N + 1). Each slot N represents one time unit (for example, 200 ms or 500 ms), and the codes stored in the slots record the order of all open-mouth and closed-mouth images within each time unit.

Note that in this example a delay (for example, 100 ms or 200 ms) may be inserted into the flow of steps S610 through S680 so that one pass through the flow takes exactly one time unit, making each slot N correspond to one time unit. Finally, the processing unit 150 stores the code sequence as the mouth motion information (step S690).
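
The recording loop of FIG. 6 can be sketched as follows; the lip-gap threshold is expressed in pixels rather than centimetres, and the callback, unit time, and recording time are assumed values:

```python
import time

def record_mouth_code(get_lip_gap, gap_threshold=12, unit_time=0.2, record_time=2.0):
    """Steps S610-S690: build the open/closed code sequence (1 = open, 0 = closed).

    `get_lip_gap` is an assumed callback returning the current distance in
    pixels between the upper and lower lip in the latest mouth-region image.
    """
    code_sequence = []
    start = time.monotonic()
    while time.monotonic() - start < record_time:      # recording-time check (S670)
        gap = get_lip_gap()                            # S610/S620: measure the lip gap
        code_sequence.append(1 if gap > gap_threshold else 0)   # S630-S660
        time.sleep(unit_time)                          # delay so each slot is one unit
    return code_sequence                               # stored as mouth motion info (S690)
```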

In another embodiment, within a mouth comparison period, the processing unit 150 compares the images of the mouth region against template images to generate mouth-shape codes. The processing unit 150 stores the mouth-shape codes in a code sequence and stores the code sequence as the mouth motion information.

For example, FIG. 7 illustrates an example flow for recording mouth motion information according to another embodiment of the invention. In this embodiment, the mouth motion information may also represent a sequence composed of several mouth shapes. Referring to FIG. 7, the processing unit 150 compares the image of the mouth region against several template images in the storage unit 130 (step S710). A template image may be a distinctive, recognizable mouth motion or lip pattern, for example the movements of the muscles around the mouth when reading aloud syllables of the Japanese gojūon, Chinese phrases such as 「喂、您好、請說、我是」 ("hello", "how do you do", "please speak", "this is..."), or the English word "hello". Each template image allows a certain tolerance: even if the mouth shape in the person's face image differs slightly from the template, the processing unit 150 still recognizes a match as long as the difference stays within that tolerance.

Next, the processing unit 150 judges whether the image of the mouth region matches a template image (step S720). If it matches, the processing unit 150 generates a mouth-shape code and stores it in the code sequence (for example, in the M-th slot, where M is a positive integer) (step S730). If it does not match, the processing unit 150 sets M = M + 1 (step S740) and returns to step S710. Next, the processing unit 150 judges whether the mouth comparison period has elapsed (step S750); for example, it starts a timer at step S710 and checks at step S750 whether the timer has reached the comparison period. When the timer reaches the comparison period, the processing unit 150 stores the code sequence as the mouth motion information (step S770). If the timer has not yet reached the comparison period, the processing unit 150 sets M = M + 1 (step S760) and returns to step S710 to keep comparing the image of the mouth region against the template images.
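
A sketch of the template comparison using normalized cross-correlation; the match threshold stands in for the patent's tolerance and, like the template dictionary, is an assumption (templates must be no larger than the mouth-region image):

```python
import cv2

def mouth_shape_code(mouth_img, templates, match_threshold=0.8):
    """Steps S710-S730: return the code of the best-matching template, or None.

    `templates` maps an assumed mouth-shape code to a grayscale template image.
    """
    best_code, best_score = None, match_threshold
    for code, tmpl in templates.items():
        # Normalized correlation tolerates small shifts and lighting changes.
        score = cv2.matchTemplate(mouth_img, tmpl, cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_code, best_score = code, score
    return best_code
```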

On the other hand, a person usually starts speaking only after bringing the handheld device near the ear, once the device sits in a position suitable for a call or the person can hear the sound from the device's receiver. Therefore, in one embodiment, the processing unit 150 further judges whether the target object (for example, the handheld device) dwells in the ear-side region longer than a preset time (for example, 1 second or 3 seconds). When the dwell time exceeds the preset time, the processing unit 150 compares whether the mouth motion information detected in the mouth region matches the preset mouth information.

In addition, the device 100 of the embodiments may further include an alert module coupled to the processing unit 150. The alert module may be one of, or a combination of, a display module, a light module, a vibration module, and a speaker module. When the processing unit 150 determines that the person is using a handheld device, it activates an alert procedure through the alert module. Specifically, the processing unit 150 sends a prompt signal to the alert module, which then alerts the person accordingly. For example, the display module may show text or images stating the warning (for example, "Caution! Do not use a handheld device while driving!"); the light module may flash at a specific frequency or emit light of a specific color (for example, red or green); the vibration module may include a vibration motor that vibrates at a fixed or varying frequency; and the speaker module may emit an alert tone.

In some embodiments, the preset mouth information stored in advance in the storage unit 130 may be preset mouth-shape code sequences assembled from combinations of the template images, each corresponding to a prompt signal. For example, when a user is being coerced, the person can silently mouth a distress phrase without making a sound. The processing unit 150 can then, in a way that is hard to notice, have the alert module generate a distress signal and send it to a security center for help (the alert module may include a communication module).

Note that the driving scenario above (which could equally involve an aircraft, a ship, and so on) is given to aid the description of the embodiments; the embodiments also apply to automated transaction devices and other electronic devices or places that monitor whether a person is using a handheld device.

In summary, the device of the embodiments uses image recognition to judge whether the movement trajectory of a target object heads toward the person's ear-side region, and then judges whether the person's mouth motion matches the preset mouth information. When both judgments hold, the device determines that the person is using a handheld device and may issue a prompt signal to alert the person. The embodiments can thus monitor, effectively and in real time, driving behavior and other situations unsuited to handheld device use; for example, drivers can stay more alert, and automated transaction devices can help police units respond quickly to telephone fraud.

Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone with ordinary knowledge in the art may make minor changes and refinements without departing from the spirit and scope of the invention; the scope of protection of the invention is therefore defined by the appended claims.

S210~S290‧‧‧steps

Claims (15)

一種偵測人員使用手持裝置的方法,適用於一電子裝置,包括:擷取一人員的一影像序列;分析該影像序列的每一個影像,以獲得一臉部物件;依據該臉部物件決定至少一耳側位置區域以及一嘴部位置區域;於該影像序列的每一個影像,偵測一目標物體,以計算該目標物體的一移動軌跡;以及其中在偵測到該移動軌跡為該目標物體朝向該至少一耳側位置區域移動之後,比對在該嘴部位置區域中所偵測的一嘴部動作資訊是否符合一預設嘴部資訊,來判斷該人員是否正在使用一手持裝置。 A method for detecting a person using a handheld device, which is applicable to an electronic device, comprising: capturing a sequence of images of a person; analyzing each image of the sequence of images to obtain a face object; determining at least according to the face object An ear side position area and a mouth position area; detecting, for each image of the image sequence, a target object to calculate a moving track of the target object; and wherein detecting the moving track as the target object After moving toward the at least one ear side position area, it is determined whether the person is using a handheld device by comparing whether a mouth motion information detected in the mouth position area conforms to a preset mouth information. 如申請專利範圍第1項所述的方法,其中於該影像序列的每一個影像,偵測該目標物體,以計算該目標物體的該移動軌跡的步驟包括:計算該目標物體的一垂直投影量與一水平投影量,以獲得該目標物體的一尺寸範圍;於該尺寸範圍內取一基準點;以及藉由該影像序列的該每一個影像中的該基準點的位置,獲得該移動軌跡。 The method of claim 1, wherein detecting the target object in each image of the image sequence to calculate the movement trajectory of the target object comprises: calculating a vertical projection amount of the target object And a horizontal projection amount to obtain a size range of the target object; taking a reference point within the size range; and obtaining the movement trajectory by the position of the reference point in each of the images of the image sequence. 如申請專利範圍第1項所述的方法,其中依據該臉部物件 決定至少一耳側位置區域以及一嘴部位置區域的步驟包括:藉由一人臉偵測演算法獲得該臉部物件;於該臉部物件中搜尋一鼻孔物件;以及基於該鼻孔物件的位置,往一水平方向搜尋該耳側位置區域。 The method of claim 1, wherein the facial object is Determining the at least one ear side location area and the one mouth location area includes: obtaining the facial object by a face detection algorithm; searching for a nostril object in the facial object; and based on the position of the nostril object, Search for the ear side location area in a horizontal direction. 如申請專利範圍第3項所述的方法,其中於該臉部物件中搜尋該鼻孔物件的步驟之後,更包括:由該鼻孔物件中辨識出一鼻孔定位點;基於該鼻孔定位點設定一嘴部區域;以及對該嘴部區域的影像進行一影像處理以判斷該人員之一嘴部物件;以及依據該嘴部物件在該嘴部區域決定該嘴部位置區域。 The method of claim 3, wherein after the step of searching for the nostril object in the facial object, the method further comprises: identifying a nostril positioning point from the nostril object; setting a mouth based on the nostril positioning point And an image processing of the image of the mouth region to determine a mouth object of the person; and determining the mouth position region in the mouth region according to the mouth object. 如申請專利範圍第1項所述的方法,其中在偵測到該移動軌跡為該目標物體朝向該至少一耳側位置區域中移動的步驟包括:依據該耳側位置區域獲得一興趣區域;將該影像序列中之一當前影像與一參考影像兩者各自的該興趣區域,執行一影像相減演算法,以獲得一目標區域影像;以及藉由該參考影像的該興趣區域,濾除該目標區域影像的雜訊,以獲得該目標物體。 The method of claim 1, wherein the detecting that the movement trajectory moves the target object toward the at least one ear side position region comprises: obtaining an interest region according to the ear side position region; Performing an image subtraction algorithm on the region of interest of each of the current image and a reference image in the image sequence to obtain a target region image; and filtering the target region by using the region of interest of the reference image The noise of the area image is obtained to obtain the target object. 
如申請專利範圍第1項所述的方法,其中在偵測到該移動軌跡為該目標物體朝向該至少一耳側位置區域移動之後,比對在該嘴部位置區域中所偵測的該嘴部動作資訊是否符合該預設嘴部 資訊,來判斷該人員是否正在使用該手持裝置的步驟包括:取得至少一嘴部影像,且依據該至少一嘴部影像取得多個嘴部特徵;依據該些嘴部特徵來判斷該至少一嘴部影像為一張開動作影像或一閉合動作影像;在一嘴部紀錄時間內,依序紀錄該嘴部位置區域中所偵測到的所有該閉合動作影像或該張開動作影像並轉換成一編碼序列;以及將該編碼序列存入該嘴部動作資訊。 The method of claim 1, wherein the mouth detected in the mouth position region is aligned after detecting that the movement track is the target object moving toward the at least one ear side position region Whether the motion information of the department meets the preset mouth The step of determining whether the person is using the handheld device comprises: obtaining at least one mouth image, and obtaining a plurality of mouth features according to the at least one mouth image; determining the at least one mouth according to the mouth features The image of the part is an open motion image or a closed motion image; during the recording time of the mouth, all the closed motion images or the opening motion images detected in the mouth position area are sequentially recorded and converted into one a coding sequence; and storing the coded sequence in the mouth motion information. 如申請專利範圍第1項所述的方法,其中在偵測到該移動軌跡為該目標物體朝向該至少一耳側位置區域移動之後,比對在該嘴部位置區域中所偵測的該嘴部動作資訊是否符合該預設嘴部資訊,來判斷該人員是否正在使用該手持裝置的步驟包括:在一嘴部比對時間內,將該嘴部位置區域的影像與多個樣板影像進行比對,以產生一嘴型編碼;將該嘴型編碼存入一編碼序列中;以及將該編碼序列存入該嘴部動作資訊。 The method of claim 1, wherein the mouth detected in the mouth position region is aligned after detecting that the movement track is the target object moving toward the at least one ear side position region The step of determining whether the person is using the handheld device includes: comparing the image of the mouth position area with the plurality of template images during a mouth comparison time. Pairing to generate a mouth type code; storing the mouth type code in a code sequence; and storing the code sequence in the mouth motion information. 一種偵測人員使用手持裝置的裝置,包括:一影像擷取單元,擷取一人員的一影像序列;一儲存單元,儲存該影像序列以及一預設嘴部資訊;以及一處理單元,耦接至該儲存單元以取得該影像序列;其中該處理單元分析該影像序列的每一個影像,以獲得一臉部物件;該 處理單元依據該臉部物件決定至少一耳側位置區域以及一嘴部位置區域;於該影像序列的每一個影像中偵測一目標物體,以計算該目標物體的一移動軌跡;在該處理單元偵測到該移動軌跡為該目標物體朝向該至少一耳側位置區域移動之後,比對在該嘴部位置區域中所偵測的一嘴部動作資訊是否符合一預設嘴部資訊,來判斷該人員是否正在使用一手持裝置。 A device for detecting a person using a handheld device includes: an image capturing unit that captures a sequence of images of a person; a storage unit that stores the image sequence and a preset mouth information; and a processing unit coupled Go to the storage unit to obtain the image sequence; wherein the processing unit analyzes each image of the image sequence to obtain a facial object; The processing unit determines at least one of the ear side position area and a mouth position area according to the facial object; detecting a target object in each image of the image sequence to calculate a moving track of the target object; After detecting that the moving track is the target object moving toward the at least one ear side position area, determining whether the mouth motion information detected in the mouth position area meets a preset mouth information is determined. Whether the person is using a handheld device. 如申請專利範圍第8項所述的裝置,其中該處理單元計算該目標物體的一垂直投影量與一水平投影量,以獲得該目標物體的一尺寸範圍,於該尺寸範圍內取一基準點,且藉由該影像序列的該每一個影像中的該基準點的位置,獲得該移動軌跡。 The apparatus of claim 8, wherein the processing unit calculates a vertical projection amount and a horizontal projection amount of the target object to obtain a size range of the target object, and takes a reference point within the size range. 
10. The device of claim 8, wherein the processing unit obtains the facial object by a face detection algorithm, searches for a nostril object in the facial object, and searches in a horizontal direction for the ear side position area based on the position of the nostril object.

11. The device of claim 10, wherein the processing unit identifies a nostril anchor point from the nostril object, sets a mouth region based on the nostril anchor point, performs image processing on the image of the mouth region to determine a mouth object of the person, and determines the mouth position area within the mouth region according to the mouth object.

12. The device of claim 8, wherein the processing unit obtains a region of interest according to the ear side position area, performs an image subtraction algorithm on the region of interest of a current image and of a reference image in the image sequence to obtain a target area image, and filters out noise of the target area image by means of the region of interest of the reference image, so as to obtain the target object.

13. The device of claim 8, wherein the processing unit obtains at least one mouth image, extracts a plurality of mouth features from the at least one mouth image, determines according to the mouth features whether the at least one mouth image is an open-motion image or a closed-motion image, and, within a mouth recording time, sequentially records all of the closed-motion images or open-motion images detected in the mouth position area, converts them into a code sequence, and stores the code sequence as the mouth motion information.

14. The device of claim 8, wherein, within a mouth comparison time, the processing unit compares the image of the mouth position area with a plurality of template images to generate a mouth-shape code, stores the mouth-shape code in a code sequence, and stores the code sequence as the mouth motion information.

15. The device of claim 8, further comprising: an alert module, wherein when the processing unit determines that the person is using the handheld device, an alert procedure is initiated through the alert module.
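Claims 3 and 4 (and their device counterparts, claims 10 and 11) locate the ear side position areas and the mouth position area from a detected face and its nostrils. The sketch below shows one way such a step could look in Python with OpenCV; the patent only names "a face detection algorithm", so the Haar cascade, the dark-blob nostril search, and every ratio used here are illustrative assumptions, not the patented implementation.

```python
"""Minimal region-localization sketch for claims 3-4 / 10-11.
All geometric ratios are assumed for illustration."""
import cv2
import numpy as np

# Stock OpenCV frontal-face cascade; the patent does not specify a detector.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def locate_regions(gray):
    """Return (face, ear_areas, mouth_area) as (x, y, w, h) boxes on a
    grayscale frame, or None if no face is found."""
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep largest face

    # Nostril anchor point: darkest pixel in the central band of the face.
    # A stand-in for the claimed "nostril object" search.
    band = gray[y + h // 2 : y + 2 * h // 3, x + w // 3 : x + 2 * w // 3]
    ny, nx = np.unravel_index(np.argmin(band), band.shape)
    nostril = (x + w // 3 + nx, y + h // 2 + ny)

    # Ear side position areas: search horizontally outward from the
    # nostril row (claim 3); here, fixed boxes at both face borders.
    ear_h, ear_w = h // 3, w // 4
    left_ear = (x - ear_w // 2, nostril[1] - ear_h // 2, ear_w, ear_h)
    right_ear = (x + w - ear_w // 2, nostril[1] - ear_h // 2, ear_w, ear_h)

    # Mouth position area: a box set below the nostril anchor (claim 4).
    mouth = (nostril[0] - w // 4, nostril[1] + h // 12, w // 2, h // 4)
    return (x, y, w, h), [left_ear, right_ear], mouth
```

The ear-side boxes may extend past the frame border and would need clamping in practice; that detail is omitted to keep the sketch short.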
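Claims 2 and 5 (and claims 9 and 12) describe finding the target object by image subtraction inside a region of interest, bounding it with the vertical and horizontal projection amounts, and tracking a reference point per frame to form the movement trajectory. A minimal sketch, assuming simple frame differencing, a fixed threshold, a median-blur noise filter, and the mask centroid as the reference point; none of these choices are dictated by the claims.

```python
"""Target-object detection and trajectory sketch for claims 2, 5 / 9, 12."""
import cv2
import numpy as np

def detect_target(reference_roi, current_roi, diff_thresh=30):
    """Return ((x, y, w, h), centroid) for the moving object inside the
    region of interest, or None if nothing changed enough. Both inputs
    are grayscale crops of the same ear-side region of interest."""
    # Image subtraction between current and reference ROI (claim 5).
    diff = cv2.absdiff(current_roi, reference_roi)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.medianBlur(mask, 5)  # crude stand-in for noise filtering

    # Vertical / horizontal projection amounts bound the object (claim 2).
    cols = mask.sum(axis=0)  # vertical projection per column
    rows = mask.sum(axis=1)  # horizontal projection per row
    xs, ys = np.flatnonzero(cols), np.flatnonzero(rows)
    if xs.size == 0 or ys.size == 0:
        return None
    x0, x1, y0, y1 = xs[0], xs[-1], ys[0], ys[-1]

    # Reference point within the size range: here, the mask centroid.
    m = cv2.moments(mask, binaryImage=True)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    return (x0, y0, x1 - x0 + 1, y1 - y0 + 1), (cx, cy)

def moving_toward_ear(trajectory, min_steps=3):
    """Toy trajectory test: x drifts monotonically toward the ear side
    over the last few frames (the direction convention is assumed)."""
    if len(trajectory) < min_steps:
        return False
    xs = [p[0] for p in trajectory[-min_steps:]]
    return all(b > a for a, b in zip(xs, xs[1:]))
```

In use, detect_target would run on the crop around each ear side position area for every frame, each centroid would be appended to a per-area trajectory list, and moving_toward_ear would then test that list.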
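Claims 6 and 7 (and claims 13 to 15) encode the mouth's open or closed state over a recording window into a code sequence and compare it with the preset mouth information. A toy version, assuming an Otsu-thresholded aspect-ratio test for open versus closed, 0/1 coding, and an alternation-count comparison; these specifics, and the trigger_alert hook at the end, are hypothetical and do not come from the patent.

```python
"""Mouth-coding and comparison sketch for claims 6-7 / 13-15."""
import cv2
import numpy as np

def classify_mouth(mouth_gray, open_ratio=0.45):
    """Return 1 for an open-motion image, 0 for a closed-motion image,
    from the height/width ratio of the dark inner-mouth blob."""
    _, mask = cv2.threshold(mouth_gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return 0
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return 1 if h / w > open_ratio else 0

def record_sequence(mouth_crops):
    """Claim 6: label every frame in the mouth recording time and
    convert the run of labels into a code sequence string."""
    return "".join(str(classify_mouth(c)) for c in mouth_crops)

def matches_preset(code_sequence, preset="0101", min_alternations=2):
    """The final comparison of claim 1, reduced to a toy rule: speech
    should alternate open/closed at least a few times."""
    alternations = sum(a != b for a, b in
                       zip(code_sequence, code_sequence[1:]))
    return alternations >= min_alternations or preset in code_sequence

# Claim 15: once the trajectory test and the mouth comparison both pass,
# the device would start its alert procedure, e.g.:
#   if moving_toward_ear(trajectory) and matches_preset(seq):
#       trigger_alert()  # hypothetical alert-module hook
```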
TW103143288A 2014-12-11 2014-12-11 Method and apparatus for detecting person to use handheld device TWI520076B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
TW103143288A TWI520076B (en) 2014-12-11 2014-12-11 Method and apparatus for detecting person to use handheld device
CN201510054941.8A CN105989328A (en) 2014-12-11 2015-02-03 Method and device for detecting use of handheld device by person

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW103143288A TWI520076B (en) 2014-12-11 2014-12-11 Method and apparatus for detecting person to use handheld device

Publications (2)

Publication Number Publication Date
TWI520076B 2016-02-01
TW201621756A TW201621756A (en) 2016-06-16

Family

ID=55810280

Family Applications (1)

Application Number Title Priority Date Filing Date
TW103143288A TWI520076B (en) 2014-12-11 2014-12-11 Method and apparatus for detecting person to use handheld device

Country Status (2)

Country Link
CN (1) CN105989328A (en)
TW (1) TWI520076B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10152642B2 (en) 2016-12-16 2018-12-11 Automotive Research & Testing Center Method for detecting driving behavior and system using the same

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106569599B (en) * 2016-10-24 2020-05-01 百度在线网络技术(北京)有限公司 Method and device for automatically seeking help
CN108345819B (en) * 2017-01-23 2020-09-15 杭州海康威视数字技术股份有限公司 Method and device for sending alarm message
CN110705510B (en) * 2019-10-16 2023-09-05 杭州优频科技有限公司 Action determining method, device, server and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100449579C (en) * 2006-04-21 2009-01-07 浙江工业大学 All-round computer vision-based electronic parking guidance system
CN100468245C (en) * 2007-04-29 2009-03-11 浙江工业大学 Air conditioner energy saving controller based on omnibearing computer vision
CN101344967B (en) * 2008-09-02 2011-03-16 西北工业大学 Detection method for small mobile objective in astronomical image
CN102034334B (en) * 2009-09-28 2012-12-19 财团法人车辆研究测试中心 Driver monitoring method and monitoring system thereof
CN102841676A (en) * 2011-06-23 2012-12-26 鸿富锦精密工业(深圳)有限公司 Webpage browsing control system and method
CN102494676B (en) * 2011-12-12 2013-07-03 中国科学院长春光学精密机械与物理研究所 Satellite automatic recognition device under complicated backgrounds
CN102592143B (en) * 2012-01-09 2013-10-23 清华大学 Method for detecting phone holding violation of driver in driving
CN102799317B (en) * 2012-07-11 2015-07-01 联动天下科技(大连)有限公司 Smart interactive projection system
CN103366506A (en) * 2013-06-27 2013-10-23 北京理工大学 Device and method for automatically monitoring telephone call behavior of driver when driving
CN103886287B (en) * 2014-03-12 2017-02-22 暨南大学 Perspective-crossing gait recognition method based on 3D projection

Also Published As

Publication number Publication date
CN105989328A (en) 2016-10-05
TW201621756A (en) 2016-06-16

Similar Documents

Publication Publication Date Title
TWI603270B (en) Method and apparatus for detecting person to use handheld device
US11527055B2 (en) Feature density object classification, systems and methods
CN106557726B (en) Face identity authentication system with silent type living body detection and method thereof
WO2017071065A1 (en) Area recognition method and apparatus
CN110956061B (en) Action recognition method and device, and driver state analysis method and device
US9354615B2 (en) Device, operating method and computer-readable recording medium for generating a signal by detecting facial movement
TWI520076B (en) Method and apparatus for detecting person to use handheld device
TWI507998B (en) Video analysis
KR101139963B1 (en) Method and apparatus for preventing driver from driving while drowsy based on detection of driver's pupils
US10441198B2 (en) Face detection device, face detection system, and face detection method
TWI492193B (en) Method for triggering signal and electronic apparatus for vehicle
JP2010191793A (en) Alarm display and alarm display method
WO2015131571A1 (en) Method and terminal for implementing image sequencing
Roy Unsupervised sparse, nonnegative, low rank dictionary learning for detection of driver cell phone usage
TWI528331B (en) Attention detecting device, method, computer readable medium, and computer program products
TWI550440B (en) Method and system for detecting person to use handheld apparatus
CN109447000B (en) Living body detection method, stain detection method, electronic apparatus, and recording medium
Liang et al. Non-intrusive eye gaze direction tracking using color segmentation and Hough transform
CN112061065A (en) In-vehicle behavior recognition alarm method, device, electronic device and storage medium
JP2011086051A (en) Eye position recognition device
Bari et al. Android based object recognition and motion detection to aid visually impaired
Nair et al. Driver assistant system using Haar cascade and convolution neural networks (CNN)
Mariappan et al. A labVIEW design for frontal and non-frontal human face detection system in complex background
Padmapriya et al. Detecting driver cell-phone usage based on deep learning technique
Gaikwad et al. Driver Assistance Systems with Driver Drowsiness Detection Using Haar-Cascade Algorithm

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees