JP2008276406A - Face image processor - Google Patents

Face image processor

Info

Publication number
JP2008276406A
Authority
JP
Japan
Prior art keywords
face
face image
line
image processing
mouth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2007117517A
Other languages
Japanese (ja)
Inventor
Takeshi Sasuga
岳史 流石
Takehiko Tanaka
勇彦 田中
Futoshi Tsuda
太司 津田
Fumio Sugaya
文男 菅谷
Shinichi Kojima
真一 小島
Takashi Naito
貴志 内藤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toyota Motor Corp
Toyota Central R&D Labs Inc
Original Assignee
Toyota Motor Corp
Toyota Central R&D Labs Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toyota Motor Corp, Toyota Central R&D Labs Inc filed Critical Toyota Motor Corp
Priority to JP2007117517A priority Critical patent/JP2008276406A/en
Publication of JP2008276406A publication Critical patent/JP2008276406A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a face image processor that reduces erroneous determinations and thereby recognizes a person's face accurately.

SOLUTION: An image is processed based on an acquired video signal, and the face of a driver D is recognized on the condition that a line A connecting both nostrils P15, P16 and a line B connecting both lateral end portions P17, P18 of the mouth are parallel to each other. Restricting the recognition condition with this structural feature of the face reduces erroneous determinations without increasing the computational load. The face of the driver D can therefore be recognized with high accuracy, and the driver's face orientation angle can be detected precisely.

COPYRIGHT: (C)2009,JPO&INPIT

Description

The present invention relates to a face image processing apparatus that recognizes a person's face based on captured image data.

Conventionally, a technique is known that uses an imaging unit to capture a face and recognizes the orientation of a person's face from the captured face image (see, for example, Patent Document 1). In the technique described in Patent Document 1, a pair of face parts is extracted, and the face orientation is detected based on the positions of the extracted parts.
Japanese Patent Laid-Open No. 10-307923

In the prior art, when selecting face parts from the extracted face part candidates, the combination closest to a preset three-dimensional face model, judged by the relative positions of the eyes, nose, and mouth, was selected as the correct answer. FIG. 4 shows an example of a face image taken by a face image capturing camera and illustrates this conventional face part extraction method. However, when the nose was extracted based on the position of the nostril center P1, the prior art sometimes selected the eyebrows as an eye candidate or the mustache as a mouth candidate.

As shown in FIG. 4, both ends P2 and P3 of the right eyebrow could be extracted and erroneously detected as a right eye candidate, while the line connecting the right end P6 of the mustache and the left end P7 of the mouth could be erroneously detected as the mouth. For example, if the point P1 is detected as the nostril center, the line connecting points P2 and P3 as the right eye, the line connecting points P4 and P5 as the left eye, and the line connecting points P6 and P7 as the mouth, the face may be erroneously recognized as tilted to the right even when it is actually facing forward. Extracting the nose from the "positions of the pair of nostrils" instead of the "position of the nostril center" could prevent such misidentification of face parts, but in that case the increase in computational load becomes a problem.

The present invention has been made to solve this problem, and its object is to provide a face image processing apparatus that reduces erroneous determinations and can recognize a person's face with high accuracy.

A face image processing apparatus according to the present invention recognizes a person's face based on captured image data, and is characterized in that the face is recognized on the condition that a first line, which connects both nostrils, and a second line, which connects both lateral end portions of the mouth, are parallel to each other.

With such a face image processing apparatus, the face is recognized on the condition (taken as the correct answer) that the line connecting both nostrils and the line connecting both lateral ends of the mouth are parallel. By using this structural feature of the face, namely that the line through both nostrils and the line through the mouth corners are parallel, to narrow the conditions for recognizing a face, erroneous determinations (selecting the eyebrows, mustache, and so on) can be reduced without increasing the computational load. Since the person's face can thus be recognized with high accuracy, the face orientation angle can be detected precisely.
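The parallelism condition above can be sketched in a few lines of code. This is an illustrative sketch, not the patent's implementation: the angular tolerance `tol_deg`, the function names, and the point coordinates are all assumptions (the patent only states that "parallel" includes substantially parallel, without giving a tolerance).

```python
import math

def line_angle(p, q):
    """Angle (radians) of the line through points p and q, folded into [0, pi)."""
    return math.atan2(q[1] - p[1], q[0] - p[0]) % math.pi

def roughly_parallel(p1, p2, q1, q2, tol_deg=5.0):
    """True if the line p1-p2 and the line q1-q2 are parallel within tol_deg degrees."""
    diff = abs(line_angle(p1, p2) - line_angle(q1, q2))
    diff = min(diff, math.pi - diff)  # undirected line angles wrap around at pi
    return math.degrees(diff) <= tol_deg

# Nostrils P15, P16 and mouth corners P17, P18 (illustrative coordinates)
nostrils = ((45, 60), (55, 60))
mouth = ((35, 80), (65, 81))
print(roughly_parallel(*nostrils, *mouth))  # a slight tilt still counts as parallel
```

Treating "parallel" as an angular tolerance rather than exact slope equality keeps the check robust to pixel-level noise in the extracted candidate points.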

It is also preferable to recognize the face on the condition that a third line, which connects both eyes, is parallel to the first and second lines. This further reduces erroneous determinations and improves the detection accuracy of the face orientation angle.

According to the face image processing apparatus of the present invention, erroneous determinations can be reduced while suppressing any increase in computational load, and the face can be recognized with high accuracy.

Preferred embodiments of the present invention are described below with reference to the drawings. In the description of the drawings, identical or equivalent elements are given the same reference numerals, and redundant description is omitted. This embodiment describes the application of a face image processing electronic control unit (hereinafter, "face image processing ECU"), which is the face image processing apparatus of the present invention, to a face orientation angle detection device. FIG. 1 is a schematic configuration diagram of a face orientation angle detection device including the face image processing ECU according to an embodiment of the present invention, and FIG. 2 shows an example of a face image captured by the face image capturing camera, together with the straight lines A to C.

The face orientation angle detection device 100 shown in FIG. 1 detects, for example, whether the driver D is looking aside. It includes a face image capturing camera 2 that captures a face image of the driver D, and a face image processing ECU 3 that performs image processing based on the video signal from the camera 2. The face image capturing camera 2 is installed, for example, on the upper surface of the column cover 9 and acquires the face image of the driver D.

The face image processing ECU 3 comprises a CPU that performs the arithmetic processing, ROM and RAM serving as a storage unit, an input signal circuit, an output signal circuit, a power supply circuit, and the like. Based on the input video signal, it performs image processing to recognize the face image of the driver D and can detect the driver's face orientation angle.

The CPU of the face image processing ECU 3 executes a program stored in the storage unit to implement an eye position extraction unit, a nose position extraction unit, a mouth position extraction unit, a straight line calculation unit, a fitness value calculation unit, and a face orientation angle determination unit. The storage unit also holds a plurality of three-dimensional face models in advance.

The eye position extraction unit extracts, as eye position candidate points from the input video signal, both end portions P11, P12 of the right eye and both end portions P13, P14 of the left eye, as shown in FIG. 2. The nose position extraction unit extracts both nostrils P15, P16 as nose position candidate points from the input video signal and registers both nostril positions. The mouth position extraction unit extracts the lateral end portions P17, P18 of the mouth as mouth position candidate points from the input video signal.

The straight line calculation unit calculates a straight line A (first line) connecting both nostrils, a straight line B (second line) connecting both lateral ends of the mouth, and a straight line C (third line) connecting both eyes. Line A connects the nose position candidate points P15 and P16 extracted by the nose position extraction unit. Line B connects the mouth position candidate points P17 and P18 extracted by the mouth position extraction unit. Line C is set by an approximation based on the eye position candidate points P11 to P14 extracted by the eye position extraction unit.
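The three lines can be computed directly from the candidate points. In this sketch (the coordinates and function names are illustrative, not from the patent), lines A and B each pass through two points, while line C is fitted through the four eye endpoints P11 to P14 by ordinary least squares, which is one plausible reading of the "approximation" mentioned above.

```python
def two_point_slope(p, q):
    """Slope of the line through p and q (assumes a non-vertical line)."""
    return (q[1] - p[1]) / (q[0] - p[0])

def least_squares_line(points):
    """Fit y = a*x + b through the points by ordinary least squares."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for (x, y) in points)
    a = sxy / sxx
    return a, my - a * mx

# Candidate points in the spirit of Fig. 2 (coordinates are illustrative)
P15, P16 = (45, 60), (55, 60)                    # nostrils -> line A
P17, P18 = (35, 80), (65, 80)                    # mouth corners -> line B
eyes = [(20, 40), (35, 40), (55, 41), (70, 41)]  # P11..P14 -> line C

slope_A = two_point_slope(P15, P16)
slope_B = two_point_slope(P17, P18)
slope_C, _ = least_squares_line(eyes)
print(slope_A, slope_B, round(slope_C, 3))  # -> 0.0 0.0 0.024
```

With the slopes in hand, the parallelism condition of the claims reduces to comparing slope_A, slope_B (and optionally slope_C) within a tolerance.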

When the straight lines A to C are parallel, the fitness value calculation unit refers to the three-dimensional model stored in the storage unit and calculates a fitness value for judging whether the relative positions of the eye position candidate points P11 to P14, the nose position candidate points P15, P16, and the mouth position candidate points P17, P18 are plausible as a facial configuration. When the fitness value is greater than or equal to a predetermined threshold, the fitness value calculation unit takes the extracted eye, nose, and mouth position candidate points as the correct answer and recognizes the face of the driver D. When there are multiple combinations of eye, nose, and mouth position candidate points, the combination with the highest fitness value is selected as the correct combination of face parts (eyes, nose, mouth).
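The threshold test and best-combination selection can be sketched as follows. The fitness function itself is not disclosed in the patent (it compares candidate geometry against the stored three-dimensional face models), so here it is a stand-in lookup; the candidate labels and score values are purely illustrative.

```python
def select_best(candidates, fitness, threshold):
    """Keep candidates whose fitness value meets the threshold and return
    the highest-scoring one, or None if nothing qualifies (cf. steps S7-S8)."""
    passed = [c for c in candidates if fitness(c) >= threshold]
    return max(passed, key=fitness) if passed else None

# Stand-in fitness values for three candidate (eyes, nose, mouth) combinations
scores = {"eyebrows-as-eyes": 0.42, "true-eyes": 0.91, "mustache-as-mouth": 0.35}
print(select_best(scores, scores.get, threshold=0.6))  # -> true-eyes
```

Returning None when no combination passes mirrors the flowchart's fallback of returning to step S1 and reprocessing the next frame.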

The face orientation angle determination unit calculates the face orientation angle of the driver D based on the selected combination of face parts.

Next, the control processing executed by the face image processing ECU 3 is described along the flowchart of FIG. 3, which shows the operation procedure of that control processing. First, the face image processing ECU 3 receives the video signal from the face image capturing camera 2, which captures the face image of the driver D (S1). The ECU 3 then performs image processing on the input video signal, extracting the eye position candidate points P11 to P14 (S2), the nose position candidate points P15, P16 (S3), and the mouth position candidate points P17, P18 (S4).

Next, the face image processing ECU 3 calculates the straight line A connecting the nose position candidate points P15, P16, the straight line B connecting the mouth position candidate points P17, P18, and the straight line C through the eye position candidate points P11 to P14 (S5). It then calculates the fitness value for judging whether the relative positions of the eye, nose, and mouth position candidate points are plausible as a facial configuration (S6).

Next, the face image processing ECU 3 determines whether the fitness value is greater than or equal to a predetermined threshold (S7). If so, the process proceeds to step S8; if not, the process returns to step S1 and repeats steps S1 to S7.

In step S8, the face image processing ECU 3 selects the combination of face part candidates (eye, nose, and mouth position candidate points) with the highest fitness value as the correct face parts and recognizes the face of the driver D. In step S9, the ECU 3 calculates the face orientation angle of the driver D based on the positions of the selected face parts.
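The patent does not detail how step S9 derives the angle from the selected part positions. Purely as an illustration of the idea, a toy yaw estimate can be obtained from where the nostril midpoint sits between the mouth corners; the function name, the geometry, and the coordinates below are all assumptions, not the disclosed method.

```python
import math

def toy_yaw_estimate(nose_mid_x, mouth_left_x, mouth_right_x):
    """Very rough yaw proxy: where the nose midpoint sits horizontally between
    the mouth corners. 0.0 means centred (facing front); the sign gives the
    direction. An illustrative stand-in only, not the patent's computation."""
    centre = (mouth_left_x + mouth_right_x) / 2.0
    half_w = (mouth_right_x - mouth_left_x) / 2.0
    ratio = max(-1.0, min(1.0, (nose_mid_x - centre) / half_w))
    return math.degrees(math.asin(ratio))

print(round(toy_yaw_estimate(50, 35, 65), 1))    # centred nose -> 0.0 degrees
print(round(toy_yaw_estimate(57.5, 35, 65), 1))  # offset nose  -> 30.0 degrees
```

A production system would instead fit the selected 2D points to the stored three-dimensional face model and recover the head pose from that fit.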

In this way, the face orientation angle detection device 100 calculates the straight line A connecting both nostrils, the straight line B connecting both lateral ends of the mouth, and the straight line C connecting both eyes, and recognizes the face of the driver D on the condition that the lines A to C are parallel to one another. By using this structural feature of the face, namely that the line through the eye corners, the line through the nostrils, and the line through the mouth corners are parallel, to narrow the recognition conditions, erroneous determinations (selecting the eyebrows, mustache, and so on) are reduced without increasing the computational load, so the face of the driver D can be recognized with high accuracy. As a result, the driver's face orientation angle can be detected precisely, and hence inattention such as looking aside can be detected accurately. Such a face orientation angle detection device 100 may also be applied to a wakefulness determination device that detects whether the driver D is dozing.

Although the present invention has been described concretely based on the above embodiment, the invention is not limited to it. In the embodiment, the face of the driver D is recognized on the condition that the line A connecting both nostrils, the line B connecting both lateral ends of the mouth, and the line C connecting both eyes are parallel; however, the face may instead be recognized with the parallelism of at least lines A and B as one of the conditions. Note that "parallel" here includes being substantially parallel.

FIG. 1 is a schematic configuration diagram of a face orientation angle detection device including the face image processing ECU according to an embodiment of the present invention.
FIG. 2 shows an example of a face image captured by the face image capturing camera, together with the straight lines A to C.
FIG. 3 is a flowchart showing the operation procedure of the control processing executed by the face image processing ECU.
FIG. 4 shows an example of a face image captured by the face image capturing camera and illustrates a conventional face part extraction method.

Explanation of symbols

2: face image capturing camera; 3: face image processing ECU; 100: face orientation angle detection device; D: driver; P11 to P14: eye position candidate points; P15, P16: nose position candidate points; P17, P18: mouth position candidate points; A: straight line A (first line); B: straight line B (second line); C: straight line C (third line).

Claims (2)

1. A face image processing apparatus for recognizing a person's face based on captured image data, wherein the face is recognized on the condition that a first line, which connects both nostrils, and a second line, which connects both lateral end portions of the mouth, are parallel to each other.

2. The face image processing apparatus according to claim 1, wherein the face is recognized on the condition that a third line, which connects both eyes, the first line, and the second line are parallel.
JP2007117517A 2007-04-26 2007-04-26 Face image processor Pending JP2008276406A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007117517A JP2008276406A (en) 2007-04-26 2007-04-26 Face image processor


Publications (1)

Publication Number Publication Date
JP2008276406A true JP2008276406A (en) 2008-11-13

Family

ID=40054298

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2007117517A Pending JP2008276406A (en) 2007-04-26 2007-04-26 Face image processor

Country Status (1)

Country Link
JP (1) JP2008276406A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10307923A (en) * 1997-05-01 1998-11-17 Mitsubishi Electric Corp Face parts extraction device and face direction detection device
JP2004361989A (en) * 2003-05-30 2004-12-24 Seiko Epson Corp Image selection system, image selection program, and image selection method
JP2007042136A (en) * 2006-10-16 2007-02-15 Nec Corp Method and apparatus for comparing object, and recording medium stored with its program
JP2007257321A (en) * 2006-03-23 2007-10-04 Nissan Motor Co Ltd Face portion tracing method and its device


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101976360A (en) * 2010-10-27 2011-02-16 西安电子科技大学 Sparse characteristic face recognition method based on multilevel classification
CN101976360B (en) * 2010-10-27 2013-02-27 西安电子科技大学 Sparse characteristic face recognition method based on multilevel classification

Similar Documents

Publication Publication Date Title
JP5127583B2 (en) Object determination apparatus and program
JP2001022933A (en) Face image processor using two-dimensional template
JP5061563B2 (en) Detection apparatus, biological determination method, and program
JP5737401B2 (en) Eyelid detection device
US11900707B2 (en) Skeleton information determination apparatus, skeleton information determination method and computer program
US9542607B2 (en) Lane boundary line recognition device and computer-readable storage medium storing program of recognizing lane boundary lines on roadway
JP2010039788A (en) Image processing apparatus and method thereof, and image processing program
JP2018005357A (en) Information processor and information processing method
JP2005149370A (en) Imaging device, personal authentication device and imaging method
JP2011089784A (en) Device for estimating direction of object
EP3958208A1 (en) Image processing device, image processing method, and image processing program
JP4840978B2 (en) IMAGING DEVICE, ITS CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM
JP6574988B2 (en) Verification device and verification method
KR101610496B1 (en) Method and apparatus for gaze tracking
JP2008276406A (en) Face image processor
JP5035139B2 (en) Eye image processing device
JP4825737B2 (en) Eye opening degree determination device
JP2009015656A (en) Face image processor
JP5376254B2 (en) Subject recognition device
JP2010134489A (en) Visual line detection device and method, and program
JP2011159030A (en) Subject authentication apparatus, subject authentication method and program
KR20140114283A (en) Information processing device
CN106210529B (en) The image pickup method and device of mobile terminal
JP2008146132A (en) Image detection device, program, and image detection method
KR101545408B1 (en) Method for detecting profile line and device for detecting profile line

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20090623

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20110502

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110510

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20110705

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110816

A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20111213