JP2009211563A - Image recognition device, image recognition method, image recognition program, gesture operation recognition system, gesture operation recognition method, and gesture operation recognition program - Google Patents


Info

Publication number
JP2009211563A
Authority
JP
Japan
Prior art keywords
region
camera
subject
gesture
image recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2008055564A
Other languages
Japanese (ja)
Other versions
JP5174492B2 (en)
Inventor
Toru Yamaguchi
亨 山口
Shoichiro Sakurai
翔一朗 櫻井
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tama TLO Co Ltd
Tokyo Metropolitan Public University Corp
Original Assignee
Tama TLO Co Ltd
Tokyo Metropolitan Public University Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tama TLO Co Ltd, Tokyo Metropolitan Public University Corp
Priority to JP2008055564A
Publication of JP2009211563A
Application granted
Publication of JP5174492B2
Expired - Fee Related


Landscapes

  • Image Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide an image recognition device capable of improving gesture recognition accuracy while performing processing in a short time, together with a corresponding image recognition method, image recognition program, gesture operation recognition system, gesture operation recognition method, and gesture operation recognition program.

SOLUTION: The gesture operation recognition program JP includes an image recognition program IP and a gesture discrimination program RP. The image recognition program IP has: an imaging means 10 that captures images of the same user with a first digital camera 3A and a second digital camera 3B; a detection means 11 that compares the image information of the two cameras at the same time to distinguish a user area UA from a background area BA; and a specifying means 12 that further detects the user's head area A1 and hand area A2 within the user area UA and calculates three-dimensional coordinates of the feature points P1 and P2 of the areas A1 and A2. The gesture discrimination program RP has a gesture discrimination means 13 that tracks the feature points P1 and P2 to discriminate the user's gesture.

COPYRIGHT: (C)2009,JPO&INPIT

Description

The present invention relates to an image recognition device, an image recognition method, an image recognition program, a gesture motion recognition system, a gesture motion recognition method, and a gesture motion recognition program.

We acquire a wide variety of information every day through various information tools. Such tools, however, often have so many functions that they can be difficult to master, so interfaces that can be understood as intuitively as possible are needed. Research has therefore been conducted on having machines such as robots read pointing actions, one form of gesture interface grounded in natural communication (see, for example, Non-Patent Documents 1 and 2).

In these image processing approaches, a linear transformation is applied to the images obtained from two cameras, and all feature points of the human silhouette are extracted to obtain three-dimensional information about the person and his or her motion. Research has also been conducted on using color information of a person's head and hands obtained from small digital cameras, so that the system can understand gestures without the person using any special equipment (see, for example, Non-Patent Document 3).

Non-Patent Document 1: Takahiro Suzuki, Akihisa Ohya and Shin'ichi Yuta, "Operation Direction to a Mobile Robot by Projection Lights," Proc. 2005 IEEE Workshop on Advanced Robotics and its Social Impacts, 2005.
Non-Patent Document 2: David J. Cannon, Geb Thomas, Collin Wang and T. Kesavadas, "A Virtual Reality Based Point-and-direct Robotics System with Instrumented Glove," International Journal of Industrial Engineering, vol. 1, no. 2, pp. 139-148, 1994.
Non-Patent Document 3: Toru Yamaguchi et al., "Intelligent Space and Human Centered Robotics," IEEE Transactions on Industrial Electronics, vol. 50, no. 5, pp. 881-889, 2003.

However, because the above conventional gesture motion recognition devices track the subject on the basis of RGB information, the subject must wear a hat and gloves of predetermined colors in advance so that the positions of the head and hands can be identified before a gesture can be discriminated. In addition, because these devices rely on changes between image information acquired at different times, processing takes a long time.

The present invention has been made in view of the above circumstances, and its object is to provide an image recognition device, an image recognition method, an image recognition program, a gesture motion recognition system, a gesture motion recognition method, and a gesture motion recognition program capable of improving gesture identification accuracy while performing processing in a short time.

To solve the above problems, the present invention employs the following means.
An image recognition device according to the present invention is connected to a first camera and a second camera arranged at different positions, and comprises: an imaging means that captures images of the same subject with the first camera and the second camera; a detection means that compares the image information of the first camera and the second camera at the same time to distinguish a subject area from a background area; and a specifying means that further detects a head area and a hand area of the subject within the identified subject area and calculates three-dimensional coordinates of a feature point of each of those areas.

An image recognition method according to the present invention comprises: an imaging step of capturing images of the same subject with a first camera and a second camera arranged at different positions; a detection step of comparing the image information of the first camera and the second camera at the same time to distinguish a subject area from a background area; and a specifying step of further detecting a head area and a hand area of the subject within the identified subject area and calculating three-dimensional coordinates of a feature point of each of those areas.

An image recognition program according to the present invention causes a computer connected to a first camera and a second camera arranged at different positions to function as: an imaging means that captures images of the same subject with the first camera and the second camera; a detection means that compares the image information of the first camera and the second camera at the same time to distinguish a subject area from a background area; and a specifying means that further detects a head area and a hand area of the subject within the identified subject area and calculates three-dimensional coordinates of a feature point of each of those areas.

According to this invention, three-dimensional information about the subject's head and hands can be obtained without, as in the prior art, first marking the head and hands (the parts from which the feature points are calculated) with color information that differs from the background.

The image recognition device according to the present invention may further be characterized in that the detection means treats a region where the difference between the respective pixel values of the first camera and the second camera is relatively large as the subject area, and a region where the difference between the pixel values is relatively small as the background area.

Likewise, the image recognition method according to the present invention may be characterized in that the detection step treats a region where the difference between the respective pixel values of the first camera and the second camera is relatively large as the subject area, and a region where the difference between the pixel values is relatively small as the background area.

Likewise, the image recognition program according to the present invention may be characterized in that the detection means performs processing that treats a region where the difference between the respective pixel values of the first camera and the second camera is relatively large as the subject area, and a region where the difference between the pixel values is relatively small as the background area.

Because an object close to both the first camera and the second camera shows large parallax while an object far from both cameras shows small parallax, this invention can separate a subject near the two cameras from the background.

The image recognition device according to the present invention may further be characterized in that the specifying means identifies the top of the subject area as the head area and a region more sharply pointed than the top as the hand area.

Likewise, the image recognition method according to the present invention may be characterized in that the specifying step identifies the top of the subject area as the head area and a region more sharply pointed than the top as the hand area.

Likewise, the image recognition program according to the present invention may be characterized in that the specifying means performs processing that identifies the top of the subject area as the head area and a region more sharply pointed than the top as the hand area.

According to this invention, the direction in which the hand is pointing can be determined from the movement of the feature points calculated in three-dimensional coordinates.

A gesture motion recognition system according to the present invention comprises the image recognition device according to the present invention and a gesture discrimination means that tracks the identified feature points to discriminate the subject's gesture.

A gesture motion recognition method according to the present invention comprises the image recognition method according to the present invention and a gesture discrimination step of tracking the identified feature points to discriminate the subject's gesture.

A gesture motion recognition program according to the present invention comprises the image recognition program according to the present invention and a gesture discrimination means that tracks the identified feature points to discriminate the subject's gesture.

According to this invention, by defining the relationship between the subject's intentions and gestures in advance, a gesture can be identified from the hand movement captured by the cameras, and the subject's intention can then be read from that gesture.

According to the present invention, gesture identification accuracy can be improved while processing is performed in a short time.

An embodiment of the present invention will be described with reference to FIGS. 1 to 3.
A gesture motion recognition system 1 according to this embodiment comprises an image recognition module 2 for three-dimensional information calculation (image recognition device), a first digital camera (first camera) 3A and a second digital camera (second camera) 3B connected to the image recognition module 2, a result confirmation display 5, a command input computer 6, and a gesture determination processing computer 7.

The image recognition module 2 for three-dimensional information calculation comprises a ROM (read-only memory) storing the image recognition program IP and the data needed for processing, a RAM (random access memory) for temporarily holding data, and a CPU (central processing unit) that executes processing according to the programs stored in the ROM and elsewhere. The image recognition module 2 may reuse the hardware configuration of a commercially available Tracking Vision system (manufactured by Fujitsu Limited).

The first digital camera 3A and the second digital camera 3B are calibrated so that the three-dimensional position of a subject can be calculated, and are fixed so that their relative positions do not shift. The command input computer 6 has a user application storage area 8 that holds a program for operating the image recognition module 2.
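For reference, the following is a minimal Python sketch of how a calibrated stereo pair yields a three-dimensional position. The 3x4 projection matrices P_a and P_b and the helper name triangulate are assumptions for illustration; the specification itself does not disclose a concrete triangulation procedure.

import numpy as np

def triangulate(P_a, P_b, xy_a, xy_b):
    """Linear (DLT) triangulation of one feature point seen by two
    calibrated cameras. P_a, P_b are the 3x4 projection matrices obtained
    from calibration; xy_a, xy_b are the pixel coordinates of the same
    point in the two views. Returns the 3D point in the calibration frame."""
    x_a, y_a = xy_a
    x_b, y_b = xy_b
    # Each view contributes two linear constraints A @ X = 0 on the
    # homogeneous point X = (X, Y, Z, 1).
    A = np.array([x_a * P_a[2] - P_a[0],
                  y_a * P_a[2] - P_a[1],
                  x_b * P_b[2] - P_b[0],
                  y_b * P_b[2] - P_b[1]])
    _, _, Vt = np.linalg.svd(A)   # least-squares solution: right singular
    X = Vt[-1]                    # vector of the smallest singular value
    return X[:3] / X[3]           # dehomogenize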

The gesture determination processing computer 7 likewise comprises a ROM, a RAM, and a CPU. It stores the gesture discrimination means 13 portion (described later) of the gesture motion recognition program JP, together with a correspondence table between gestures and the intention indications of the user (subject).

The gesture motion recognition program JP comprises, as functional means (program modules): an image recognition program IP having an imaging means 10 that captures images of the same user with the first digital camera 3A and the second digital camera 3B, a detection means 11 that compares the image information of the two cameras at the same time to distinguish the user area (subject area) UA from the background area BA, and a specifying means 12 that further detects the user's head area A1 and hand area A2 within the identified user area UA and calculates three-dimensional coordinates of the feature points P1 and P2 of the areas A1 and A2; and a gesture discrimination program RP having a gesture discrimination means 13 that tracks the identified feature points P1 and P2 to discriminate the user's gesture.

The detection means 11 computes squared errors and thereby performs processing that treats regions where the difference between the respective pixel values of the first digital camera 3A and the second digital camera 3B is relatively large as the user area UA, and regions where the difference is relatively small as the background area BA.
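As a rough illustration of this step, the following sketch classifies pixels by the squared error between the two simultaneous frames; the threshold value and the function name segment_user_area are assumptions for illustration, since the specification gives no concrete figures.

import numpy as np

def segment_user_area(img_a, img_b, threshold=900.0):
    """Split simultaneous grayscale frames (H x W arrays) from the two
    fixed cameras into user area UA and background area BA. A nearby
    subject is displaced between the views (large parallax), so the
    per-pixel squared error between the frames is large there; the distant
    background lines up almost exactly, so its squared error stays small.
    Returns a boolean mask that is True inside the user area."""
    diff = img_a.astype(np.float64) - img_b.astype(np.float64)
    return diff ** 2 > threshold  # True -> user area UA, False -> BA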

The specifying means 12 performs processing that identifies the top of the user area UA as the head area A1 and a region more sharply pointed than the top as the hand area A2. The gesture discrimination means 13 interprets the three-dimensional behavior obtained by tracking the identified feature points as a gesture, and then recognizes the user's intention on the basis of a predetermined correspondence table between gestures and the user's intention indications.
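A minimal sketch of one way to pick the two feature points from a binary user-area mask follows; the specification does not state how the "more sharply pointed" region is measured, so the distance-based stand-in and the name locate_head_and_hand below are purely illustrative.

import numpy as np

def locate_head_and_hand(user_mask):
    """Pick the head point P1 and hand point P2 from a boolean H x W mask
    that is True inside the user area UA. The head is taken at the top of
    the area (centroid of its highest pixel row); as a crude stand-in for
    the 'more sharply pointed' region, the hand is taken as the user pixel
    farthest from the body centroid that is not near the head."""
    ys, xs = np.nonzero(user_mask)
    if ys.size == 0:
        return None, None
    top = ys.min()
    p1 = (float(xs[ys == top].mean()), float(top))      # head point P1
    cy, cx = ys.mean(), xs.mean()
    d_body = (ys - cy) ** 2 + (xs - cx) ** 2            # from body centroid
    d_head = (xs - p1[0]) ** 2 + (ys - p1[1]) ** 2      # from head point
    away_from_head = d_head > 0.25 * d_head.max()       # exclude head pixels
    i = np.argmax(np.where(away_from_head, d_body, -1.0))
    p2 = (float(xs[i]), float(ys[i]))                   # hand point P2
    return p1, p2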

Next, the image recognition method and the gesture motion recognition method according to this embodiment will be described together with the processing flow of the gesture motion recognition program JP.

Processing under this image recognition method, gesture motion recognition method, and program takes place across the image recognition module 2 for three-dimensional information calculation, the first digital camera 3A and second digital camera 3B, the command input computer 6, and the gesture determination processing computer 7.
The method comprises: an imaging step (ST1) of capturing images of the same user with the first digital camera 3A and the second digital camera 3B; a detection step (ST2) of comparing the image information of the two cameras at the same time to distinguish the user area UA from the background area BA; a specifying step (ST3) of further detecting the user's head area A1 and hand area A2 within the identified user area UA and calculating three-dimensional coordinates of the feature points P1 and P2 of the areas A1 and A2; and a gesture discrimination step (ST4) of tracking the identified feature points P1 and P2 to discriminate the subject's gesture. These processes are executed as multitasking in parallel with other processing.
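Composed into one pass of a loop, the four steps might look like the sketch below. The capture objects cam_a and cam_b and the helpers segment_user_area, locate_head_and_hand, triangulate, and classify_gesture are illustrative assumptions (sketched above and after the description of step ST4), not names taken from the patent.

def process_frame(cam_a, cam_b, P_a, P_b, track, table):
    """One pass of the recognition loop: ST1 capture, ST2 segment, ST3
    locate and triangulate the two feature points, ST4 discriminate the
    gesture from the track so far. A sketch under the assumption that the
    head/hand points are found independently in each view so the stereo
    pair can be triangulated."""
    img_a, img_b = cam_a.read(), cam_b.read()    # ST1: same instant, both cameras
    mask_a = segment_user_area(img_a, img_b)     # ST2: user area UA vs background BA
    mask_b = segment_user_area(img_b, img_a)
    h_a, t_a = locate_head_and_hand(mask_a)      # ST3: points in each view...
    h_b, t_b = locate_head_and_hand(mask_b)
    p1 = triangulate(P_a, P_b, h_a, h_b)         # ...then 3D head point P1
    p2 = triangulate(P_a, P_b, t_a, t_b)         # and 3D hand point P2
    track.append((p1, p2))                       # only these two points are tracked
    return classify_gesture(track, table)        # ST4: table-based discrimination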

In the imaging step (ST1), a predetermined command is first entered into the command input computer 6, whereupon captured image data 3a and 3b are acquired from the first digital camera 3A and the second digital camera 3B.

When the captured data 3a and 3b obtained from the two digital cameras 3A and 3B at the same instant are compared, objects close to the cameras show large parallax, while objects far from them show small parallax. The detection step (ST2) therefore computes squared errors and extracts the user area UA from the background area BA.

In the specifying step (ST3), within the skin-colored portions of the user area UA, the feature point P1 is calculated from the head area A1 at the top and the feature point P2 is calculated from the hand area A2, the region more sharply pointed than the top; image analysis then tracks only these two points.

In the gesture discrimination step (ST4), the movements of the obtained feature points P1 and P2 are matched against gestures on the basis of the correspondence table between gestures and the user's intention indications.
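The contents of the correspondence table are not spelled out in the specification; the sketch below therefore assumes a simple dictionary keyed by coarse hand-motion labels, and classify_gesture with its example entries and threshold is invented purely for illustration.

import numpy as np

# Hypothetical correspondence table between gestures and intention indications.
GESTURE_TABLE = {"raise_hand": "select device",
                 "point":      "operate selected device"}

def classify_gesture(track, table=GESTURE_TABLE, reach=0.3):
    """Map the recent track of (P1, P2) 3D feature points to an intention
    indication via the table. Coarse stand-in logic: a hand held above the
    head (y axis up assumed) reads as 'raise_hand'; a hand clearly extended
    away from the head reads as 'point'; reach is in calibration units."""
    if not track:
        return None
    p1, p2 = (np.asarray(p, dtype=float) for p in track[-1])
    if p2[1] > p1[1]:                        # hand above head
        return table["raise_hand"]
    if np.linalg.norm(p2 - p1) > reach:      # hand extended from head
        return table["point"]
    return None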

According to this image recognition module 2 for three-dimensional information calculation, image recognition method, and image recognition program IP, the detection means 11 and the specifying means 12 make it possible to obtain three-dimensional information about the user's head and hands without, as in the prior art, having the user wear a hat of a predetermined color and gloves of a predetermined color to mark them with color information that differs from the background. Identification accuracy for gestures is therefore improved, and processing can be performed in a short time.

In addition, using the parallax between the first digital camera 3A and the second digital camera 3B, the detection means 11 can treat regions where the difference between the respective pixel values is relatively large (large parallax) as the user area UA, and regions where the difference is relatively small (small parallax) as the background area BA.

Furthermore, the specifying means 12 can determine the direction in which the hand is pointing from the movements of the feature points P1 and P2 calculated in three-dimensional coordinates for the head area A1 and the hand area A2 within the user area UA.
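One simple reading, assumed here for illustration since the patent does not give a formula, is to take the pointing direction as the unit vector from the head point P1 toward the hand point P2:

import numpy as np

def pointing_direction(p1, p2):
    """Unit vector from the 3D head feature point P1 toward the 3D hand
    feature point P2, taken as the direction the hand is pointing (an
    assumed reading of the specification, which only says the direction
    is found from the movement of the two feature points)."""
    v = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v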

The gesture motion recognition system 1 also includes the gesture determination processing computer 7 in addition to the image recognition module 2, and comprises the gesture discrimination means 13, which tracks the identified feature points P1 and P2 to discriminate the user's gesture. Accordingly, by defining the relationship between the user's intentions and gestures in advance, the user's gesture can be identified from the hand movement captured by the cameras 3A and 3B, and the user's intention can then be read from that gesture.

The technical scope of the present invention is not limited to the above embodiment, and various modifications can be made without departing from the spirit of the present invention.
For example, although two cameras are used in the above embodiment, three or more cameras may be used.

As shown in FIG. 4, the gesture motion recognition system 1 may also be mounted on a moving body 15 such as a robot, in which case it becomes possible to communicate with the robot through gestures.

Furthermore, as shown in FIG. 5, the gesture motion recognition system 1 may be applied to operating devices in a room in which a light L with adjustable brightness, a window W that can be opened and closed, a television TV whose channel and volume can be changed, and a video player V capable of play, stop, rewind, and fast forward are connected to an operation control device C. In this case, as shown in FIG. 6, a correspondence table between the user's intention indications and the gestures for selecting a device and for operating the selected device is prepared in advance, so that each device can be operated automatically through the user's gestures.

Whereas a conventional image processing technique required 200 ms for image processing of the positions of a person's head and hands, using the image recognition module 2 and the other elements of the present approach reduced the processing time to 50 ms. FIG. 7 shows the image processing results for the head and hand positions when the person is viewed from above. As FIG. 7 shows, the accuracy comparison also favors the present approach: the average squared error at points 1 to 3 was 13.4 cm, versus 16.3 cm for the conventional image processing technique.

FIG. 1 is an overall functional block diagram showing a gesture motion recognition system according to an embodiment of the present invention.
FIG. 2 is a flowchart showing a gesture motion recognition method according to the embodiment of the present invention.
FIG. 3 is an explanatory diagram showing the flow of gesture motion recognition processing according to the embodiment of the present invention.
FIG. 4 is a diagram showing an application example in which the gesture motion recognition system according to the embodiment is mounted on a moving body.
FIG. 5 is an overall schematic diagram showing another application example of the gesture motion recognition system according to the embodiment.
FIG. 6 is a diagram showing the gesture correspondence table in the application example shown in FIG. 5.
FIG. 7 is a graph comparing the processing results of the gesture motion recognition system according to the embodiment with those of a conventional example.

Explanation of Symbols

1 Gesture motion recognition system
2 Image recognition module for three-dimensional information calculation (image recognition device)
3A First digital camera (first camera)
3B Second digital camera (second camera)
10 Imaging means
11 Detection means
12 Specifying means
13 Gesture discrimination means
A1 Head area
A2 Hand area
BA Background area
IP Image recognition program
JP Gesture motion recognition program
UA User area (subject area)
P1, P2 Feature points

Claims (12)

1. An image recognition device connected to a first camera and a second camera arranged at different positions, comprising:
an imaging means that captures images of the same subject with the first camera and the second camera;
a detection means that compares image information of the first camera and the second camera at the same time to distinguish a subject area from a background area; and
a specifying means that further detects a head area and a hand area of the subject within the identified subject area and calculates three-dimensional coordinates of a feature point of each of the areas.

2. The image recognition device according to claim 1, wherein the detection means treats a region where the difference between the respective pixel values of the first camera and the second camera is relatively large as the subject area, and a region where the difference between the pixel values is relatively small as the background area.

3. The image recognition device according to claim 1 or 2, wherein the specifying means identifies the top of the subject area as the head area and a region more sharply pointed than the top as the hand area.

4. A gesture motion recognition system comprising:
the image recognition device according to any one of claims 1 to 3; and
a gesture discrimination means that tracks the identified feature points to discriminate a gesture of the subject.

5. An image recognition method comprising:
an imaging step of capturing images of the same subject with a first camera and a second camera arranged at different positions;
a detection step of comparing image information of the first camera and the second camera at the same time to distinguish a subject area from a background area; and
a specifying step of further detecting a head area and a hand area of the subject within the identified subject area and calculating three-dimensional coordinates of a feature point of each of the areas.

6. The image recognition method according to claim 5, wherein the detection step treats a region where the difference between the respective pixel values of the first camera and the second camera is relatively large as the subject area, and a region where the difference between the pixel values is relatively small as the background area.

7. The image recognition method according to claim 5 or 6, wherein the specifying step identifies the top of the subject area as the head area and a region more sharply pointed than the top as the hand area.

8. A gesture motion recognition method comprising:
the image recognition method according to any one of claims 5 to 7; and
a gesture discrimination step of tracking the identified feature points to discriminate a gesture of the subject.

9. An image recognition program that causes a computer connected to a first camera and a second camera arranged at different positions to function as:
an imaging means that captures images of the same subject with the first camera and the second camera;
a detection means that compares image information of the first camera and the second camera at the same time to distinguish a subject area from a background area; and
a specifying means that further detects a head area and a hand area of the subject within the identified subject area and calculates three-dimensional coordinates of a feature point of each of the areas.

10. The image recognition program according to claim 9, wherein the detection means performs processing that treats a region where the difference between the respective pixel values of the first camera and the second camera is relatively large as the subject area, and a region where the difference between the pixel values is relatively small as the background area.

11. The image recognition program according to claim 9 or 10, wherein the specifying means performs processing that identifies the top of the subject area as the head area and a region more sharply pointed than the top as the hand area.

12. A gesture motion recognition program comprising:
the image recognition program according to any one of claims 9 to 11; and
a gesture discrimination means that tracks the identified feature points to discriminate a gesture of the subject.

JP2008055564A 2008-03-05 2008-03-05 Image recognition apparatus, image recognition method, image recognition program, gesture motion recognition system, gesture motion recognition method, and gesture motion recognition program Expired - Fee Related JP5174492B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008055564A JP5174492B2 (en) 2008-03-05 2008-03-05 Image recognition apparatus, image recognition method, image recognition program, gesture motion recognition system, gesture motion recognition method, and gesture motion recognition program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2008055564A JP5174492B2 (en) 2008-03-05 2008-03-05 Image recognition apparatus, image recognition method, image recognition program, gesture motion recognition system, gesture motion recognition method, and gesture motion recognition program

Publications (2)

Publication Number Publication Date
JP2009211563A true JP2009211563A (en) 2009-09-17
JP5174492B2 JP5174492B2 (en) 2013-04-03

Family

ID=41184622

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2008055564A Expired - Fee Related JP5174492B2 (en) 2008-03-05 2008-03-05 Image recognition apparatus, image recognition method, image recognition program, gesture motion recognition system, gesture motion recognition method, and gesture motion recognition program

Country Status (1)

Country Link
JP (1) JP5174492B2 (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05196425A (en) * 1992-01-21 1993-08-06 Ezel Inc Three-dimensional position detection method for human being
JP2000099741A (en) * 1998-09-25 2000-04-07 Atr Media Integration & Communications Res Lab Method for estimating personal three-dimensional posture by multi-eye image processing
JP2001005973A (en) * 1999-04-20 2001-01-12 Atr Media Integration & Communications Res Lab Method and device for estimating three-dimensional posture of person by color image
JP2004265222A (en) * 2003-03-03 2004-09-24 Nippon Telegr & Teleph Corp <Ntt> Interface method, system, and program
JP2004303014A (en) * 2003-03-31 2004-10-28 Honda Motor Co Ltd Gesture recognition device, its method and gesture recognition program
JP2007235924A (en) * 2006-02-03 2007-09-13 Olympus Imaging Corp Camera

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012003364A (en) * 2010-06-15 2012-01-05 Nippon Hoso Kyokai <Nhk> Person movement determination device and program for the same
WO2012058782A1 (en) * 2010-11-01 2012-05-10 Technicolor(China) Technology Co., Ltd Method and device for detecting gesture inputs
US9189071B2 (en) 2010-11-01 2015-11-17 Thomson Licensing Method and device for detecting gesture inputs
JP2012133666A (en) * 2010-12-22 2012-07-12 Sogo Keibi Hosho Co Ltd Portion recognition device, portion recognition method and portion recognition program
JP2012133665A (en) * 2010-12-22 2012-07-12 Sogo Keibi Hosho Co Ltd Held object recognition device, held object recognition method and held object recognition program
CN103135883A (en) * 2011-12-02 2013-06-05 深圳泰山在线科技有限公司 Method and system for control of window
CN103135883B (en) * 2011-12-02 2016-07-06 深圳泰山在线科技有限公司 Control the method and system of window
CN103713738A (en) * 2013-12-17 2014-04-09 武汉拓宝电子***有限公司 Man-machine interaction method based on visual tracking and gesture recognition
CN103713738B (en) * 2013-12-17 2016-06-29 武汉拓宝科技股份有限公司 A kind of view-based access control model follows the tracks of the man-machine interaction method with gesture identification
CN104050859A (en) * 2014-05-08 2014-09-17 南京大学 Interactive digital stereoscopic sand table system

Also Published As

Publication number Publication date
JP5174492B2 (en) 2013-04-03

Similar Documents

Publication Publication Date Title
Lee et al. Handy AR: Markerless inspection of augmented reality objects using fingertip tracking
JP5174492B2 (en) Image recognition apparatus, image recognition method, image recognition program, gesture motion recognition system, gesture motion recognition method, and gesture motion recognition program
US10445887B2 (en) Tracking processing device and tracking processing system provided with same, and tracking processing method
EP2344983B1 (en) Method, apparatus and computer program product for providing adaptive gesture analysis
JP6007682B2 (en) Image processing apparatus, image processing method, and program
US9256324B2 (en) Interactive operation method of electronic apparatus
JP6259545B2 (en) System and method for inputting a gesture in a 3D scene
US8897490B2 (en) Vision-based user interface and related method
JP5205187B2 (en) Input system and input method
US20130249786A1 (en) Gesture-based control system
US20130279756A1 (en) Computer vision based hand identification
KR101551576B1 (en) Robot cleaner, apparatus and method for recognizing gesture
US9836130B2 (en) Operation input device, operation input method, and program
TW201123031A (en) Robot and method for recognizing human faces and gestures thereof
JPWO2018154709A1 (en) Motion learning device, skill discrimination device and skill discrimination system
JP2012238293A (en) Input device
KR20140019950A (en) Method for generating 3d coordinate using finger image from mono camera in terminal and mobile terminal for generating 3d coordinate using finger image from mono camera
KR101396488B1 (en) Apparatus for signal input and method thereof
CN112109069A (en) Robot teaching device and robot system
US10074188B2 (en) Method and apparatus for processing images for use with a three-dimensional hand model database
Kopinski et al. A time-of-flight-based hand posture database for human-machine interaction
Siam et al. Human computer interaction using marker based hand gesture recognition
Hannuksela et al. Face tracking for spatially aware mobile user interfaces
CN113192127A (en) Image processing method and device, electronic equipment and storage medium
JP2832333B2 (en) Object shape / posture detection device

Legal Events

Date Code Title Description
A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20080310

A711 Notification of change in applicant

Free format text: JAPANESE INTERMEDIATE CODE: A711

Effective date: 20101213

A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20101221

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A821

Effective date: 20101213

RD04 Notification of resignation of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7424

Effective date: 20120106

RD02 Notification of acceptance of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7422

Effective date: 20120215

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20120222

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20120228

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20120424

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20121225

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20121228

R150 Certificate of patent or registration of utility model

Ref document number: 5174492

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

S531 Written request for registration of change of domicile

Free format text: JAPANESE INTERMEDIATE CODE: R313531

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

S533 Written request for registration of change of name

Free format text: JAPANESE INTERMEDIATE CODE: R313533

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

LAPS Cancellation because of no payment of annual fees