JP6703812B2 - 3D object inspection device


Info

Publication number
JP6703812B2
Authority
JP
Japan
Prior art keywords
dimensional
recognition
recognition target
orientation
dimensional model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2015154781A
Other languages
Japanese (ja)
Other versions
JP2017033429A (en)
Inventor
徐 剛
仲道 朋弘
中山 貴央
Current Assignee
Kyoto Robotics Corp
Original Assignee
Kyoto Robotics Corp
Priority date
Filing date
Publication date
Application filed by Kyoto Robotics Corp filed Critical Kyoto Robotics Corp
Priority to JP2015154781A
Publication of JP2017033429A
Application granted
Publication of JP6703812B2
Legal status: Active

Landscapes

  • Manipulator (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Description

The present invention relates to a three-dimensional object inspection device for inspecting differences between a recognition target and a three-dimensional model of that target.

In recent years, three-dimensional object recognition devices have been developed that individually recognize randomly piled parts and determine the position and orientation of each part, so that a robot hand or the like on a production line can operate accurately on recognition targets such as parts.

Conventionally, such a three-dimensional object recognition device recognizes the position and orientation of a recognition target by, for example, extracting features such as edges (contours) of the target from an image of the three-dimensionally shaped target captured by a camera from a predetermined direction, computing for each pixel of the captured image the distance to the nearest edge, and projecting a three-dimensional model representing the contour shape of the target onto the captured image for matching (see, for example, Patent Document 1).

Patent Document 1: JP 2010-205095 A

On the other hand, users want to determine whether a recognition target matches its three-dimensional model, so that targets deviating from the intended shape, pattern, or the like can be identified. However, while conventional three-dimensional object recognition devices can recognize the position and orientation of a target, they do not inspect the difference between the target and the three-dimensional model; a target that deviates from its intended shape or pattern may therefore be overlooked.

The present invention has been made in view of the above problems, and its object is to provide a three-dimensional object inspection device capable of determining whether a recognition target deviates from its intended shape, pattern, or the like.

To achieve the above object, a three-dimensional object inspection device according to the present invention comprises: three-dimensional model storage means for storing a three-dimensional model representing the surface shape of a recognition target, or the surface shape of the recognition target together with either or both of its contour and texture; a robot that performs work on the recognition target; imaging means for acquiring, before and after the robot's work, a three-dimensional point cloud indicating the three-dimensional coordinates of points on the surface of the recognition target, or such a three-dimensional point cloud together with an image of the recognition target; three-dimensional recognition means that aligns the position and orientation of the recognition target and the three-dimensional model by performing three-dimensional recognition processing using the stored three-dimensional model and the image or three-dimensional point cloud acquired by the imaging means before the robot's work, or using the three-dimensional model and the image or three-dimensional point cloud acquired by the imaging means after the robot's work; robot control means that, based on the position and orientation of the recognition target obtained by the three-dimensional recognition means before the robot's work, operates the robot so that the recognition target assumes a predetermined position and orientation; and determination means that, using the result of the post-work alignment between the recognition target and the three-dimensional model, obtains the difference between the three-dimensional shape of the three-dimensional model and the three-dimensional shape of the recognition target in the predetermined position and orientation as acquired by the imaging means, and/or the difference between the texture on the three-dimensional model and the texture in the image of the recognition target in the predetermined position and orientation, and determines whether that difference is equal to or greater than a predetermined criterion.

Further, the three-dimensional object inspection device according to the present invention comprises output means that, when the determination means judges that the difference between the three-dimensional shape of the three-dimensional model and the three-dimensional shape of the recognition target in the predetermined position and orientation acquired by the imaging means, and/or the difference between the texture on the three-dimensional model and the texture in the image of the recognition target in the predetermined position and orientation, is equal to or greater than the predetermined criterion, outputs that difference.

Further, in the three-dimensional object inspection device according to the present invention, the three-dimensional recognition means obtains, by a full search, an initial value of the position and orientation between the three-dimensional model stored in the three-dimensional model storage means and the three-dimensional point cloud of the recognition target acquired by the imaging means, and uses that initial value to optimize the position and orientation of the recognition target with a robust estimation method.

According to the three-dimensional object inspection device of the present invention, the three-dimensional recognition means performs three-dimensional recognition processing using a three-dimensional model representing the surface shape of the recognition target, or its surface shape together with either or both of its contour and texture, and the three-dimensional point cloud of the target, or its point cloud and image, acquired by the imaging means, thereby aligning the position and orientation of the target and the model. The determination means then obtains the difference between the three-dimensional shape of the model and that of the target, and/or the difference between textures such as characters or patterns on the model and in the image of the target, and determines whether the difference is equal to or greater than a predetermined criterion. It is therefore possible to determine whether the recognition target deviates from its intended shape, pattern, or the like.

Further, according to the three-dimensional object inspection device of the present invention, the robot control means can operate the robot so that the recognition target assumes a predetermined position and orientation. Even when a texture such as characters or a pattern on the target is hidden on the side facing away from the imaging means, the position and orientation of the target can therefore be changed so that the imaging means can capture an image of the texture. As a result, even when recognition targets are placed in random states, it is possible to appropriately determine whether each target deviates from its intended shape, pattern, or the like.

Further, according to the three-dimensional object inspection device of the present invention, when the determination means judges that the difference between the three-dimensional shape of the model and the three-dimensional shape of the target acquired by the imaging means, and/or the difference between the texture on the model and the texture in the image of the target, is equal to or greater than the predetermined criterion, the output means outputs that difference, so the user can easily see how the target differs from its intended shape, pattern, or the like.

Further, according to the three-dimensional object inspection device of the present invention, the three-dimensional recognition means obtains by full search an initial value of the position and orientation between the three-dimensional model and the point cloud of the target acquired by the imaging means, and uses that initial value to optimize the target's position and orientation with a robust estimation method. Even when the shape of the target differs from that of the model, this prevents the shape difference from causing misalignment between the target and the model, so that deviation from the intended shape or pattern can be detected more reliably.

FIG. 1 is a schematic diagram showing an example of a three-dimensional object inspection device according to an embodiment of the present invention.
FIG. 2 is a flowchart showing an example of the processing flow of the three-dimensional object inspection device according to the embodiment of the present invention.

Hereinafter, a three-dimensional object inspection device 1 according to an embodiment of the present invention will be described with reference to the drawings. As shown in FIG. 1, the three-dimensional object inspection device 1 inspects whether a work (recognition target) 3 having a three-dimensional shape and placed on a workbench 2 has its intended shape and texture 31. It comprises imaging devices 4 (4a, 4b) for acquiring images of the work 3 and for measuring a three-dimensional point cloud indicating the three-dimensional coordinates of points on the surface of the work 3, a robot 5 that performs work on the work 3, and a computer 6 that controls the operation of the robot based on the images or three-dimensional point cloud data obtained by the imaging devices 4.

As shown in FIG. 1, the work 3 is formed, for example, in a substantially rectangular parallelepiped shape and has a texture 31 such as a pattern or characters. The shape of the work 3 is not particularly limited, and the work 3 need not have a texture 31.

As the imaging device 4, a conventionally known three-dimensional sensor or the like having both a function of acquiring an image of the work 3 and a function of measuring a three-dimensional point cloud indicating the three-dimensional coordinates of points on its surface can be used. Alternatively, the imaging device 4 may comprise projection means (not shown) that projects pattern light onto the work 3 and a stereo camera consisting of a base camera and a reference camera placed at different positions: corresponding pixels are identified between the images captured by the stereo camera, and by applying the principle of triangulation to the positional difference (disparity) between corresponding pixels in the base and reference images, the distance from the base camera to the point on the measured object corresponding to each pixel is computed, yielding the three-dimensional point cloud of the work 3. The number of imaging devices 4 is not particularly limited as long as images of the work 3 can be acquired and the point cloud can be measured; one device or two or more devices may be used. The image-acquisition function and the three-dimensional measurement function may also be provided separately.
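The disparity-to-depth step of the stereo arrangement described above can be sketched as follows. This is a minimal illustration, assuming an already rectified pair so that depth follows Z = f·B/d; the function name and interface are illustrative, not from the patent.

```python
import numpy as np

def disparity_to_points(disparity, f, baseline, cx, cy):
    """Turn a disparity map (pixels) from a rectified stereo pair into a
    3-D point cloud in the base-camera frame.  Depth is Z = f*B/d, and the
    pinhole model back-projects each pixel: X = (u-cx)*Z/f, Y = (v-cy)*Z/f."""
    v, u = np.indices(disparity.shape)
    valid = disparity > 0                      # keep pixels with a stereo match
    z = f * baseline / disparity[valid]
    x = (u[valid] - cx) * z / f
    y = (v[valid] - cy) * z / f
    return np.stack([x, y, z], axis=1)         # (N, 3) point cloud
```

In practice the disparity map would come from stereo matching on the pattern-lit images; here it is simply taken as given.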

The robot 5 performs work on the work 3 and, as shown in FIG. 1, is placed near the workbench 2 on which the work 3 rests so that it can reach the work 3. The robot 5 comprises, for example, a base 51 fixed to an installation surface such as a floor or wall of the production system, a robot arm 52 whose proximal end is rotatably connected to the base 51, and a robot hand 53 attached to the tip of the robot arm 52 for gripping the work 3. Although not shown in detail, each rotary axis of the robot 5 is provided with a servomotor and an encoder that detects its rotational position, and each servomotor operates according to control signals sent from a robot controller (robot control means) 7.

As shown in FIG. 1, the computer 6 comprises: an image memory 8 that stores the image data and three-dimensional point cloud data obtained by the imaging device 4; a hard disk 9 that stores a processing program for recognizing the work 3 and processing programs for obtaining the difference between the three-dimensional shape of the three-dimensional model (described later) and the three-dimensional shape of the recognition target acquired by the imaging device 4, and/or the difference between the texture on the model and the texture in the image of the target; a RAM (Random Access Memory) 10 that temporarily holds the processing program read from the hard disk 9; a CPU (Central Processing Unit) 11 that performs three-dimensional recognition processing according to the processing program; an output unit 12, such as a liquid crystal display, for outputting the image data stored in the image memory 8 and the results obtained by the CPU 11; an operation unit 13 such as a mouse and keyboard; and a system bus 14 connecting these units to one another. Although the present embodiment stores the processing program for recognizing the work 3 on the hard disk 9, the program may instead be stored on a computer-readable storage medium (not shown) and read from that medium. The output unit 12 is not limited to a display; any unit capable of outputting the image data and the results obtained by the CPU 11 may be used, such as printing means like a printer or file output means that saves results to a file.

The flow of processing in the three-dimensional object inspection device 1 is described below with reference to the flowchart of FIG. 2. In the three-dimensional object inspection device 1 according to this embodiment, as shown in FIG. 2, a three-dimensional model for recognizing the work 3 is first created offline and stored in the three-dimensional model storage means 15 (S101). The three-dimensional model contains the three-dimensional shape information of the work 3 and texture models, for each orientation, of texture images such as patterns and characters on the work 3; it holds three-dimensional surface-point data representing the surface shape of the work 3, three-dimensional contour-point data representing its contour shape, texture data representing its texture, and so on, and is created using three-dimensional CAD or the like. Here, for example, three-dimensional models are created offline in advance, using three-dimensional CAD or the like, for every orientation (three degrees of freedom) over the whole range possible from the viewpoint of the imaging device 4, and stored in the three-dimensional model storage means 15. The method of creating the three-dimensional model is not particularly limited as long as it includes the three-dimensional shape information and texture information of the work 3; any conventionally known method can be used. Depending on the type of inspection, the three-dimensional model need only represent the surface shape and contour of the work 3, its surface shape and texture, its surface shape, contour, and texture, or its surface shape alone.
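The exhaustive offline enumeration of model orientations over three rotational degrees of freedom might look like the sketch below. The grid step and the Euler-angle parameterisation are illustrative assumptions of mine, not values given in the patent.

```python
import itertools
import numpy as np

def euler_to_matrix(rx, ry, rz):
    """Rotation matrix from Z-Y-X Euler angles (radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def sample_orientations(step_deg=30):
    """Enumerate candidate model orientations (3 rotational DOF) on a
    regular angular grid, as one might do when building the offline
    database of model views for every possible pose."""
    angles = np.deg2rad(np.arange(0, 360, step_deg))
    return [euler_to_matrix(a, b, c)
            for a, b, c in itertools.product(angles, repeat=3)]
```

A finer step gives better initial values for the later full search at the cost of a larger database; the patent leaves this trade-off to the implementation.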

Next, the imaging device 4 measures the three-dimensional point cloud indicating the three-dimensional coordinates of points on the surface of the work 3 and acquires an image of the work 3 (S102). The three-dimensional recognition means 16 then obtains the position and orientation of the work 3 from the acquired image or point cloud (S103). As methods for obtaining the position and orientation of the work 3 from the image or point cloud obtained by the imaging device 4, the position and orientation evaluation methods described in, for example, JP 2010-205095 A, JP 2011-129082 A, JP 2012-026974 A, and JP 2014-178967 A can suitably be used. Alternatively, the position and orientation of the work 3 may be recognized using other conventionally known methods, such as position and orientation evaluation using contours in an image or using a three-dimensional point cloud.

The robot controller 7 then generates control signals for the robot 5, based on the position and orientation of the work 3 obtained by the three-dimensional recognition means 16, so as to move the work 3 into a predetermined position and orientation, and sends them to the robot 5. Here, as the predetermined position and orientation, the robot 5 changes the position and orientation of the work 3 so that, for example, the imaging device 4 can acquire an image containing the texture 31 of the work 3 (S104). The robot 5 may bring the work 3 into the predetermined position and orientation by, for example, moving the work 3 while the robot hand 53 keeps gripping it, or by placing the work 3 back on the workbench 2 after changing it to the predetermined position and orientation. Although this embodiment uses a robot 5 with a single robot arm 52, a dual-arm robot may be used, for example, to re-grip the work 3 into the predetermined position and orientation.

Next, in the three-dimensional object inspection device 1, the imaging device 4 measures the three-dimensional point cloud of the surface of the work 3, now in the predetermined position and orientation as a result of S104, and acquires an image of the work 3 (S105). The three-dimensional recognition means 16 then performs three-dimensional recognition processing using the three-dimensional model stored in the three-dimensional model storage means 15 and the image or point cloud of the work 3 acquired by the imaging device 4, thereby aligning the position and orientation of the work 3 and the three-dimensional model (S106). Here, to prevent a shape difference from causing misalignment between the work 3 and the model even when the shape of the work 3 differs from that of the model, an initial value of the position and orientation between the three-dimensional model and the point cloud of the work 3 acquired by the imaging device 4 is obtained by full search, and that initial value is used for optimization with a robust estimation method. That is, to prevent outliers in the point cloud data measured by the imaging device 4 from producing an incorrect alignment, data with a large absolute error are given a small weight (possibly zero) and data with a small error are given a large weight. Such weights are given, for example, by Tukey's function. The weighting function is not limited to Tukey's; any other conventionally known function that gives small weights to data with large errors and large weights to data with small errors, such as Huber's function, may be used.
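The Tukey-weighted robust estimation above can be sketched as iteratively reweighted least squares. To keep the example short, a one-dimensional offset is estimated instead of a full 6-DOF pose, and a median stands in for the full-search initial value; both simplifications are mine, not the patent's.

```python
import numpy as np

def tukey_weights(residuals, c=4.685):
    """Tukey's biweight: w = (1 - (r/c)^2)^2 for |r| < c, else 0.
    Data with large errors get zero weight; data with small errors, ~1."""
    r = np.abs(residuals)
    w = np.zeros_like(r, dtype=float)
    inliers = r < c
    w[inliers] = (1.0 - (r[inliers] / c) ** 2) ** 2
    return w

def robust_offset(src, dst, iters=10, c=4.685):
    """IRLS estimate of the offset t aligning src to dst, robust to outliers."""
    t = np.median(dst - src)                   # crude initial value
    for _ in range(iters):
        w = tukey_weights(dst - (src + t), c)  # reweight by current residuals
        if w.sum() == 0:
            break
        t = np.sum(w * (dst - src)) / w.sum()  # weighted least-squares update
    return t
```

Swapping `tukey_weights` for a Huber-style function changes only how quickly large residuals are discounted, mirroring the patent's remark that other weighting functions may be used.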

The determination means 17 then uses the result of the alignment between the work 3 and the three-dimensional model by the three-dimensional recognition means 16 to obtain the difference between the three-dimensional shape of the model and the three-dimensional shape of the work 3 measured by the imaging device 4, and the difference between the texture on the model and the texture in the image of the work 3, and determines whether each difference is equal to or greater than a predetermined criterion (S107). In other words, when there is a difference equal to or greater than the predetermined criterion, the determination means 17 judges that the work 3 differs from the shape or texture it should originally have. The difference between the texture on the model and the texture in the image of the work 3 includes, for example, differences due to displacement, chipping, typographical errors, printing errors, or wrong colors in the texture 31. The predetermined criterion for judging deviation from the intended shape or texture is set in advance according to, for example, the type and application of the work 3; its range is not particularly limited.
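The shape part of the judgment in S107 can be sketched as a nearest-neighbour distance test between the aligned model points and the measured cloud. The distance and ratio thresholds below are illustrative stand-ins for the patent's "predetermined criterion".

```python
import numpy as np

def shape_difference(model_pts, measured_pts, dist_thresh=1.0, ratio_thresh=0.05):
    """For each aligned model point, find the distance to the nearest measured
    point; report the fraction of model points farther than dist_thresh, and
    whether that fraction reaches ratio_thresh (i.e. the work is judged to
    differ from its intended shape).  Brute force, fine for small clouds."""
    d = np.linalg.norm(model_pts[:, None, :] - measured_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)                        # per-model-point deviation
    defect_ratio = float(np.mean(nearest > dist_thresh))
    return defect_ratio, defect_ratio >= ratio_thresh
```

A texture check would follow the same pattern, comparing model texture and image texture patch by patch after alignment; it is omitted here since its metric is application-specific.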

そして、判定手段17によって、3次元モデルの3次元形状と撮像装置4によって計測されたワーク3の3次元形状との差、又は、3次元モデル上のテクスチャとワーク3の画像上のテクスチャとの差が所定の基準以上であると判定された場合には、その差を結果として出力部12に出力させる(S108)。この際の出力部12では、例えば、差が生じている箇所に異なる色等を付して出力する。これにより、ユーザは、ワーク3が本来の形状やテクスチャとどのような差があるかを容易に把握することができる。尚、本実施形態に係る3次元物体検査装置1では、S102〜S104の処理によってワーク3をロボット5によって所定の位置姿勢にした後、ワーク3の3次元形状及びテクスチャについて検査を行う例について説明したが、ワーク3が最初から所定の位置姿勢になっている場合には、S102〜S104の処理は省略するようにしても良い。また、テクスチャ31が含まれていないワーク3やテクスチャ31の検査の必要がないワーク3については、S107のテクスチャ検査については省略される。また、逆にワーク3の3次元形状検査が必要なく、テクスチャ検査のみが必要なものの場合には、S107の3次元形状検査を省略すれば良い。 Then, when the determination means 17 determines that the difference between the three-dimensional shape of the three-dimensional model and the three-dimensional shape of the workpiece 3 measured by the imaging device 4, or the difference between the texture on the three-dimensional model and the texture in the image of the workpiece 3, is equal to or greater than the predetermined criterion, that difference is output to the output unit 12 as the result (S108). At this point the output unit 12 outputs, for example, the locations where the difference occurs marked in a different color or the like. This allows the user to easily grasp how the workpiece 3 differs from its original shape and texture. In the three-dimensional object inspection device 1 according to the present embodiment, an example has been described in which the workpiece 3 is first placed in a predetermined position and orientation by the robot 5 through the processes of S102 to S104 and the three-dimensional shape and texture of the workpiece 3 are then inspected; however, when the workpiece 3 is in the predetermined position and orientation from the beginning, the processes of S102 to S104 may be omitted. The texture inspection of S107 is also omitted for a workpiece 3 that contains no texture 31 or whose texture 31 does not need to be inspected. Conversely, when the three-dimensional shape inspection of the workpiece 3 is unnecessary and only the texture inspection is required, the three-dimensional shape inspection of S107 may be omitted.
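The color-coded output of step S108 can be sketched as a simple per-point colorization; the function name, the gray/red color choice, and the tolerance are hypothetical, standing in for whatever rendering the output unit 12 actually uses.

```python
import numpy as np

def colorize_deviation(points, shape_err, tol_mm=0.5):
    """Return an (N, 3) RGB array: gray for in-tolerance points,
    red for points whose deviation reaches the criterion, so the
    user can see at a glance where the workpiece differs."""
    colors = np.full((points.shape[0], 3), 0.7)   # default: light gray
    ng = shape_err >= tol_mm                      # points over the criterion
    colors[ng] = (1.0, 0.0, 0.0)                  # flag them in red
    return colors
```

The colored point cloud (or an analogously tinted texture image) would then be rendered by the output unit, which is how "a different color or the like" attached to the differing locations might look in practice.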

尚、本発明の実施の形態は上述の形態に限るものではなく、本発明の思想の範囲を逸脱しない範囲で適宜変更することができる。 It should be noted that the embodiment of the present invention is not limited to the above-described embodiment, and can be appropriately modified without departing from the scope of the idea of the present invention.

1 3次元物体検査装置
3 ワーク(認識対象物)
4 撮像装置(撮像手段)
5 ロボット
7 ロボットコントローラ(ロボット制御手段)
12 出力部(出力手段)
15 3次元モデル記憶手段
16 3次元認識手段
17 判定手段
1 3D object inspection device
3 Work (recognition target)
4 Imaging device (imaging means)
5 Robot
7 Robot controller (robot control means)
12 Output unit (output means)
15 3D model storage means
16 3D recognition means
17 Judgment means

Claims (3)

認識対象物の表面形状、又は、認識対象物の表面形状と、輪郭及びテクスチャのいずれか又は両方とを表す3次元モデルを記憶する3次元モデル記憶手段と、
前記認識対象物に対して作業を行うロボットと、
前記ロボットの作業前後の前記認識対象物の表面の点の3次元座標を示す3次元点群を取得、又は、前記ロボットの作業前後の前記認識対象物の表面の点の3次元座標を示す3次元点群及び前記認識対象物の画像を取得するための撮像手段と、
前記3次元モデル記憶手段に記憶された前記3次元モデルと、前記撮像手段によって取得された前記ロボット作業前の前記認識対象物の前記画像又は前記3次元点群とを用いて3次元認識処理を行う、又は、前記3次元モデルと、前記撮像手段によって取得された前記ロボット作業後の前記認識対象物の前記画像又は前記3次元点群とを用いて3次元認識処理を行うことにより、前記認識対象物と前記3次元モデルの位置姿勢を合わせる3次元認識手段と、
前記ロボットの作業前に前記3次元認識手段によって求められる前記認識対象物の位置姿勢の結果に基づき、前記認識対象物が所定の位置姿勢となるように前記ロボットを動作させるロボット制御手段と、
前記3次元認識手段による前記ロボット作業後の前記認識対象物と前記3次元モデルの位置姿勢合わせの結果を用いて、前記3次元モデルの3次元形状と前記撮像手段によって取得された前記所定の位置姿勢にある前記認識対象物の3次元形状との差、及び/又は、前記3次元モデル上のテクスチャと前記所定の位置姿勢にある前記認識対象物の前記画像上のテクスチャとの差を求め、その差が所定の基準以上であるか否かを判定する判定手段と、を備えることを特徴とする3次元物体検査装置。
A three-dimensional object inspection device comprising: three-dimensional model storage means for storing a three-dimensional model representing the surface shape of a recognition target object, or the surface shape of the recognition target object together with either or both of its contour and texture;
a robot for performing work on the recognition target object;
imaging means for acquiring a three-dimensional point cloud indicating the three-dimensional coordinates of points on the surface of the recognition target object before and after the work of the robot, or for acquiring such a three-dimensional point cloud together with an image of the recognition target object;
three-dimensional recognition means for aligning the position and orientation of the recognition target object and the three-dimensional model by performing three-dimensional recognition processing using the three-dimensional model stored in the three-dimensional model storage means together with the image or the three-dimensional point cloud of the recognition target object acquired by the imaging means before the work of the robot, or together with the image or the three-dimensional point cloud of the recognition target object acquired by the imaging means after the work of the robot;
robot control means for operating the robot so that the recognition target object assumes a predetermined position and orientation, based on the position and orientation of the recognition target object obtained by the three-dimensional recognition means before the work of the robot; and
determination means for obtaining, using the result of the position-and-orientation alignment of the recognition target object and the three-dimensional model by the three-dimensional recognition means after the work of the robot, the difference between the three-dimensional shape of the three-dimensional model and the three-dimensional shape of the recognition target object in the predetermined position and orientation acquired by the imaging means, and/or the difference between the texture on the three-dimensional model and the texture in the image of the recognition target object in the predetermined position and orientation, and determining whether the difference is equal to or greater than a predetermined criterion.
前記判定手段によって、前記3次元モデルの3次元形状と前記撮像手段によって取得された前記所定の位置姿勢にある前記認識対象物の3次元形状との差、及び/又は、前記3次元モデル上のテクスチャと前記所定の位置姿勢にある前記認識対象物の前記画像上のテクスチャとの差が前記所定の基準以上であると判定された場合に、前記3次元モデルの3次元形状と前記撮像手段によって取得された前記所定の位置姿勢にある前記認識対象物の3次元形状との差、及び/又は、前記3次元モデル上のテクスチャと前記所定の位置姿勢にある前記認識対象物の前記画像上のテクスチャとの差を出力する出力手段を備えることを特徴とする請求項1記載の3次元物体検査装置。 The three-dimensional object inspection device according to claim 1, further comprising output means for outputting, when the determination means determines that the difference between the three-dimensional shape of the three-dimensional model and the three-dimensional shape of the recognition target object in the predetermined position and orientation acquired by the imaging means, and/or the difference between the texture on the three-dimensional model and the texture in the image of the recognition target object in the predetermined position and orientation, is equal to or greater than the predetermined criterion, that difference as the result.
前記3次元認識手段は、前記3次元モデル記憶手段に記憶された前記3次元モデルと前記撮像手段によって取得された前記認識対象物の前記3次元点群との位置姿勢の初期値を全探索で求め、前記初期値を用いて、前記認識対象物の位置姿勢のロバスト推定法を用いた最適化を行うことを特徴とする請求項1又は2に記載の3次元物体検査装置。 The three-dimensional object inspection device according to claim 1 or 2, wherein the three-dimensional recognition means obtains initial values of the position and orientation between the three-dimensional model stored in the three-dimensional model storage means and the three-dimensional point cloud of the recognition target object acquired by the imaging means by a full search, and, using the initial values, optimizes the position and orientation of the recognition target object by a robust estimation method.
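The full-search initialization plus robust optimization of claim 3 can be sketched as below. This is an illustrative sketch only, assuming a search over in-plane rotations and a Huber-style M-estimator; the function names, search grid, and threshold are not from the patent.

```python
import numpy as np

def residuals(model, cloud, R, t):
    """Nearest-neighbour distances of transformed model points to the cloud."""
    p = model @ R.T + t
    d = np.linalg.norm(p[:, None, :] - cloud[None, :, :], axis=2)
    return d.min(axis=1)

def full_search_init(model, cloud, angles=np.linspace(0, 2 * np.pi, 36)):
    """Brute-force ("full search") over candidate rotations: try every
    angle on a coarse grid and keep the pose with the lowest total
    residual as the initial value for the subsequent optimization."""
    best = (np.inf, None)
    for a in angles:
        c, s = np.cos(a), np.sin(a)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        t = cloud.mean(axis=0) - (model @ R.T).mean(axis=0)
        cost = residuals(model, cloud, R, t).sum()
        if cost < best[0]:
            best = (cost, (R, t))
    return best[1]

def huber_weights(r, k=1.0):
    """Robust weights (Huber M-estimator): quadratic core, linear tails,
    so large residuals, e.g. actual defect points, get down-weighted."""
    w = np.ones_like(r)
    big = r > k
    w[big] = k / r[big]
    return w
```

A refinement loop would alternate between computing `residuals`, re-weighting them with `huber_weights`, and solving a weighted rigid alignment. The point of the robust weighting in this context is that defective regions of the workpiece should not drag the pose estimate away from the intact geometry, so the subsequent shape comparison isolates the defects themselves.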
JP2015154781A 2015-08-05 2015-08-05 3D object inspection device Active JP6703812B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2015154781A JP6703812B2 (en) 2015-08-05 2015-08-05 3D object inspection device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2015154781A JP6703812B2 (en) 2015-08-05 2015-08-05 3D object inspection device

Publications (2)

Publication Number Publication Date
JP2017033429A JP2017033429A (en) 2017-02-09
JP6703812B2 true JP6703812B2 (en) 2020-06-03

Family

ID=57989324

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2015154781A Active JP6703812B2 (en) 2015-08-05 2015-08-05 3D object inspection device

Country Status (1)

Country Link
JP (1) JP6703812B2 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6358287B2 (en) 2015-06-22 2018-07-18 株式会社デンソー Inspection apparatus and inspection method
JP6258557B1 (en) 2017-04-04 2018-01-10 株式会社Mujin Control device, picking system, distribution system, program, control method, and production method
DE112017007397B4 (en) 2017-04-04 2021-09-30 Mujin, Inc. Control device, gripping system, distribution system, program, control method and manufacturing method
CN115385039A (en) 2017-04-04 2022-11-25 牧今科技 Control device, information processing device, control method, and information processing method
DE112017007398B4 (en) 2017-04-04 2021-11-18 Mujin, Inc. Control device, gripping system, distribution system, program and control method
JP6363294B1 (en) 2017-04-04 2018-07-25 株式会社Mujin Information processing apparatus, picking system, distribution system, program, and information processing method
JP6508691B1 (en) 2018-10-15 2019-05-08 株式会社Mujin Control device, work robot, program, and control method
JP6511681B1 (en) * 2018-10-15 2019-05-15 株式会社Mujin Shape information generation device, control device, unloading device, distribution system, program, and control method
WO2021014514A1 (en) * 2019-07-19 2021-01-28 株式会社島津製作所 Aircraft inspection assistance device and aircraft inspection assistance method
CN113666036A (en) * 2020-05-14 2021-11-19 泰科电子(上海)有限公司 Automatic unstacking system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3009205B2 (en) * 1990-11-02 2000-02-14 株式会社日立製作所 Inspection method and apparatus
JP3518596B2 (en) * 2000-10-02 2004-04-12 株式会社スキャンテクノロジー Soft bag comprehensive inspection system
JP2010184300A (en) * 2009-02-10 2010-08-26 Seiko Epson Corp Attitude changing device and attitude changing method
JP5716433B2 (en) * 2011-02-07 2015-05-13 株式会社Ihi Shape recognition device, shape recognition method, and program thereof
JP6371044B2 (en) * 2013-08-31 2018-08-08 国立大学法人豊橋技術科学大学 Surface defect inspection apparatus and surface defect inspection method

Also Published As

Publication number Publication date
JP2017033429A (en) 2017-02-09

Similar Documents

Publication Publication Date Title
JP6703812B2 (en) 3D object inspection device
US11511421B2 (en) Object recognition processing apparatus and method, and object picking apparatus and method
US9672630B2 (en) Contour line measurement apparatus and robot system
JP5897624B2 (en) Robot simulation device for simulating workpiece removal process
JP5469216B2 (en) A device for picking up bulk items by robot
JP6465789B2 (en) Program, apparatus and method for calculating internal parameters of depth camera
JP4004899B2 (en) Article position / orientation detection apparatus and article removal apparatus
JP6117901B1 (en) Position / orientation measuring apparatus for a plurality of articles and a robot system including the position / orientation measuring apparatus
JP6180087B2 (en) Information processing apparatus and information processing method
JP6331517B2 (en) Image processing apparatus, system, image processing method, and image processing program
JP5743499B2 (en) Image generating apparatus, image generating method, and program
JP7376268B2 (en) 3D data generation device and robot control system
KR20140008262A (en) Robot system, robot, robot control device, robot control method, and robot control program
JP2018144144A (en) Image processing device, image processing method and computer program
EP3229208B1 (en) Camera pose estimation
JP4766269B2 (en) Object detection method, object detection apparatus, and robot equipped with the same
JP2010071743A (en) Method of detecting object, object detection device and robot system
JP2016170050A (en) Position attitude measurement device, position attitude measurement method and computer program
EP3322959B1 (en) Method for measuring an artefact
JP5858773B2 (en) Three-dimensional measurement method, three-dimensional measurement program, and robot apparatus
JP5481397B2 (en) 3D coordinate measuring device
CN111742349B (en) Information processing apparatus, information processing method, and information processing storage medium
JP7439410B2 (en) Image processing device, image processing method and program
JP2014178967A (en) Three-dimensional object recognition device and three-dimensional object recognition method
JP7450857B2 (en) Measurement parameter optimization method and device, and computer control program

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20180803

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20190625

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20190702

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20190902

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20200220

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20200330

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20200414

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20200511

R150 Certificate of patent or registration of utility model

Ref document number: 6703812

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

S111 Request for change of ownership or part of ownership

Free format text: JAPANESE INTERMEDIATE CODE: R313111

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250