JPS63228273A - Three dimensional structure recognition device - Google Patents

Three dimensional structure recognition device

Info

Publication number
JPS63228273A
Authority
JP
Japan
Prior art keywords
dimensional
hierarchy
information
parts
section
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP62061965A
Other languages
Japanese (ja)
Inventor
Hiroshi Mizoguchi
溝口 博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Toshiba Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Corp filed Critical Toshiba Corp
Priority to JP62061965A priority Critical patent/JPS63228273A/en
Publication of JPS63228273A publication Critical patent/JPS63228273A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PURPOSE: To recognize the three-dimensional structure of an object by sorting three-dimensional measurements into layers along a specific direction, obtaining the cross-section outline of every layer so that the constituent parts appearing in each layer can be identified, and integrating the per-layer results across layers with a structure recognizing means.

CONSTITUTION: A stereo camera 2 images the object 1 to be recognized from a plurality of directions by moving its own position relative to the object in various ways, and a three-dimensional measuring unit 3 obtains the three-dimensional coordinates of the vertices and edge lines of the object 1 from the multi-direction stereo images. A per-layer classifier 4 sorts these three-dimensional coordinates into layers along the height direction and produces cross-section outline information for every layer. A section interpreter 5 identifies the parts contained in each cross section from that outline information and extracts the part information for every layer. A structure recognizer 6 integrates the per-layer part information across layers and outputs, as the final result, a three-dimensional structure description consisting of the three-dimensional position and attitude information of each part constituting the object.

Description

[Detailed Description of the Invention]
[Object of the Invention]
(Field of Industrial Application)
This invention relates to a three-dimensional structure recognition device that recognizes the three-dimensional structure of an object, and in particular to a three-dimensional structure recognition device capable of recognizing a recognition target object that is itself unknown but whose constituent parts are known.

(Prior Art)
In three-dimensional image processing it has conventionally been possible, for an object of known shape and structure, to prepare a three-dimensional model of that object, find the object in a scene, and determine its position and attitude. For an object whose structure is not known, however, it has been impossible to recognize the three-dimensional structure unless a three-dimensional model of the whole object is prepared, even when the object is assembled from known parts. Even when CT (Computed Tomography) is used, only tomographic images are obtained, and a human operator has had to read the meaningful information out of each tomographic image in order to recognize the three-dimensional structure of the object.

At present, however, even creating the three-dimensional model of a single part of comparatively simple shape and structure requires a great deal of labor, and creating three-dimensional models for the many recognition target objects built from such parts is an extremely difficult task. Moreover, some objects to be recognized are built from combinations of parts that cannot be anticipated.

Furthermore, since images are two-dimensional, recognition requires generating two-dimensional projections from the three-dimensional model, which demands an enormous amount of computation. When the attitude of the object in three-dimensional space is unknown and projections in arbitrary directions must therefore be considered, two-dimensional projections have to be consulted for many directions, requiring still more computation.

(Problems to be Solved by the Invention)
As described above, conventional three-dimensional object recognition techniques can determine the position and attitude of an object whose structure and shape are known in advance, but for an unknown object they cannot determine the overall structure even when the individual component parts are known, so a three-dimensional model had to be created and registered in advance for every recognition target, at great cost in labor.

The present invention has been made to solve these problems, and its object is to provide a three-dimensional structure recognition device that can recognize the structure of a recognition target object composed of a combination of known component parts even when the overall structure is unknown, that is, even when no model of the object has been registered in advance.

[Structure of the Invention]
(Means for Solving the Problems)
The present invention is characterized by comprising: three-dimensional measuring means for obtaining the vertices and edge lines of a recognition target object from two-dimensional images captured from a plurality of directions and determining their three-dimensional coordinates; per-layer classification means for classifying the three-dimensional coordinate values obtained by this means into layers along a specific direction; section interpretation means for obtaining part arrangement information for each layer on the basis of the cross-section information of each layer, specified from the three-dimensional coordinates classified into that layer, and information on known parts; and structure recognition means for integrating the per-layer part arrangement information obtained by this means across the layers to generate a three-dimensional structure description of the recognition target object.

(Function)
According to the present invention, the three-dimensional measurements obtained by the three-dimensional measuring means are classified hierarchically along a specific direction (for example, the height direction) by the per-layer classification means, yielding the outline of the cross section in each layer. By collating this cross-section outline, together with the points on it that mark boundaries between parts, against cross-section data of the known parts in the section interpreter, the constituent parts appearing in each layer can be identified. The constituent parts found for each layer are then integrated across the layers by the structure recognition means, so that the three-dimensional structure of the object can be recognized even though no three-dimensional model of the whole object has been prepared.

(Embodiment)
An embodiment of the three-dimensional structure recognition device according to the present invention is described below with reference to the drawings.

FIG. 1 shows the schematic configuration of this device.

Note that the recognition target object 1 is itself an unknown object, but it is assumed to be assembled from known parts, that is, parts for which three-dimensional models are available inside the device.

The stereo camera 2 images the recognition target object 1 from a plurality of directions, either by moving its own position relative to the object 1 in various ways or by rotating the object 1. The object is observed from various directions in order to measure portions that cannot be seen when it is observed from a single direction.

The three-dimensional measuring unit 3 obtains the three-dimensional coordinates of the vertices and edge lines of the recognition target object 1 from the stereo images captured from the plurality of directions by the stereo camera 2.

The per-layer classifier 4 classifies the three-dimensional coordinates of the vertices and edge lines obtained by the three-dimensional measuring unit 3 into layers along the height direction and produces cross-section outline information for each layer.

The section interpreter 5 identifies the parts contained in each cross section from the per-layer cross-section outline information produced by the per-layer classifier 4 and extracts the information on the parts contained in each layer.

The structure recognizer 6 integrates the per-layer part information extracted by the section interpreter 5 across the layers and outputs, as the final result, a three-dimensional structure description consisting of the three-dimensional position and attitude information of each part constituting the target object 1.

FIG. 2 shows the configuration of the three-dimensional measuring unit 3 in more detail. The left and right image data input from the stereo camera 2 are stored in image memories 11 and 12, respectively. The image data stored in the image memories 11 and 12 are fed to line segment detectors 13 and 14, where the vertices and edge lines are extracted. The line segment detectors 13 and 14 can be built from well-known spatial differentiation filters, binarization circuits, and the like. Because the extracted line segment images are offset from each other by the left-right parallax, the stereo correspondence unit associates the left and right images. Concretely, for example, the vertices and edge lines found by one camera are numbered; for each vertex it is checked whether a vertex observed by the other camera lies on the line joining the position of the first camera and the vertex it observed; a vertex found there is given the same number; and the three-dimensional coordinates of that vertex are then computed from the positions of the two cameras and the positions of the identically numbered vertices on the two images.
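As an illustration of the final step just described, computing a vertex's three-dimensional coordinates from the two camera positions and the matched image points, the sketch below uses standard linear triangulation with known 3x4 projection matrices. This is a generic reconstruction recipe assumed for clarity; the patent itself does not specify the computation, and the names triangulate, P_left, and P_right are hypothetical.

```python
# Sketch of recovering a vertex's 3-D coordinates from a matched left/right image pair
# (standard linear triangulation; the projection matrices are assumed to be known from
# camera calibration and are not described in the patent itself).
import numpy as np

def triangulate(p_left, p_right, xy_left, xy_right):
    """Return the 3-D point whose projections best match the two observations."""
    x1, y1 = xy_left
    x2, y2 = xy_right
    # Each matched observation contributes two linear constraints on the homogeneous point X.
    a = np.stack([
        x1 * p_left[2] - p_left[0],
        y1 * p_left[2] - p_left[1],
        x2 * p_right[2] - p_right[0],
        y2 * p_right[2] - p_right[1],
    ])
    _, _, vt = np.linalg.svd(a)
    X = vt[-1]
    return X[:3] / X[3]                      # de-homogenise to (x, y, z)

# Example with two hypothetical calibrated cameras separated along the x axis.
P_left = np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate(P_left, P_right, (0.25, 0.10), (0.05, 0.10)))   # ~ (1.25, 0.5, 5.0)
```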

Through the above processing, the three-dimensional coordinate data obtained by the three-dimensional measuring unit 3 become the data shown at (a) in FIG. 5: each vertex P1, P2, ... is represented by its XYZ coordinate values, and each edge line L1, L2, ... is represented by the numbers of its start and end vertices.
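A minimal container for the data of Fig. 5(a) might look like the following (an illustration only; the variable names are not from the patent): vertices keyed by their number and mapped to XYZ triples, and edge lines stored as pairs of start and end vertex numbers.

```python
# Hypothetical container for the output of the 3-D measuring unit 3 (cf. Fig. 5(a)):
# each vertex P1, P2, ... is an (x, y, z) triple and each edge line L1, L2, ...
# refers to the numbers of its start and end vertices.
vertices = {
    1: (0.0, 0.0, 0.0),   # P1
    2: (4.0, 0.0, 0.0),   # P2
    3: (4.0, 3.0, 0.0),   # P3
    4: (0.0, 3.0, 0.0),   # P4
}
edges = [
    (1, 2),               # L1 joins P1 and P2
    (2, 3),               # L2
    (3, 4),               # L3
    (4, 1),               # L4
]
```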

The per-layer classifier 4 is configured as shown in FIG. 3. The three-dimensional coordinates of the input vertices and edge lines are distributed by a height-component (Z-axis component) classifier 21 into per-layer memories 221 to 22n, one provided for each layer. Classification is performed along the height direction here because the recognition target object 1 is considered to be built up from a plurality of parts along the height direction; it is evident that classification may instead be performed along another direction, for example by using three-dimensional coordinate transformation means (not shown).

When the three-dimensional coordinate data are classified by the per-layer classifier 4, vertices sharing the same coordinate in the height direction (Z-axis direction) and edge lines at the same height are grouped together for each height (layers H1, H2, ..., Hn), as shown at (b) in FIG. 5, and become the cross-section outline information.

As illustrated, this cross-section outline information represents the outline of the cross section observed in each layer and the vertices on that outline that mark the boundaries between parts.
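A minimal sketch of this per-layer grouping follows, assuming vertices are binned by quantizing their z coordinate and an edge line belongs to a layer when both of its end points fall in it; the patent states only that points and edge lines of equal height are grouped together, so the bin width layer_height and the function name are assumptions.

```python
# Sketch of the per-layer classification of Fig. 3: vertices are binned by z,
# and an edge line is assigned to a layer when both end points fall into it
# (assumed behaviour, chosen for illustration).
from collections import defaultdict

def classify_by_height(vertices, edges, layer_height=1.0):
    """Return {layer_index: {"vertices": [...], "edges": [...]}}."""
    layer_of = {vid: int(round(z / layer_height)) for vid, (x, y, z) in vertices.items()}
    layers = defaultdict(lambda: {"vertices": [], "edges": []})
    for vid, layer in layer_of.items():
        layers[layer]["vertices"].append(vid)
    for start, end in edges:
        if layer_of[start] == layer_of[end]:          # horizontal edge: part of one cross-section
            layers[layer_of[start]]["edges"].append((start, end))
    return dict(layers)

# Tiny example: a bottom face at z=0 and a top face at z=2, joined by vertical edges.
example_vertices = {1: (0.0, 0.0, 0.0), 2: (4.0, 0.0, 0.0), 3: (4.0, 0.0, 2.0), 4: (0.0, 0.0, 2.0)}
example_edges = [(1, 2), (3, 4), (1, 4), (2, 3)]
print(classify_by_height(example_vertices, example_edges))
```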

FIG. 4 shows the details of the section interpreter 5.

The vertex/edge-line corresponder 31 associates vertices with edge lines on the basis of the per-layer vertex and edge-line information it receives, generates connection information describing how the edge lines join one another, and stores it in a connection information memory 32. The connection information stored in the connection information memory 32 is passed to an interpreter 33. The interpreter 33 interprets the edge lines on the basis of per-part edge-line interpretation rules 34 prepared in advance and identifies which parts are present.

Through this part identification, the information on the parts contained in each layer is extracted, as shown at (c) in FIG. 5.
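The patent does not give the concrete form of the edge-line interpretation rules 34, so the toy sketch below stands in with a deliberately simple rule, the number of corner vertices expected in a part's horizontal cross-section, just to show where such rules plug in; PART_RULES and interpret_section are hypothetical names.

```python
# Toy sketch of the section interpreter 5: connection information (which edge lines
# meet at which vertices) is checked against simple per-part rules.  Here a "rule" is
# just the number of corners expected in a part's horizontal cross-section.
PART_RULES = {
    "square column": 4,      # cross-section outline closes with 4 corner vertices
    "hexagonal column": 6,
}

def interpret_section(layer_edges):
    """Return the part types whose rule matches the closed outline of this layer."""
    corners = set()
    for start, end in layer_edges:
        corners.update((start, end))
    return [name for name, expected in PART_RULES.items() if expected == len(corners)]

print(interpret_section([(1, 2), (2, 3), (3, 4), (4, 1)]))   # -> ['square column']
```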

The structure recognizer 6 then integrates the per-layer parts across the layers, yielding a list of constituent parts as shown at (d) in FIG. 5, that is, the type, position, and attitude information of each constituent part.

In other words, with this device, if the constituent parts are known, the structure of an unknown three-dimensional object built from those parts can be recognized.
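A minimal sketch of the cross-layer integration follows, under the assumed (and simplified) criterion that consecutive layers interpreted as the same part type belong to one part instance spanning those heights; the patent states only that per-layer part information is integrated between layers, and the names used here are hypothetical.

```python
# Sketch of the layer-to-layer integration performed by the structure recognizer 6:
# consecutive layers interpreted as the same part type are merged into one part
# instance covering that height range (assumes one part per layer, for illustration).
from itertools import groupby

def integrate_layers(parts_per_layer, layer_height=1.0):
    """parts_per_layer: {layer_index: part_type}  ->  list of (part_type, z_min, z_max)."""
    described = []
    ordered = sorted(parts_per_layer.items())
    for part_type, run in groupby(ordered, key=lambda item: item[1]):
        run = list(run)
        z_min = run[0][0] * layer_height
        z_max = (run[-1][0] + 1) * layer_height
        described.append((part_type, z_min, z_max))
    return described

# A cylinder seen in layers 0-2 sitting under a square column seen in layers 3-4:
print(integrate_layers({0: "cylinder", 1: "cylinder", 2: "cylinder",
                        3: "square column", 4: "square column"}))
```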

[Effects of the Invention]
As described above, according to the present invention it is possible to recognize the three-dimensional structure not only of objects whose structure is known, but also of recognition target objects whose overall outer shape is unknown yet which are assembled from known objects as parts. This differs fundamentally from conventional schemes that recognize on the basis of a three-dimensional model: objects whose models have not been registered in advance can be recognized, and the labor of creating and entering three-dimensional models is greatly reduced.

Furthermore, unlike methods that recognize by computing a two-dimensional projection for every viewing direction from a three-dimensional model, this approach does not require an enormous amount of computation, so high-speed recognition processing is possible.

[Brief Description of the Drawings]

FIG. 1 is a block diagram showing the configuration of a three-dimensional structure recognition device according to an embodiment of the present invention; FIG. 2 is a detailed block diagram of the three-dimensional measuring unit in the device; FIG. 3 is a detailed block diagram of the per-layer classifier in the device; FIG. 4 is a detailed block diagram of the section interpreter in the device; and FIG. 5 is a diagram explaining the items of information generated by the device.

1 ... recognition target object, 2 ... stereo camera, 3 ... three-dimensional measuring unit, 4 ... per-layer classifier, 5 ... section interpreter, 6 ... structure recognizer.

Claims (2)

[Claims]

(1) A three-dimensional structure recognition device comprising: three-dimensional measuring means for obtaining the vertices and edge lines of a recognition target object from two-dimensional images captured from a plurality of directions and determining their three-dimensional coordinates; per-layer classification means for classifying the three-dimensional coordinate values obtained by said measuring means into layers along a specific direction; section interpretation means for obtaining part arrangement information for each layer on the basis of cross-section information of each layer, specified from the three-dimensional coordinates classified into that layer by said classification means, and information on known parts; and structure recognition means for integrating the per-layer part arrangement information obtained by said interpretation means across the layers to generate a three-dimensional structure description of said recognition target object.
(2) The three-dimensional structure recognition device according to claim 1, wherein said per-layer classification means classifies said recognition target object along the height direction.
JP62061965A 1987-03-17 1987-03-17 Three dimensional structure recognition device Pending JPS63228273A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP62061965A JPS63228273A (en) 1987-03-17 1987-03-17 Three dimensional structure recognition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP62061965A JPS63228273A (en) 1987-03-17 1987-03-17 Three dimensional structure recognition device

Publications (1)

Publication Number Publication Date
JPS63228273A true JPS63228273A (en) 1988-09-22

Family

ID=13186400

Family Applications (1)

Application Number Title Priority Date Filing Date
JP62061965A Pending JPS63228273A (en) 1987-03-17 1987-03-17 Three dimensional structure recognition device

Country Status (1)

Country Link
JP (1) JPS63228273A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004326264A (en) * 2003-04-22 2004-11-18 Matsushita Electric Works Ltd Obstacle detecting device and autonomous mobile robot using the same and obstacle detecting method and obstacle detecting program

Similar Documents

Publication Publication Date Title
Zhang et al. Image engineering
CN103988226B (en) Method for estimating camera motion and for determining real border threedimensional model
CN108898676B (en) Method and system for detecting collision and shielding between virtual and real objects
US20160342861A1 (en) Method for Training Classifiers to Detect Objects Represented in Images of Target Environments
CN109345510A (en) Object detecting method, device, equipment, storage medium and vehicle
EP1959392A1 (en) Method, medium, and system implementing 3D model generation based on 2D photographic images
Herman et al. Incremental acquisition of a three-dimensional scene model from images
Zollmann et al. Interactive 4D overview and detail visualization in augmented reality
Shalaby et al. Algorithms and applications of structure from motion (SFM): A survey
Negahdaripour et al. Recovering shape and motion from undersea images
Koch Automatic reconstruction of buildings from stereoscopic image sequences
Kokovkina et al. The algorithm of EKF-SLAM using laser scanning system and fisheye camera
Li et al. 3D mapping based VSLAM for UAVs
CN117197419A (en) Lei Dadian cloud labeling method and device, electronic equipment and storage medium
JPS63228273A (en) Three dimensional structure recognition device
Guo et al. Improved marching tetrahedra algorithm based on hierarchical signed distance field and multi-scale depth map fusion for 3D reconstruction
Khan et al. Skeleton based human action recognition using a structured-tree neural network
Hesselink Evaluation of flow topology from numerical data
Hao et al. Development of 3D feature detection and on board mapping algorithm from video camera for navigation
Balzer et al. Volumetric reconstruction applied to perceptual studies of size and weight
Zancajo-Blazquez et al. Segmentation of indoor mapping point clouds applied to crime scenes reconstruction
JP2021140429A (en) Three-dimentional model generation method
Gomes et al. Volumetric Occupancy Detection: A Comparative Analysis of Mapping Algorithms
Muharom et al. Real-Time 3D Modeling and Visualization Based on RGB-D Camera using RTAB-Map through Loop Closure
Luchetti et al. Omnidirectional camera pose estimation and projective texture mapping for photorealistic 3D virtual reality experiences