JP2021061014A5 - - Google Patents

Info

Publication number
JP2021061014A5
JP2021061014A5 (application JP2020206993A)
Authority
JP
Japan
Prior art keywords
information
orientations
positions
operating means
data generated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2020206993A
Other languages
Japanese (ja)
Other versions
JP7349423B2 (en)
JP2021061014A (en)
Filing date
Publication date
Priority claimed from JP2019113637A (patent JP7051751B2)
Application filed
Priority to JP2020206993A (patent JP7349423B2)
Priority claimed from JP2020206993A (patent JP7349423B2)
Publication of JP2021061014A
Publication of JP2021061014A5
Application granted
Publication of JP7349423B2
Active legal-status Current
Anticipated expiration legal-status

Claims (19)

An operation system comprising:
operating means for manipulating an object; and
at least one processor that inputs information on the object into a neural network model and estimates information on at least one of a position and an orientation of the operating means, wherein
the operating means manipulates the object based on the estimated information on the at least one of the position and the orientation, and
the neural network model has been trained using data generated using a simulation technique.
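Purely for illustration (not part of the claims), the processor of claim 1 can be sketched as a small network that maps an object observation to a gripper pose. The layer sizes, the flattened-depth-patch encoding, and the random (untrained) weights below are all assumptions; in the claimed system the weights would come from training on simulation-generated data.

```python
import math
import random

def make_mlp(sizes, seed=0):
    """Build random weights for a tiny fully connected network.

    Stands in for the trained neural network model of claim 1; a real
    system would load weights learned from simulated grasp data.
    """
    rng = random.Random(seed)
    layers = []
    for n_in, n_out in zip(sizes, sizes[1:]):
        w = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
        b = [0.0] * n_out
        layers.append((w, b))
    return layers

def forward(layers, x):
    """Run the MLP: tanh on hidden layers, linear output layer."""
    for i, (w, b) in enumerate(layers):
        x = [sum(wi * xi for wi, xi in zip(row, x)) + bi
             for row, bi in zip(w, b)]
        if i < len(layers) - 1:
            x = [math.tanh(v) for v in x]
    return x

# Hypothetical object observation (e.g. a flattened 4x4 depth patch).
observation = [0.5] * 16
# Output: a 6-DoF gripper pose as (x, y, z, roll, pitch, yaw).
model = make_mlp([16, 32, 6])
pose = forward(model, observation)
position, orientation = pose[:3], pose[3:]
```

The six-element output is one possible reading of "at least one of a position and an orientation"; a system estimating only position would shrink the last layer accordingly.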
The operation system according to claim 1, wherein the data generated using the simulation technique includes data generated using at least one of a virtual object and an augmented object.
The operation system according to claim 2, wherein the at least one of the virtual object and the augmented object is generated based on information acquired by a detection device.
The operation system according to claim 2 or 3, wherein the data generated using the simulation technique includes information on at least one of a position and an orientation of operating means that manipulates the at least one of the virtual object and the augmented object.
The operation system according to any one of claims 1 to 4, wherein the simulation technique is at least one of a VR (Virtual Reality) technique and an AR (Augmented Reality) technique.
The operation system according to any one of claims 1 to 5, further comprising a controller that controls the operating means based on the estimated information on the at least one of the position and the orientation.
The operation system according to any one of claims 1 to 6, wherein a detection device that acquires the information on the object is installed on the operating means.
The operation system according to any one of claims 1 to 7, wherein the detection device that acquires the information on the object is a camera capable of acquiring distance information.
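A camera that acquires distance information, as in claim 8, typically yields a depth image that can be back-projected into camera-frame 3D points with a pinhole model. An illustrative sketch only; the intrinsics `fx`, `fy`, `cx`, `cy` are assumed calibration values, not anything specified by the claims.

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (rows of metres) into camera-frame
    3D points using a pinhole camera model."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue  # no valid depth return at this pixel
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Toy 2x2 depth image with the principal point at its centre.
pts = depth_to_points([[1.0, 1.0], [1.0, 1.0]],
                      fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

The resulting point set is one plausible form of the "information on the object" fed to the neural network model.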
The operation system according to any one of claims 1 to 8, wherein the detection device that acquires the information on the object is one or more cameras.
The operation system according to any one of claims 1 to 9, wherein the estimated information on the orientation includes information capable of expressing rotation angles around a plurality of axes.
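Orientation information "capable of expressing rotation angles around a plurality of axes", as in claim 10, can be realised, for example, as roll/pitch/yaw angles composed into a rotation matrix. A minimal sketch; the ZYX composition order is an assumption, and other representations (quaternions, axis-angle) would satisfy the claim equally well.

```python
import math

def euler_to_matrix(roll, pitch, yaw):
    """Compose rotations about the x, y and z axes into one 3x3
    rotation matrix, R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

# Rotation angles (radians) around three axes, as claim 10 requires.
R = euler_to_matrix(0.1, 0.2, 0.3)
```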
The operation system according to any one of claims 1 to 10, wherein the output of each layer of the neural network model includes information other than the position, the orientation, and the area of the object.
The operation system according to any one of claims 1 to 11, wherein the operating means grips the object based on the estimated information on the at least one of the position and the orientation.
A model generation method for generating, by at least one processor, a neural network model that outputs, when information on an object is input, information on at least one of a position and an orientation of operating means, wherein the neural network model is trained using data generated using a simulation technique.
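The model generation method of claim 13 pairs simulation-generated observations with target gripper outputs and fits model parameters to them. The sketch below substitutes a toy 1-D linear model and a known synthetic relation for the neural network and the VR/AR simulator, purely to illustrate the train-on-generated-data loop; the learning rate, epoch count, and ground-truth coefficients are arbitrary assumptions.

```python
import random

def simulate_grasp_data(n, seed=0):
    """Generate synthetic (object feature -> gripper offset) pairs.
    A stand-in for VR/AR-generated training data: here the 'simulator'
    is a known linear relation, y = 0.7*x + 0.1, plus small noise."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.uniform(-1, 1)
        y = 0.7 * x + 0.1 + rng.gauss(0, 0.01)
        data.append((x, y))
    return data

def train_linear(data, lr=0.1, epochs=200):
    """Fit y ~ w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        w -= lr * gw
        b -= lr * gb
    return w, b

w, b = train_linear(simulate_grasp_data(100))
```

Because the training pairs are generated rather than collected, the same loop works whether the generator is this toy relation or a full VR/AR scene with virtual or augmented objects.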
The model generation method according to claim 13, wherein the data generated using the simulation technique includes data generated using at least one of a virtual object and an augmented object.
The model generation method according to claim 14, wherein the data generated using the simulation technique includes information on at least one of a position and an orientation of operating means that manipulates the at least one of the virtual object and the augmented object.
The model generation method according to any one of claims 13 to 15, wherein the simulation technique is at least one of a VR (Virtual Reality) technique and an AR (Augmented Reality) technique.
The model generation method according to any one of claims 13 to 16, wherein the information on the orientation output by the neural network model includes information capable of expressing rotation angles around a plurality of axes.
The model generation method according to any one of claims 13 to 17, wherein the output of each layer of the neural network model includes information other than the position, the orientation, and the area of the object.
An operation method comprising, by at least one processor:
inputting information on an object into a neural network model trained based on data generated using a simulation technique;
estimating information on at least one of a position and an orientation of operating means; and
manipulating the object with the operating means based on the estimated information on the at least one of the position and the orientation.
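The three steps of the operation method — input object information, estimate a pose, manipulate — can be wired together as a simple sense, estimate, act pipeline. All three callables below are hypothetical stand-ins for the real sensor, trained model, and manipulator:

```python
def operate(sense, estimate, act):
    """The claimed operation method as a pipeline: sense the object,
    estimate an operating-means pose with the model, then manipulate."""
    observation = sense()
    pose = estimate(observation)
    return act(pose)

log = []
result = operate(
    sense=lambda: [0.2, 0.4],                        # e.g. depth features
    estimate=lambda obs: [sum(obs), 0.0, 0.0],       # trained-model placeholder
    act=lambda pose: log.append(pose) or "grasped",  # robot executes the grasp
)
```

Separating the steps this way mirrors the claim structure: the same `operate` wiring accepts any sensor, any trained model, and any manipulator.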
JP2020206993A 2019-06-19 2020-12-14 Learning device, learning method, learning model, detection device and grasping system Active JP7349423B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020206993A JP7349423B2 (en) 2019-06-19 2020-12-14 Learning device, learning method, learning model, detection device and grasping system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019113637A JP7051751B2 (en) 2019-06-19 2019-06-19 Learning device, learning method, learning model, detection device and gripping system
JP2020206993A JP7349423B2 (en) 2019-06-19 2020-12-14 Learning device, learning method, learning model, detection device and grasping system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
JP2019113637A Division JP7051751B2 (en) 2019-06-19 2019-06-19 Learning device, learning method, learning model, detection device and gripping system

Publications (3)

Publication Number Publication Date
JP2021061014A JP2021061014A (en) 2021-04-15
JP2021061014A5 (en) 2021-07-29
JP7349423B2 JP7349423B2 (en) 2023-09-22

Family

ID=88021853

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2020206993A Active JP7349423B2 (en) 2019-06-19 2020-12-14 Learning device, learning method, learning model, detection device and grasping system

Country Status (1)

Country Link
JP (1) JP7349423B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240091951A1 (en) * 2022-09-15 2024-03-21 Samsung Electronics Co., Ltd. Synergies between pick and place: task-aware grasp estimation

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6445964B1 (en) * 1997-08-04 2002-09-03 Harris Corporation Virtual reality simulation-based training of telekinegenesis system for training sequential kinematic behavior of automated kinematic machine
CN101396829A (en) 2007-09-29 2009-04-01 株式会社Ihi Robot control method and robot
JP6522488B2 (en) 2015-07-31 2019-05-29 ファナック株式会社 Machine learning apparatus, robot system and machine learning method for learning work taking-out operation
JP6219897B2 (en) * 2015-09-28 2017-10-25 ファナック株式会社 Machine tools that generate optimal acceleration / deceleration

Similar Documents

Publication Publication Date Title
Qian et al. Developing a gesture based remote human-robot interaction system using kinect
US11195041B2 (en) Generating a model for an object encountered by a robot
US20210205986A1 (en) Teleoperating Of Robots With Tasks By Mapping To Human Operator Pose
WO2021103648A1 (en) Hand key point detection method, gesture recognition method, and related devices
WO2018103635A1 (en) Processing method and device for climb operation in vr scenario, and readable storage medium
JP6826069B2 (en) Robot motion teaching device, robot system and robot control device
WO2020110505A1 (en) Image generation device, robot training system, image generation method, and image generation program
CN113034652A (en) Virtual image driving method, device, equipment and storage medium
CN115070781B (en) Object grabbing method and two-mechanical-arm cooperation system
JP2021061014A5 (en)
Inoue et al. Transfer learning from synthetic to real images using variational autoencoders for robotic applications
JP3742879B2 (en) Robot arm / hand operation control method, robot arm / hand operation control system
Son et al. Synthetic deep neural network design for lidar-inertial odometry based on CNN and LSTM
Khalil et al. Human motion retargeting to Pepper humanoid robot from uncalibrated videos using human pose estimation
Zhao et al. Neural network-based image moments for robotic visual servoing
CN110008873B (en) Facial expression capturing method, system and equipment
Lovi et al. Predictive display for mobile manipulators in unknown environments using online vision-based monocular modeling and localization
Doisy et al. Spatially unconstrained, gesture-based human-robot interaction
Zhu et al. A robotic semantic grasping method for pick-and-place tasks
Cazzato et al. Real-time human head imitation for humanoid robots
Shruthi et al. Path planning for autonomous car
Al-Junaid ANN based robotic arm visual servoing nonlinear system
Lai et al. Homography-based visual servoing for eye-in-hand robots with unknown feature positions
Deherkar et al. Gesture controlled virtual reality based conferencing
WO2023082404A1 (en) Control method for robot, and robot, storage medium, and grabbing system