JP2021061014A5 - - Google Patents
- Publication number
- JP2021061014A5 (application JP2020206993A)
- Authority
- JP
- Japan
- Prior art keywords
- information
- orientations
- positions
- operating means
- data generated
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Claims (19)
An operation system comprising:
an operating means for manipulating an object; and
at least one processor that inputs information about the object into a neural network model and estimates information regarding at least one of a position and an orientation of the operating means,
wherein the operating means manipulates the object based on the estimated information regarding at least one of the position and the orientation, and
the neural network model has been trained using data generated by a simulation technique.
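As a minimal sketch of the system this claim describes, the following feeds object information (here an abstract feature vector, e.g. derived from a depth image) through a small neural network that outputs a position and an orientation for the operating means. The two-layer MLP, the layer sizes, and the feature encoding are illustrative assumptions, not details from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

class PoseEstimator:
    """Toy neural network mapping object features to a 6-DoF pose."""

    def __init__(self, in_dim=64, hidden=32):
        # randomly initialized weights stand in for trained parameters
        self.w1 = rng.standard_normal((in_dim, hidden)) * 0.1
        self.b1 = np.zeros(hidden)
        self.w2 = rng.standard_normal((hidden, 6)) * 0.1
        self.b2 = np.zeros(6)

    def forward(self, obj_info):
        h = np.tanh(obj_info @ self.w1 + self.b1)  # hidden layer
        out = h @ self.w2 + self.b2                # 6 outputs: x, y, z + 3 angles
        return out[:3], out[3:]                    # position, orientation

model = PoseEstimator()
position, orientation = model.forward(rng.standard_normal(64))
print(position.shape, orientation.shape)  # (3,) (3,)
```

The operating means would then act on these two outputs, which is the step the subsequent claims refine.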
The data generated using the simulation technique includes at least data generated using one of a virtual object and an augmented object.
The operation system according to claim 1.
At least one of the virtual object and the augmented object is generated based on information acquired by a detection device.
The operation system according to claim 2.
The data generated using the simulation technique includes information regarding at least one of a position and an orientation of the operating means for manipulating at least one of the virtual object and the augmented object.
The operation system according to claim 2 or 3.
The simulation technique is at least one of a VR (virtual reality) technique and an AR (augmented reality) technique.
The operation system according to any one of claims 1 to 4.
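A hedged sketch of what "data generated by a simulation technique" can look like in practice: each training sample pairs a synthetic observation with the ground-truth pose of a virtual object it was derived from. The feature construction below is a stand-in for an actual VR/AR renderer and is entirely an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_sample():
    # ground-truth pose of a virtual object: position (m) + Euler angles (rad)
    pose = np.concatenate([
        rng.uniform(-0.5, 0.5, size=3),
        rng.uniform(-np.pi, np.pi, size=3),
    ])
    # synthetic "sensor" observation: pose-derived features plus noise,
    # standing in for a rendered depth image of the virtual object
    observation = np.tile(pose, 8) + rng.normal(0.0, 0.01, size=48)
    return observation, pose

dataset = [simulate_sample() for _ in range(100)]
observations = np.stack([o for o, _ in dataset])
labels = np.stack([p for _, p in dataset])
print(observations.shape, labels.shape)  # (100, 48) (100, 6)
```

Because the pose is known exactly in simulation, no manual labeling is needed, which is the usual motivation for training on VR/AR-generated data.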
The operation system further comprises a controller that controls the operating means based on the estimated information regarding at least one of the position and the orientation.
The operation system according to any one of claims 1 to 5.
A detection device that acquires information about the object is installed on the operating means.
The operation system according to any one of claims 1 to 6.
The detection device that acquires the information about the object is a camera capable of acquiring distance information.
The operation system according to any one of claims 1 to 7.
The detection device that acquires the information about the object is one or more cameras.
The operation system according to any one of claims 1 to 8.
The estimated information regarding the orientation includes information capable of expressing rotation angles about a plurality of axes.
The operation system according to any one of claims 1 to 9.
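One common way to express "rotation angles about a plurality of axes" is Z-Y-X Euler angles converted to a rotation matrix; the convention below is an illustrative choice, not one specified by the patent.

```python
import numpy as np

def euler_zyx_to_matrix(yaw, pitch, roll):
    """Compose rotations about the Z, Y, and X axes (in that order)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx  # rotation about three axes combined

R = euler_zyx_to_matrix(0.3, -0.2, 0.1)
# a proper rotation matrix is orthonormal with determinant +1
print(np.allclose(R @ R.T, np.eye(3)), round(np.linalg.det(R), 6))  # True 1.0
```

Three angles suffice to orient a gripper arbitrarily in space, which is why multi-axis orientation output matters for grasping.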
The output of each layer of the neural network model contains information other than the position, the orientation, and the area of the object.
The operation system according to any one of claims 1 to 10.
The operating means grips the object based on the estimated information regarding at least one of the position and the orientation.
The operation system according to any one of claims 1 to 11.
A model generation method, executed by at least one processor, for generating a neural network model that, when information about an object is input, outputs information regarding at least one of a position and an orientation of an operating means,
wherein the neural network model is trained using data generated by a simulation technique.
The data generated using the simulation technique includes at least data generated using one of a virtual object and an augmented object.
The model generation method according to claim 13.
The data generated using the simulation technique includes information regarding at least one of a position and an orientation of the operating means for manipulating at least one of the virtual object and the augmented object.
The model generation method according to claim 14.
The simulation technique is at least one of a VR (virtual reality) technique and an AR (augmented reality) technique.
The model generation method according to any one of claims 13 to 15.
The information regarding the orientation output by the neural network model includes information capable of expressing rotation angles about a plurality of axes.
The model generation method according to any one of claims 13 to 16.
The output of each layer of the neural network model contains information other than the position, the orientation, and the area of the object.
The model generation method according to any one of claims 13 to 17.
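A minimal sketch of the model generation method above, under a strong simplifying assumption: the "neural network" is reduced to a single linear layer so that training becomes a closed-form least-squares fit on simulation-generated (observation, pose) pairs. A real implementation would use an actual deep-learning framework; the shapes and data here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

W_true = rng.standard_normal((48, 6)) * 0.1    # unknown mapping to recover
observations = rng.standard_normal((200, 48))  # simulated sensor features
poses = observations @ W_true                  # ground-truth pose labels

# "train" the model: least-squares fit of weights to the simulated data
W_fit, *_ = np.linalg.lstsq(observations, poses, rcond=None)

# the fitted model reproduces the pose labels on the training data
print(np.allclose(observations @ W_fit, poses, atol=1e-6))  # True
```

The same structure (simulated inputs, exact pose labels, supervised fit) carries over when the linear layer is replaced by a deep network trained by gradient descent.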
An operation method, executed by at least one processor, comprising:
inputting information about an object into a neural network model trained on data generated by a simulation technique;
estimating information regarding at least one of a position and an orientation of an operating means; and
manipulating the object with the operating means based on the estimated information regarding at least one of the position and the orientation.
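Putting the three steps of this operation method together as a hedged end-to-end sketch: object information goes into the trained model, the processor estimates a pose, and the operating means (a gripper here) moves to that pose and grasps. The linear stand-in for the trained model and the `Gripper` class are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
W_trained = rng.standard_normal((16, 6)) * 0.1  # stands in for learned weights

class Gripper:
    """Stand-in for the operating means."""

    def __init__(self):
        self.pose = np.zeros(6)

    def move_to(self, position, orientation):
        self.pose = np.concatenate([position, orientation])

    def grasp(self):
        return True  # placeholder for the physical grasp action

obj_info = rng.standard_normal(16)  # e.g. features from a depth camera
estimate = obj_info @ W_trained     # inference: 6-DoF pose estimate
gripper = Gripper()
gripper.move_to(estimate[:3], estimate[3:])
print(gripper.grasp(), gripper.pose.shape)  # True (6,)
```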
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020206993A JP7349423B2 (en) | 2019-06-19 | 2020-12-14 | Learning device, learning method, learning model, detection device and grasping system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019113637A JP7051751B2 (en) | 2019-06-19 | 2019-06-19 | Learning device, learning method, learning model, detection device and gripping system |
JP2020206993A JP7349423B2 (en) | 2019-06-19 | 2020-12-14 | Learning device, learning method, learning model, detection device and grasping system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2019113637A Division JP7051751B2 (en) | 2019-06-19 | 2019-06-19 | Learning device, learning method, learning model, detection device and gripping system |
Publications (3)
Publication Number | Publication Date |
---|---|
JP2021061014A JP2021061014A (en) | 2021-04-15 |
JP2021061014A5 true JP2021061014A5 (en) | 2021-07-29 |
JP7349423B2 JP7349423B2 (en) | 2023-09-22 |
Family
ID=88021853
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2020206993A Active JP7349423B2 (en) | 2019-06-19 | 2020-12-14 | Learning device, learning method, learning model, detection device and grasping system |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP7349423B2 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20240091951A1 (en) * | 2022-09-15 | 2024-03-21 | Samsung Electronics Co., Ltd. | Synergies between pick and place: task-aware grasp estimation |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6445964B1 (en) * | 1997-08-04 | 2002-09-03 | Harris Corporation | Virtual reality simulation-based training of telekinegenesis system for training sequential kinematic behavior of automated kinematic machine |
CN101396829A (en) | 2007-09-29 | 2009-04-01 | 株式会社Ihi | Robot control method and robot |
JP6522488B2 (en) | 2015-07-31 | 2019-05-29 | ファナック株式会社 | Machine learning apparatus, robot system and machine learning method for learning work taking-out operation |
JP6219897B2 (en) * | 2015-09-28 | 2017-10-25 | ファナック株式会社 | Machine tools that generate optimal acceleration / deceleration |
2020
- 2020-12-14: JP application JP2020206993A, granted as patent JP7349423B2 (active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Qian et al. | Developing a gesture based remote human-robot interaction system using kinect | |
US11195041B2 (en) | Generating a model for an object encountered by a robot | |
US20210205986A1 (en) | Teleoperating Of Robots With Tasks By Mapping To Human Operator Pose | |
WO2021103648A1 (en) | Hand key point detection method, gesture recognition method, and related devices | |
WO2018103635A1 (en) | Processing method and device for climb operation in vr scenario, and readable storage medium | |
JP6826069B2 (en) | Robot motion teaching device, robot system and robot control device | |
WO2020110505A1 (en) | Image generation device, robot training system, image generation method, and image generation program | |
CN113034652A (en) | Virtual image driving method, device, equipment and storage medium | |
CN115070781B (en) | Object grabbing method and two-mechanical-arm cooperation system | |
JP2021061014A5 (en) | ||
Inoue et al. | Transfer learning from synthetic to real images using variational autoencoders for robotic applications | |
JP3742879B2 (en) | Robot arm / hand operation control method, robot arm / hand operation control system | |
Son et al. | Synthetic deep neural network design for lidar-inertial odometry based on CNN and LSTM | |
Khalil et al. | Human motion retargeting to Pepper humanoid robot from uncalibrated videos using human pose estimation | |
Zhao et al. | Neural network-based image moments for robotic visual servoing | |
CN110008873B (en) | Facial expression capturing method, system and equipment | |
Lovi et al. | Predictive display for mobile manipulators in unknown environments using online vision-based monocular modeling and localization | |
Doisy et al. | Spatially unconstrained, gesture-based human-robot interaction | |
Zhu et al. | A robotic semantic grasping method for pick-and-place tasks | |
Cazzato et al. | Real-time human head imitation for humanoid robots | |
Shruthi et al. | Path planning for autonomous car | |
Al-Junaid | ANN based robotic arm visual servoing nonlinear system | |
Lai et al. | Homography-based visual servoing for eye-in-hand robots with unknown feature positions | |
Deherkar et al. | Gesture controlled virtual reality based conferencing | |
WO2023082404A1 (en) | Control method for robot, and robot, storage medium, and grabbing system |