CN111230858B - Visual robot motion control method based on reinforcement learning - Google Patents

Visual robot motion control method based on reinforcement learning

Info

Publication number
CN111230858B
Authority
CN
China
Prior art keywords
information
robot body
robot
sub
control method
Prior art date
Legal status
Active
Application number
CN201910169395.0A
Other languages
Chinese (zh)
Other versions
CN111230858A (en)
Inventor
吴朝明
徐晨光
李璠
田伟
张绍泉
王军
汪胜前
邓承志
Current Assignee
Nanchang Institute of Technology
Original Assignee
Nanchang Institute of Technology
Priority date
Filing date
Publication date
Application filed by Nanchang Institute of Technology filed Critical Nanchang Institute of Technology
Priority to CN201910169395.0A priority Critical patent/CN111230858B/en
Publication of CN111230858A publication Critical patent/CN111230858A/en
Application granted granted Critical
Publication of CN111230858B publication Critical patent/CN111230858B/en
Legal status: Active

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 - Vision controlled systems
    • B25J9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 - Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Abstract

The invention discloses a visual robot motion control method based on reinforcement learning, belonging to the technical field of robot control. The method comprises the steps of main imaging data acquisition, branch sub-information acquisition, range space model establishment, movement trajectory strategy formulation, periodic branch information transmission, and real-time correction of the movement path. According to the path change information acquired by the branch sub-probes in step five, the established range space model is supplemented and the movement trajectory information is corrected in real time. In the shared field of view, touch complements vision.

Description

Visual robot motion control method based on reinforcement learning
Technical Field
The invention relates to the technical field of robot control, in particular to a visual robot motion control method based on reinforcement learning.
Background
A visual robot not only takes visual information as input but also processes that information to extract what is useful for the robot. Its basic working principle is as follows: a sensor (such as a camera) converts three-dimensional objects in the physical world into two-dimensional plane images, and the image of the object is output after image processing. In general, a robot needs two types of information to judge the position and shape of an object: distance information and brightness information. Color information is also available as visual information, but it is less important than the first two for recognizing position and shape. A robot vision system depends heavily on lighting; good illumination conditions are usually required so that the image of the object is as clear as possible, detection information is enhanced, and problems such as shadows, low contrast, and specular reflection are overcome.
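As a small worked illustration of this 3D-to-2D conversion, the following sketch projects a point through an assumed pinhole camera model; the focal length, principal point, and test point are illustrative values, not parameters from the patent:

```python
import numpy as np

# Pinhole projection: a 3D point in the camera frame maps to 2D pixel
# coordinates through the intrinsic matrix K. All values are illustrative.

f, cx, cy = 800.0, 320.0, 240.0          # focal length and principal point, px
K = np.array([[f, 0, cx],
              [0, f, cy],
              [0, 0,  1]])

point_cam = np.array([0.2, -0.1, 2.0])   # 3D point in the camera frame, metres
uvw = K @ point_cam                      # homogeneous image coordinates
u, v = uvw[:2] / uvw[2]                  # pixel coordinates on the 2D image
print(u, v)                              # 400.0, 200.0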
Reinforcement learning developed from theories such as animal learning, stochastic approximation, and optimal control. It is an online, unsupervised learning technique that learns a mapping from environmental states to actions, so that an agent follows the policy with the maximum reward value. The agent perceives state information in the environment, searches over strategies (asking which strategy yields the most effective learning), and selects the optimal action; this changes the state and yields a delayed return value, which is used to update the evaluation function. After one learning process is completed, the next round of training begins, and the loop iterates until the overall learning condition is met and learning terminates.
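For concreteness, the following is a minimal tabular Q-learning sketch of the perceive-act-update loop just described; the toy environment, its reward values, and all parameter settings are illustrative assumptions, and the patent itself does not name a specific reinforcement learning algorithm:

```python
import numpy as np

# Minimal tabular Q-learning: perceive a state, select an action, receive a
# delayed return, update the evaluation function, iterate episode by episode.

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))          # the evaluation function
alpha, gamma, epsilon = 0.1, 0.95, 0.1       # learning rate, discount, exploration

def env_step(state, action):
    """Placeholder environment: returns (next_state, reward, done)."""
    next_state = (state + action) % n_states
    reward = 1.0 if next_state == n_states - 1 else -0.01
    return next_state, reward, next_state == n_states - 1

for episode in range(500):                   # one learning process per episode
    state, done = 0, False
    while not done:
        # search the strategy space: explore occasionally, otherwise exploit
        if np.random.rand() < epsilon:
            action = np.random.randint(n_actions)
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = env_step(state, action)
        # update the evaluation function with the delayed return value
        Q[state, action] += alpha * (
            reward + gamma * Q[next_state].max() - Q[state, action]
        )
        state = next_state
```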
In some dangerous environments unsuitable for manual operation, or in situations where human vision cannot easily meet the requirements, machine vision replaces human vision and greatly improves production efficiency and the degree of automation. Machine vision therefore increasingly works in conjunction with robots. At present, however, a robot controller generally has no machine vision function of its own; it can cooperate with machine vision only by developing or purchasing a corresponding machine vision module, and such dedicated modules are usually expensive and tied to a single supplier, which brings a series of difficulties to the use and maintenance of robot equipment. Chinese invention publication No. CN105643624B discloses a machine vision control method, a robot controller, and a robot control system. In that machine vision control method, image acquisition is performed by a network camera and control processing by an embedded dual-core microprocessor comprising a motion core and an application core: the motion core processes PLC commands and motion commands, while the application core performs image processing and sends the result to the motion core. The robot controller is highly integrated, combining machine vision and robot control on the same controller; it is small, low-power, and easy to move, and when applying it a user only needs to purchase a general network camera rather than dedicated machine vision equipment.
Although that technical scheme integrates robot vision and the robot controller on a unified controller, in the motion control process the robot must first locate its own position and posture information, then learn the target position information through machine vision, and then calculate a motion trajectory through an internal system so as to work out the correct movement, including the rotation or swing range of the robot's several motion axis joints. The working environments of some robots, however, are special, such as the seabed or highly corrosive environments, where the robot's own condition and surroundings may change in ways its onboard vision alone cannot capture.
Disclosure of Invention
1. Technical problem to be solved
Aiming at the problems in the prior art, the invention provides a visual robot motion control method based on reinforcement learning. Through a reinforcement learning algorithm and external machine vision probes, the method supplements the field of view of the robot, adjusts the movement trajectory in time, reduces trajectory deviation, improves the accuracy with which the robot follows its movement trajectory, performs self-checking of external changes to the robot, and reduces the influence of environmental corrosion on the robot's motion control. In the shared field of view, touch complements vision.
2. Technical scheme
In order to solve the above problems, the present invention adopts the following technical solutions.
The visual robot motion control method based on reinforcement learning comprises the following steps:
step one: main imaging data acquisition, wherein the robot body uses a camera to acquire information on its own position and on the target position, and records the acquired information in an internal memory;
step two: branch sub-information acquisition, wherein the robot body sends the target position information to the branch sub-probes, the plurality of externally arranged branch sub-probes respectively acquire path information for the robot body and the target position, and the path information is transmitted into the robot body through communication equipment;
step three: range space model establishment, wherein the path information from the branch sub-probes and the position information recorded by the camera of the robot body are filled in and integrated, an algorithm module is called, and the range space model is established;
step four: movement trajectory strategy formulation, wherein a movement trajectory is formulated according to the range space model established in step three;
step five: periodic branch information transmission, wherein the robot body moves according to the movement trajectory specified in step four, within the recording range of the branch sub-probes; the branch sub-probes continuously record the acquired original position information and the real-time updated position information of the robot body, and periodically send a signal into the robot body;
step six: real-time correction of the movement path, wherein the established range space model is supplemented according to the path change information acquired by the branch sub-probes in step five, and the movement trajectory information is corrected in real time. In this way the field of view of the robot is supplemented through the reinforcement learning algorithm and the external machine vision probes, the movement trajectory is adjusted in time, trajectory deviation is reduced, the accuracy with which the robot follows its trajectory is improved, external changes of the robot are self-checked, and the influence of environmental corrosion on the robot's motion control is reduced. In the shared field of view, touch complements vision. The six steps are sketched as a control loop immediately below.
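The following is a minimal, self-contained rendering of the step-one-to-six loop just described, under assumed interfaces: every class, function, and method named here (Robot, Probe, build_range_space_model, and so on) is a hypothetical placeholder for illustration, not an API defined by this patent.

```python
# Toy stand-ins for the robot body and the branch sub-probes; all names are
# hypothetical placeholders invented for this sketch.

class Probe:
    def scan(self, robot, target):            # step two: path information
        return {"robot": robot.pose, "target": target}
    def periodic_report(self):                # step five: periodic transmission
        return {}

class Robot:
    def __init__(self):
        self.pose = (0.0, 0.0)
    def acquire_camera_pose(self):            # step one: main imaging data
        return self.pose
    def at(self, target):
        return self.pose == target
    def follow(self, trajectory):
        self.pose = trajectory.pop(0)

def build_range_space_model(pose, path_info):     # step three (stub)
    return {"pose": pose, "paths": path_info, "updates": []}

def plan_trajectory(model, pose, target):         # step four (stub)
    return [target]

def correct_trajectory(model, target):            # step six (stub)
    return [target]

def control_loop(robot, probes, target):
    own_pose = robot.acquire_camera_pose()
    path_info = [p.scan(robot, target) for p in probes]
    model = build_range_space_model(own_pose, path_info)
    trajectory = plan_trajectory(model, own_pose, target)
    while not robot.at(target):
        robot.follow(trajectory)
        model["updates"].extend(p.periodic_report() for p in probes)
        trajectory = correct_trajectory(model, target)  # real-time correction

control_loop(Robot(), [Probe(), Probe()], (1.0, 2.0))
```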
Furthermore, in the branch sub-information acquisition of step two, the branch sub-probes acquire omnidirectional information on the initial shape of the robot body and transmit the appearance information into the robot body to establish a body model; they subsequently transmit the appearance information of the robot body as the displacement viewing angle changes, supplementing the established robot body model, which makes it convenient to perfect the established appearance model and to detect appearance changes of the robot body.
Furthermore, in the branch sub-information acquisition of step two, the branch sub-probes record the initial time while recording position information changes, and periodically acquire the position information of the robot body thereafter, which makes it convenient to relate position information to time information and to judge the motion posture of the robot body.
Furthermore, the branch sub-probes send the recorded time information together with the position information to the robot body; the position information and time information are integrated and calculated to obtain the displacement speed of the robot body during motion, so that the movement trajectory can be supplementally measured and calculated.
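A short sketch of this speed calculation, under assumed data: timestamped positions reported by the branch sub-probes are differenced to obtain the displacement speed per reporting period (the arrays below are illustrative stand-ins):

```python
import numpy as np

# Finite differences over timestamped positions give the displacement speed.

timestamps = np.array([0.0, 0.5, 1.0, 1.5])             # seconds since initial time
positions = np.array([[0.0, 0.0, 0.0],
                      [0.1, 0.0, 0.0],
                      [0.2, 0.1, 0.0],
                      [0.3, 0.2, 0.1]])                  # metres, world frame

displacements = np.diff(positions, axis=0)               # per-period displacement
dt = np.diff(timestamps)                                 # per-period duration
speeds = np.linalg.norm(displacements, axis=1) / dt      # scalar speed per period
print(speeds)  # prints approx. [0.2, 0.283, 0.346] m/s
```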
Further, when the branch sub-probes detect that the shape change of the robot body no longer matches the original model, the robot body is judged to be deformed, and the posture of the robot's movement trajectory is corrected according to the deformation amount.
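One way such a mismatch check could look, as a hedged sketch: the nearest-neighbour comparison and the tolerance value below are illustrative assumptions, not a test specified by the patent.

```python
import numpy as np

# Compare observed shape points against the stored body model and flag a
# deformation when the mean deviation exceeds a tolerance.

def deformation(model_points: np.ndarray, observed_points: np.ndarray) -> float:
    """Mean distance from each observed point to its nearest model point."""
    # pairwise distances (N_obs x N_model); adequate for small clouds
    d = np.linalg.norm(observed_points[:, None, :] - model_points[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

TOLERANCE = 0.005  # metres; assumed acceptable shape deviation

def check_body(model_points, observed_points):
    amount = deformation(model_points, observed_points)
    # if deformed, the trajectory posture would be corrected by this amount
    return amount > TOLERANCE, amount

if __name__ == "__main__":
    model = np.random.rand(200, 3)
    observed = model + 0.001          # slight, tolerable offset
    print(check_body(model, observed))  # (False, ~0.0017)
```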
Furthermore, an LED light sensing element is arranged at the outer end of the robot body, and an optical touch system is arranged inside the robot body; the information collected by the optical touch system is transmitted into the range space model, which makes it convenient to supplement external information not collected by the branch sub-probes and to establish a more complete and correct range space model.
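A minimal sketch of this touch-vision supplementation, assuming both sources report points in one common frame; the arrays are illustrative stand-ins:

```python
import numpy as np

# Contact points sensed by the optical touch system are appended to the
# visual point cloud so the range space model covers unseen surfaces.

vision_points = np.random.rand(1000, 3)        # cloud from the branch sub-probes
touch_points = np.array([[0.52, 0.10, 0.33],   # contacts sensed by the optical
                         [0.53, 0.11, 0.33]])  # touch system, same world frame

# touch complements vision: merge both sources into one model point set
model_points = np.vstack([vision_points, touch_points])
print(model_points.shape)  # (1002, 3)
```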
Further, in the range space model establishment of step three, the three-dimensional model reconstruction algorithm is a Poisson surface reconstruction algorithm.
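A minimal sketch of the Poisson surface reconstruction step, assuming the Open3D library; the synthetic sphere cloud stands in for the points gathered by the branch sub-probes:

```python
import numpy as np
import open3d as o3d

# Poisson reconstruction turns a point cloud with oriented normals into a
# watertight triangle mesh. The input cloud and file name are illustrative.

pts = np.random.randn(2000, 3)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # points on a unit sphere
pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))
pcd.estimate_normals()                              # Poisson needs oriented normals
pcd.orient_normals_consistent_tangent_plane(k=20)

# depth controls the octree resolution of the reconstructed mesh
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8
)
o3d.io.write_triangle_mesh("range_space_model.ply", mesh)  # illustrative output
```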
Furthermore, in the movement trajectory strategy formulation of step four, a dynamic programming equation is used for the path planning operation.
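The patent does not spell the equation out; the following sketch shows one standard form of it, a Bellman value-iteration recursion on an illustrative occupancy grid, with a greedy descent extracting the planned path:

```python
import numpy as np

# Grid-based dynamic programming: cost-to-go values propagate from the goal
# via V(s) = min_a [cost(s,a) + V(s')], and the path follows steepest descent.
# The grid, obstacle layout, and unit step cost are illustrative assumptions.

FREE, OBSTACLE = 0, 1
grid = np.zeros((10, 10), dtype=int)
grid[3:7, 5] = OBSTACLE
goal = (9, 9)

INF = float("inf")
value = np.full(grid.shape, INF)
value[goal] = 0.0
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]

changed = True
while changed:                         # value iteration until convergence
    changed = False
    for r in range(grid.shape[0]):
        for c in range(grid.shape[1]):
            if grid[r, c] == OBSTACLE or (r, c) == goal:
                continue
            best = min(
                (1.0 + value[r + dr, c + dc])
                for dr, dc in moves
                if 0 <= r + dr < grid.shape[0] and 0 <= c + dc < grid.shape[1]
            )
            if best < value[r, c]:
                value[r, c] = best
                changed = True

# extract a trajectory by greedy descent over the value function
path, cell = [(0, 0)], (0, 0)
while cell != goal:
    r, c = cell
    cell = min(
        ((r + dr, c + dc) for dr, dc in moves
         if 0 <= r + dr < grid.shape[0] and 0 <= c + dc < grid.shape[1]),
        key=lambda s: value[s],
    )
    path.append(cell)
print(path)
```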
Furthermore, a three-dimensional laser scanner is adopted on each branch sub-probe, and the scanner measures in pulsed (time-of-flight) mode.
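Pulsed scanners range by time of flight, so distance follows from range = c * Δt / 2, the pulse travelling to the target and back; a short worked example (the timing value is illustrative):

```python
# Pulsed (time-of-flight) laser ranging.

C = 299_792_458.0          # speed of light, m/s

def pulse_range(round_trip_seconds: float) -> float:
    """Distance to target from the round-trip time of one laser pulse."""
    return C * round_trip_seconds / 2.0

print(pulse_range(66.7e-9))  # approx. 10.0 m for a ~66.7 ns round trip
```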
Furthermore, in the range space model establishment, the acquired robot body information is first used to construct a sparse point cloud through epipolar geometry and factorization; the sparse point cloud is converted into a dense point cloud, texture information is mapped into the mesh model, and the range space model is established.
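A hedged sketch of the sparse-reconstruction stage using OpenCV two-view geometry: feature matching, essential-matrix estimation (the epipolar constraint), pose recovery, and triangulation. Image file names and the intrinsic matrix K are illustrative assumptions, and the densification and texture-mapping stages are omitted.

```python
import cv2
import numpy as np

# Two-view sparse reconstruction: epipolar geometry plus triangulation.

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)   # illustrative inputs
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# epipolar geometry: essential matrix and relative pose of the two views
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# triangulate inlier matches into a sparse point cloud
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
inl = mask.ravel().astype(bool)
pts4d = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
sparse_cloud = (pts4d[:3] / pts4d[3]).T  # N x 3 sparse points
```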
3. Advantageous effects
Compared with the prior art, the invention has the advantages that:
(1) Through the reinforcement learning algorithm and the external machine vision probes, the scheme supplements the field of view of the robot, adjusts the movement trajectory in time, reduces trajectory deviation, improves the accuracy with which the robot follows its movement trajectory, performs self-checking of external changes to the robot, and reduces the influence of environmental corrosion on the robot's motion control. In the shared field of view, touch complements vision.
(2) In the branch sub-information acquisition of step two, the branch sub-probes acquire omnidirectional information on the initial shape of the robot body, transmit the appearance information into the robot body to establish a body model, and subsequently transmit the appearance information of the robot body as the displacement viewing angle changes, supplementing the established robot body model; this makes it convenient to perfect the established appearance model and to detect appearance changes of the robot body.
(3) In the branch sub-information acquisition of step two, the branch sub-probes record the initial time while recording position information changes and periodically acquire the position information of the robot body thereafter, which makes it convenient to relate position information to time information and to judge the motion posture of the robot body.
(4) The branch sub-probes send the recorded time information together with the position information to the robot body; the position information and time information are integrated and calculated to obtain the displacement speed of the robot body during motion, so that the movement trajectory can be supplementally measured and calculated.
(5) When the branch sub-probes detect that the shape change of the robot body no longer matches the original model, the robot body is judged to be deformed, and the posture of the robot's movement trajectory is corrected according to the deformation.
(6) An LED light sensing element is arranged at the outer end of the robot body, and an optical touch system is arranged inside the robot body; the information collected by the optical touch system is transmitted into the range space model, which makes it convenient to supplement external information not collected by the branch sub-probes and to establish a more complete and correct range space model.
(7) In the range space model establishment, the acquired robot body information is first used to construct a sparse point cloud through epipolar geometry and factorization; the sparse point cloud is converted into a dense point cloud, texture information is mapped into a mesh model, and the range space model is established.
Drawings
FIG. 1 is a main flow chart of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention; it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments, and all other embodiments obtained by those skilled in the art without inventive work are within the scope of the present invention.
In the description of the present invention, it should be noted that the terms "upper", "lower", "inner", "outer", "top/bottom", and the like indicate orientations or positional relationships based on orientations or positional relationships shown in the drawings, which are merely for convenience of description and simplification of description, but do not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise specifically stated or limited, the terms "mounted," "disposed," "sleeved," "connected," and the like are used in a broad sense; for example, "connected" may mean fixedly connected, detachably connected, integrally connected, mechanically connected, electrically connected, directly connected, indirectly connected through an intermediate medium, or a communication between the interiors of two elements.
Example 1:
referring to fig. 1, the method for controlling the motion of the visual robot based on reinforcement learning includes:
step one: main imaging data acquisition, wherein the robot body uses a camera to acquire information on its own position and on the target position, and records the acquired information in an internal memory;
step two: branch sub-information acquisition, wherein the robot body sends the target position information to the branch sub-probes, the plurality of peripheral branch sub-probes respectively acquire path information for the robot body and the target position, and the path information is transmitted into the robot body through communication equipment;
step three: range space model establishment, wherein the path information from the branch sub-probes and the position information recorded by the camera of the robot body are filled in and integrated, an algorithm module is called, and the range space model is established; the three-dimensional model reconstruction algorithm used is a Poisson surface reconstruction algorithm;
step four: movement trajectory strategy formulation, wherein a movement trajectory is formulated according to the range space model established in step three, and a dynamic programming equation is used for the path planning operation;
step five: periodic branch information transmission, wherein the robot body moves according to the movement trajectory specified in step four, within the recording range of the branch sub-probes; the branch sub-probes continuously record the acquired original position information and the real-time updated position information of the robot body, and periodically send a signal into the robot body;
step six: real-time correction of the movement path, wherein the established range space model is supplemented according to the path change information acquired by the branch sub-probes in step five, and the movement trajectory information is corrected in real time. In this way the field of view of the robot is supplemented through the reinforcement learning algorithm and the external machine vision probes, the movement trajectory is adjusted in time, trajectory deviation is reduced, the accuracy with which the robot follows its trajectory is improved, external changes of the robot are self-checked, and the influence of environmental corrosion on the robot's motion control is reduced. In the shared field of view, touch complements vision.
In the branch sub-information acquisition of step two, the branch sub-probes acquire omnidirectional information on the initial shape of the robot body and transmit the appearance information into the robot body to establish a body model; they subsequently transmit the appearance information of the robot body as the displacement viewing angle changes, supplementing the established robot body model, which makes it convenient to perfect the established appearance model and to detect appearance changes of the robot body.
In the branch sub-information acquisition of step two, the branch sub-probes record the initial time while recording position information changes, and periodically acquire the position information of the robot body thereafter, which makes it convenient to relate position information to time information and to judge the motion posture of the robot body.
The branch sub-probes send the recorded time information together with the position information to the robot body; the position information and time information are integrated and calculated to obtain the displacement speed of the robot body during motion, so that the movement trajectory can be supplementally measured and calculated.
When the branch sub-probes detect that the shape change of the robot body no longer matches the original model, the robot body is judged to be deformed, and the posture of the robot's movement trajectory is corrected according to the deformation amount.
An LED light sensing element is arranged at the outer end of the robot body, and an optical touch system is arranged inside the robot body; the information collected by the optical touch system is transmitted into the range space model, which makes it convenient to supplement external information not collected by the branch sub-probes and to establish a more complete and correct range space model.
A three-dimensional laser scanner is adopted on each branch sub-probe, and the scanner measures in pulsed mode. In the range space model establishment, the acquired robot body information is first used to construct a sparse point cloud through epipolar geometry and factorization; the sparse point cloud is converted into a dense point cloud, texture information is mapped into the mesh model, and the range space model is established.
Through the reinforcement learning algorithm and the external machine vision probes, the invention supplements the field of view of the robot, adjusts the movement trajectory in time, reduces trajectory deviation, improves the accuracy with which the robot follows its movement trajectory, performs self-checking of external changes to the robot, and reduces the influence of environmental corrosion on the robot's motion control. In the shared field of view, touch complements vision.
The above are merely preferred embodiments of the present invention; the scope of the invention is not limited thereto. Any equivalent substitution or modification that a person skilled in the art can readily conceive within the technical scope of the present disclosure shall fall within the protection scope of the present invention.

Claims (8)

1. The visual robot motion control method based on reinforcement learning, characterized in that the method comprises the following steps:
step one: main imaging data acquisition, wherein the robot body uses a camera to acquire information on its own position and on the target position, and records the acquired information in an internal memory;
step two: branch sub-information acquisition, wherein the robot body sends the target position information to the branch sub-probes, the plurality of externally arranged branch sub-probes respectively acquire path information for the robot body and the target position, and the path information is transmitted into the robot body through communication equipment;
step three: range space model establishment, wherein the path information from the branch sub-probes and the position information recorded by the camera of the robot body are filled in and integrated, an algorithm module is called, and the range space model is established;
step four: movement path strategy formulation, wherein a movement trajectory is formulated according to the range space model established in step three;
step five: periodic branch information transmission, wherein the robot body moves according to the movement path specified in step four, within the recording range of the branch sub-probes; the branch sub-probes continuously record the acquired original position information and the real-time updated position information of the robot body, and periodically send a signal into the robot body;
step six: real-time correction of the movement path, wherein the established range space model is supplemented according to the path change information acquired by the branch sub-probes in step five, and the movement trajectory information is corrected in real time;
in the branch sub-information acquisition of step two, the branch sub-probes acquire omnidirectional information on the initial shape of the robot body, transmit the appearance information into the robot body to establish a body model, and subsequently transmit the appearance information of the robot body as the displacement viewing angle changes, supplementing the established robot body model; when the branch sub-probes detect that the shape change of the robot body no longer matches the original model, the robot body is judged to be deformed, and the posture of the robot's movement path is corrected according to the deformation.
2. The reinforcement learning-based visual robot motion control method according to claim 1, characterized in that: in the branch sub-information acquisition of step two, the branch sub-probes record the initial time while recording position information changes, and periodically acquire the position information of the robot body thereafter.
3. The reinforcement learning-based visual robot motion control method according to claim 2, characterized in that: the branch sub-probes send the recorded time information together with the position information to the robot body, and the position information and time information are integrated and calculated to obtain the displacement speed of the robot body during motion, so that the movement path is supplementally measured and calculated.
4. The reinforcement learning-based visual robot motion control method according to claim 1, characterized in that: the outer end of the robot body is provided with an LED light sensing element, the robot body is internally provided with an optical touch system, and information collected by the optical touch system is transmitted into the range space model.
5. The reinforcement learning-based visual robot motion control method according to claim 1, characterized in that: in the range space model establishment of step three, the three-dimensional model reconstruction algorithm is a Poisson surface reconstruction algorithm.
6. The reinforcement learning-based visual robot motion control method according to claim 1, characterized in that: in the movement trajectory strategy formulation of step four, a dynamic programming equation is used for the path planning operation.
7. The reinforcement learning-based visual robot motion control method according to claim 1, characterized in that: a three-dimensional laser scanner is adopted on each branch sub-probe, and the measuring mode adopted by the three-dimensional laser scanner is pulse type.
8. The visual robot motion control method based on reinforcement learning according to claim 5, characterized in that: in the range space model establishment, the acquired robot body information is first used to construct a sparse point cloud through epipolar geometry and factorization, the sparse point cloud is converted into a dense point cloud, texture information is mapped into a mesh model, and the range space model is established.
CN201910169395.0A 2019-03-06 2019-03-06 Visual robot motion control method based on reinforcement learning Active CN111230858B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910169395.0A CN111230858B (en) 2019-03-06 2019-03-06 Visual robot motion control method based on reinforcement learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910169395.0A CN111230858B (en) 2019-03-06 2019-03-06 Visual robot motion control method based on reinforcement learning

Publications (2)

Publication Number Publication Date
CN111230858A CN111230858A (en) 2020-06-05
CN111230858B 2022-11-22

Family

ID=70879330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910169395.0A Active CN111230858B (en) 2019-03-06 2019-03-06 Visual robot motion control method based on reinforcement learning

Country Status (1)

Country Link
CN (1) CN111230858B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7085622B2 * 2002-04-19 2006-08-01 Applied Materials, Inc. Vision system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102895092A (en) * 2011-12-13 2013-01-30 冷春涛 Multi-sensor integration based three-dimensional environment identifying system for walker aid robot
CN104656676A (en) * 2014-12-25 2015-05-27 北京理工大学 Hand, leg and eye servo control device and method for humanoid robot
CN104848851A (en) * 2015-05-29 2015-08-19 山东鲁能智能技术有限公司 Transformer substation patrol robot based on multi-sensor data fusion picture composition and method thereof
CN108274469A (en) * 2018-04-23 2018-07-13 东北大学 Vacuum mechanical-arm anticollision detecting system and detection method based on multidimensional vision sensor

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Autonomous Navigation of a Tracked Robot Based on Multiple Sensors in Outdoor Environments"; Han Wei; China Master's Theses Full-text Database, Information Science and Technology; 2018-04-15; full text *
Robust tracking for camera control on an irregular terrain vehicle; J. Ding, H. Kondou et al.; Proceedings of the 41st SICE Annual Conference (SICE 2002); 2003-04-29; full text *
"Research on Mobile Robot Grasping Based on Reinforcement Learning and Dynamic Movement Primitives"; Hu Yingbai; China Master's Theses Full-text Database, Information Science and Technology; 2018-07-15; full text *

Also Published As

Publication number Publication date
CN111230858A (en) 2020-06-05

Similar Documents

Publication Publication Date Title
CN108724190A Simulation method and device for a digital twin system of an industrial robot
CN100573389C Control device for a quadruped bionic robot
CN106091961A (en) High-rate laser inner diameter measurement system
Zou et al. An end-to-end calibration method for welding robot laser vision systems with deep reinforcement learning
CN110211180A Autonomous grasping method for a mechanical arm based on deep learning
CN113189977B (en) Intelligent navigation path planning system and method for robot
CN112288823B (en) Calibration method of standard cylinder curved surface point measuring equipment
CN107246866A (en) A kind of high-precision six-freedom degree measuring system and method
CN109465830B (en) Robot monocular stereoscopic vision calibration system and method
CN109129492A An industrial robot platform for dynamic capture
CN109465829A (en) A kind of industrial robot geometric parameter discrimination method based on transition matrix error model
Lu et al. Automatic 3D seam extraction method for welding robot based on monocular structured light
CN110039536A Autonomous navigation robot system and image matching method for indoor map construction and positioning
CN107421466A (en) A kind of synchronous acquisition device and acquisition method of two and three dimensions image
CN109752724A (en) A kind of image laser integral type navigation positioning system
CN100523382C Method for measuring position and attitude of a moving object based on dual image sensors, suitable for bridge-top construction
CN111230858B (en) Visual robot motion control method based on reinforcement learning
CN203772217U (en) Non-contact type flexible online dimension measuring device
CN113681559B (en) Line laser scanning robot hand-eye calibration method based on standard cylinder
CN112001945B (en) Multi-robot monitoring method suitable for production line operation
Suzuki Proximity-based non-contact perception and omnidirectional point-cloud generation based on hierarchical information on fingertip proximity sensors
CN201124413Y (en) Control device of four-footed bionic robot
CN107942748B (en) Mechanical arm space dynamic obstacle avoidance induction bracelet and control system
CN1672881A On-line robot hand-eye calibration method based on motion selection
CN112013868B (en) Adaptive parameter police dog attitude estimation method based on visual inertial navigation odometer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant