CN116619376A - Robot teaching control method based on virtual vision - Google Patents

Robot teaching control method based on virtual vision

Info

Publication number
CN116619376A
Authority
CN
China
Prior art keywords
coordinate
virtual
information
control
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310662882.7A
Other languages
Chinese (zh)
Other versions
CN116619376B (en)
Inventor
简志雄
武交峰
张晓鹏
李梓满
张维威
罗瑜清
欧俊杰
车海波
卢冬雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Vocational College of Environmental Protection Engineering
Original Assignee
Guangdong Vocational College of Environmental Protection Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Vocational College of Environmental Protection Engineering filed Critical Guangdong Vocational College of Environmental Protection Engineering
Priority to CN202310662882.7A priority Critical patent/CN116619376B/en
Publication of CN116619376A publication Critical patent/CN116619376A/en
Application granted granted Critical
Publication of CN116619376B publication Critical patent/CN116619376B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/1605Simulation of manipulator lay-out, design, modelling of manipulator
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Numerical Control (AREA)

Abstract

The application discloses a robot teaching control method based on virtual vision. A virtual teaching environment is built at a teaching end; a control end acquires information of the robot and the workpiece in the virtual teaching environment; a modeling end reconstructs a three-dimensional model of the workpiece from this information, acquires an image of the model, and calibrates the coordinates of the workpiece in the camera coordinate system. The control end responds to selection and trigger instructions on a control sub-interface by acquiring calibration data and the model image, constructing a mapping between the calibration data and the robot coordinates, and converting the pose of the workpiece from the workpiece coordinate system to the robot coordinate system according to the model image and the mapping relation. The teaching end then controls the robot to clamp the workpiece according to the converted pose so as to complete teaching. The teaching process of the robot is controlled and displayed through interactive display technology, which reduces the teaching and learning difficulty, ensures the consistency of data, accurately registers the pose of the workpiece to the actually clamped position, and improves the teaching effect.

Description

Robot teaching control method based on virtual vision
Technical Field
The application relates to the technical field of robot simulation control, in particular to a robot teaching control method based on virtual vision.
Background
ABB robots are highly efficient, flexible and intelligent, and are widely applied in the industrial and service fields. Depending on the task requirements, robotic applications typically need to integrate one or more end effectors to interact with the work object and drive the end effectors along a specified spatial trajectory. The motion trajectory of the end effector is generated through a robot teaching process. Currently, robot teaching mainly comprises three methods: direct teaching, drag teaching and offline programming. Direct teaching and drag teaching are the traditional methods, while offline programming is an emerging method that can overcome their shortcomings. However, the existing offline-programming teaching methods still have the following defects:
(1) offline-programming teaching usually presents the robot's actions as source code, so the taught actions are not intuitive and are not conducive to learning;
(2) current teaching generally recognizes a workpiece in the workpiece coordinate system with visual recognition software and provides the recognized workpiece coordinates directly to the robot to achieve clamping. Firstly, the existing visual recognition process has a perspective defect, which causes the recognized coordinate values to deviate from the actual values; secondly, the operating reference frame of the robot is the joint coordinate system or the tool coordinate system, whereas the coordinate system of the visual recognition software is the camera coordinate system, so the workpiece coordinate data obtained by the robot also deviate, and the obtained coordinates differ from the position actually clamped. Because the accuracy of the three-dimensional model of the workpiece and its pose relative to the robot influence the teaching effect, if the position of the workpiece in the workpiece coordinate system cannot be registered to the position that actually needs to be clamped, the clamping operation of the robot will contain errors, the robot cannot clamp the workpiece according to the preset program, and the expected teaching effect cannot be achieved.
(3) when teaching is carried out, the user usually has to open several pieces of software, such as robot simulation software and programming software, on the terminal, which easily increases the load on the terminal, lowers its operating efficiency and, in severe cases, causes it to crash. In addition, the pieces of software involved in teaching do not perform data interaction and mapping, so when the user operates several of them, data errors or disorder easily occur and affect the teaching effect.
(4) for beginners, software such as robot simulation software, three-dimensional modeling software and programming software is difficult to operate, and beginners are often unskilled or unable to operate it. This undoubtedly increases the difficulty and burden of teaching, makes it harder for learners to learn robot control, and prevents learners from quickly becoming hands-on in operating and controlling the robot.
These technical problems remain to be solved.
Disclosure of Invention
The application aims to provide a robot teaching control method based on virtual vision, which solves one or more technical problems in the prior art and at least provides a beneficial selection or creation condition.
The application solves the technical problems as follows: in a first aspect, the present application provides a robot teaching control method based on virtual vision, including the steps of:
Building a virtual teaching environment, wherein a virtual platform and a virtual robot are arranged in the virtual teaching environment, a plurality of virtual workpieces are randomly generated, and the virtual workpieces are placed on the virtual platform;
and acquiring second coordinate information sent by a control end, controlling the virtual robot to grasp the virtual workpiece placed on the virtual platform according to the second coordinate information, and performing simulation teaching.
In a second aspect, the present application provides a robot teaching control method based on virtual vision, comprising the steps of:
acquiring pose information of the virtual workpiece sent by a control end, constructing a three-dimensional perspective model matched with the virtual workpiece, and adjusting the pose of the three-dimensional perspective model according to the pose information;
and acquiring a top-view image of the three-dimensional perspective model through a virtual camera, generating a workpiece perspective image, and calibrating coordinate information of the three-dimensional perspective model under a camera coordinate system.
In a third aspect, the present application provides a robot teaching control method based on virtual vision, including the steps of:
displaying a main interface, wherein the main interface comprises a teaching communication area, a modeling communication area and a control area;
responding to a selected instruction of the teaching communication area, constructing connection with a teaching end, and acquiring first coordinate information of the virtual robot and pose information of the virtual workpiece;
The first coordinate information comprises axis coordinate information of the virtual robot under a robot coordinate system, and the pose information of the virtual workpiece comprises axis coordinate information of the virtual workpiece under a workpiece coordinate system;
responding to a selected instruction of the modeling communication area, constructing connection with a modeling end, and transmitting the virtual workpiece and pose information thereof to the modeling end;
responding to a selected instruction of the control area, displaying a control sub-interface, wherein the control sub-interface comprises a calibration setting area, a binding area, a configuration area and an operation control, the binding area comprises a binding boundary surface control, and the configuration area comprises a calibration selection sub-area and an execution setting control;
responding to a selected instruction of the calibration setting area, acquiring calibration data and a workpiece perspective image corresponding to the selected instruction of the execution setting area, wherein the calibration data comprises a plurality of coordinate calibration templates, each coordinate calibration template corresponds to one virtual workpiece, and the workpiece perspective image is a top view image of a three-dimensional perspective model of the virtual workpiece;
displaying a data binding window in response to a trigger instruction for binding a boundary control, mapping the coordinate calibration template into the first coordinate information through the data binding window, and displaying the control sub-interface in response to a trigger instruction for exiting the control in the data binding window;
Displaying a coordinate calibration template corresponding to the selection instruction of the calibration selection subarea in response to the selection instruction of the calibration selection subarea, and determining an execution mode corresponding to the trigger instruction of the execution setting control in response to the trigger instruction of the execution setting control;
and responding to a trigger instruction of the operation control, converting the coordinate information of the virtual workpiece under the camera coordinate system into the coordinate information under the robot coordinate system by utilizing a coordinate calibration template corresponding to the selection instruction of the calibration selection sub-region according to the execution mode, and outputting the coordinate information of the virtual workpiece under the robot coordinate system to the teaching end as second coordinate information.
The beneficial effects of the application are as follows. A robot teaching control method based on virtual vision is provided which combines interactive display technology to control and intuitively display the teaching process of the robot, effectively reducing the teaching difficulty for the demonstrator and the learning difficulty for the learner. At the same time, data interaction and mapping among the several pieces of software are realized, which ensures the consistency of the data, avoids data disorder or errors, and reduces the adverse effect of data problems on the teaching effect. In addition, the virtual workpiece is converted from the workpiece coordinate system to the robot coordinate system by way of data mapping, so that the pose of the workpiece is accurately registered to the actually clamped position, the accuracy of the robot's workpiece clamping is improved, the teaching effect is improved while a visual effect is provided, and the teaching process can be conveniently observed by the learner.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
FIG. 1 is a flow chart of a teaching control method applied to a control end provided by the application;
FIG. 2 is a schematic diagram of a virtual teaching environment built by a teaching end provided by the application;
FIG. 3 is a schematic diagram of simulation teaching of a teaching end provided by the application;
FIG. 4 is a schematic diagram of a modeling interface of a modeling end provided by the present application;
FIG. 5 is a schematic diagram of a communication interface of a modeling terminal according to the present application;
FIG. 6 is a schematic diagram of a control sub-interface provided by the present application;
FIG. 7 is a schematic diagram of a data binding window provided by the present application;
FIG. 8 is a schematic diagram of a connection sub-interface provided by the present application;
FIG. 9 is a schematic diagram of a read-write sub-interface provided by the present application;
FIG. 10 is a schematic diagram of a read sub-interface provided by the present application;
fig. 11 is a schematic diagram of a communication sub-interface provided in the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The application will be further described with reference to the drawings and specific examples. The described embodiments should not be taken as limitations of the present application, and all other embodiments that would be obvious to one of ordinary skill in the art without making any inventive effort are intended to be within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Robots are highly efficient, flexible and intelligent, and are therefore widely applied in the industrial and service fields. ABB robots have been successfully used in production automation in the automotive, food, pharmaceutical and other industries for decades, and have become the primary automated equipment for handling, assembly, spraying, packaging and similar tasks. Depending on the task requirements, robotic applications typically need to integrate one or more end effectors to interact with the work object and drive the end effectors along a specified spatial trajectory. The motion trajectory of the end effector is generated through a robot teaching process.
Currently, robot teaching mainly comprises three methods: direct teaching, drag teaching and offline programming. In direct teaching, an operator directly operates the robot through a teach pendant to drive the end effector to the required pose and records the motion trajectory of the robot to complete teaching. In drag teaching, the robot is placed under manual control so that the operator can drag its end effector to the specified pose to complete teaching. Direct teaching and drag teaching are the traditional teaching modes; they teach the robot by repeatedly reproducing a robot path and usually require the operator to manipulate and observe the robot at close range, which brings safety hazards, long teaching time and low efficiency, occupies the robot's operating time and interferes with its normal operation.
Offline programming is an emerging teaching mode. First, a three-dimensional virtual environment of the working scene is reconstructed in robot simulation software and the pose of the robot in this virtual environment is given; then, according to factors such as the size, shape and material of the machined workpiece, a three-dimensional solid model of the workpiece is generated with three-dimensional modeling software; next, based on the solid model and the pose of the robot, a program for the motion trajectory along which the robot clamps the workpiece is written offline with programming software; finally, the program is input into the robot simulation software to complete the simulation teaching of the robot.
Although the off-line programming teaching mode can well overcome the defects of the traditional teaching mode, the off-line programming teaching mode still has the following defects:
(1) offline-programming teaching usually presents the robot's actions as source code, so the taught actions are not intuitive and are not conducive to learning;
(2) ABB robots mainly use four types of coordinate systems: the world coordinate system (base coordinate system), the joint coordinate system, the tool coordinate system and the workpiece coordinate system. The world coordinate system is a coordinate system that takes the earth as a reference. The joint coordinate system is set in a joint of the robot; the position and posture of the robot in it are determined relative to the joint coordinate system on the base side of each joint. The tool coordinate system is a rectangular coordinate system defined by the tool center point and the tool posture; a special part such as a gripper is usually fixed to the end effector of an industrial robot, and a coordinate system established at a fixed position on this tool is the tool coordinate system. When the tool coordinate system is undefined, it is typically replaced by the mechanical interface coordinate system. The workpiece coordinate system, also called the user coordinate system, is the coordinate system established on the workpiece or platform being operated.
Current teaching generally recognizes a workpiece in the workpiece coordinate system with visual recognition software and provides the recognized workpiece coordinates directly to the robot to achieve clamping. Firstly, the existing visual recognition process has a perspective defect, which causes the recognized coordinate values to deviate from the actual values; secondly, the operating reference frame of the robot is the joint coordinate system or the tool coordinate system, whereas the coordinate system of the visual recognition software is the camera coordinate system, so the workpiece coordinate data obtained by the robot also deviate, and the obtained coordinates differ from the position actually clamped. Because the accuracy of the three-dimensional model of the workpiece and its pose relative to the robot influence the teaching effect, if the position of the workpiece in the workpiece coordinate system cannot be registered to the position that actually needs to be clamped, the clamping operation of the robot will contain errors, the robot cannot clamp the workpiece according to the preset program, and the expected teaching effect cannot be achieved.
(3) when teaching is carried out, the user usually has to open several pieces of software, such as robot simulation software and programming software, on the terminal, which easily increases the load on the terminal, lowers its operating efficiency and, in severe cases, causes it to crash. In addition, the pieces of software involved in teaching do not perform data interaction and mapping, so when the user operates several of them, data errors or disorder easily occur and affect the teaching effect.
(4) for beginners, software such as robot simulation software, three-dimensional modeling software and programming software is difficult to operate, and beginners are often unskilled or unable to operate it. This undoubtedly increases the difficulty and burden of teaching, makes it harder for learners to learn robot control, and prevents learners from quickly becoming hands-on in operating and controlling the robot.
Aiming at the problems to be solved, the application provides a robot teaching control method and device based on virtual vision, which are mainly applied to teaching of an ABB robot. The robot teaching control method provided by the application is applied to a teaching platform, namely a teaching device. The teaching device comprises a teaching end, a control end and a modeling end, wherein the teaching end is a terminal carrying ROBOT STUDIO software, the control end is a terminal carrying UNITY 3D software, and the modeling end is a terminal carrying Vision Master software. Optionally, the teaching end, the control end and the modeling end are connected through TCP/IP protocol.
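The patent only states that the teaching end (ROBOT STUDIO), the control end (UNITY 3D) and the modeling end (Vision Master) are connected through the TCP/IP protocol; it does not disclose a message format. As a minimal sketch under that assumption, and using a simple newline-delimited text exchange (the serve and handler names are illustrative), the control end could accept connections from the other two ends as follows:

```python
# Minimal sketch of a control-end TCP listener (assumed newline-delimited text protocol;
# the patent specifies only that the three ends are connected via TCP/IP).
import socket
import threading

def _session(conn, handler):
    # Answer one reply line per received line.
    with conn, conn.makefile("rw", encoding="utf-8", newline="\n") as stream:
        for line in stream:
            stream.write(handler(line.strip()) + "\n")
            stream.flush()

def serve(host, port, handler):
    """Accept connections from the teaching/modeling ends and answer via handler()."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    while True:
        conn, _addr = srv.accept()
        threading.Thread(target=_session, args=(conn, handler), daemon=True).start()

# Example (the address and port follow the values shown later in fig. 5):
# serve("192.168.8.200", 3000, lambda msg: "ack:" + msg)
```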
The robot teaching control method provided by the application is mainly divided into three parts: the teaching control method applied to the teaching end, the teaching control method applied to the modeling end and the teaching control method applied to the control end.
The teaching control method applied to the teaching end can include, but is not limited to, the following steps.
S101, building a virtual teaching environment.
In this step, a teaching interface is displayed on the teaching end and is used to display the built virtual teaching environment. The virtual teaching environment is the scene in which the ABB robot actually operates; a virtual platform and a virtual robot are arranged in it, and the parameters of the virtual robot are the same as those of the physical ABB robot.
S102, randomly generating a plurality of virtual workpieces and placing the virtual workpieces on a virtual platform.
In this step, referring to the schematic diagram of the virtual teaching environment shown in fig. 2, the teaching end randomly generates a plurality of virtual workpieces through its built-in SMART component and places them randomly on the virtual platform. The pose of each virtual workpiece is its pose in the workpiece coordinate system. A hedged sketch of such random pose generation is given below.
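The patent states only that the SMART component generates a plurality of virtual workpieces at random and places them on the virtual platform; uniform placement over the platform surface with a random rotation about the vertical axis is an assumption made here purely for illustration:

```python
# Sketch of random workpiece pose generation (assumed: uniform placement on the platform
# and a random rotation about the vertical axis; the patent does not specify the distribution).
import random
from dataclasses import dataclass

@dataclass
class WorkpiecePose:
    x: float   # position on the platform, workpiece coordinate system
    y: float
    z: float   # platform surface height
    rz: float  # rotation about the vertical axis, degrees

def random_workpieces(n, platform_w, platform_d, z):
    return [
        WorkpiecePose(
            x=random.uniform(0.0, platform_w),
            y=random.uniform(0.0, platform_d),
            z=z,
            rz=random.uniform(0.0, 360.0),
        )
        for _ in range(n)
    ]

print(random_workpieces(3, 400.0, 300.0, 0.0))
```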
S103, acquiring second coordinate information sent by the control end, controlling the virtual robot to grasp the virtual workpiece placed on the virtual platform according to the second coordinate information, and performing simulation teaching.
The second coordinate information is the coordinate of the virtual workpiece in the robot coordinate system.
In this step, referring to fig. 3, the teaching interface is further used to display a teaching process in which the virtual robot grips a virtual workpiece placed on the virtual platform.
In this embodiment, the main functions of the teaching end are to provide a virtual environment for simulation teaching and to control the constructed virtual robot to perform the simulation teaching according to the coordinate information registered by the control end and a pre-written program. Optionally, the program is written in advance in RAPID.
The teaching control method applied to the modeling end can comprise, but is not limited to, the following steps.
S201, pose information of the virtual workpiece, which is sent by the control end, is obtained, the pose information of the virtual workpiece is pose information of the virtual workpiece under a workpiece coordinate system, a three-dimensional perspective model matched with the virtual workpiece is constructed, and the pose of the three-dimensional perspective model is adjusted according to the pose information.
In this step, a modeling interface is displayed at the modeling end; it displays the built modeling environment, which is the same as the virtual teaching environment. The modeling interface comprises a picture setting menu control. The modeling end obtains the pose information of the virtual workpiece sent by the control end, i.e. the pose of the virtual workpiece under the workpiece coordinate system, builds a three-dimensional model identical to the virtual workpiece in the modeling environment according to this pose, and displays the model in the modeling interface, as shown in fig. 4. The pose of the three-dimensional perspective model is then adjusted so that its coordinate position is the same as that of the virtual workpiece, and the model is rotated so that its posture is consistent with that of the virtual workpiece, which ensures the accuracy of the pose image acquired subsequently.
The aim here is to separate the workpiece coordinate system from the virtual teaching environment so that the virtual workpiece under the workpiece coordinate system becomes an independent entity, which makes it convenient to determine the pose of the virtual workpiece in the workpiece coordinate system and to convert the reference frame of the virtual workpiece into a common camera coordinate system.
Optionally, the three-dimensional model is a perspective model.
S202, responding to a selected instruction of the picture setting menu control, acquiring a top-view image of the three-dimensional perspective model through a virtual camera, generating a workpiece perspective image, and calibrating the coordinate information of the three-dimensional perspective model under the camera coordinate system.
It should be noted that, the parameters of the virtual camera, such as the focal length of the lens and the installation height in the modeling environment, are consistent with the parameters of the camera device in practical application. The perspective image of the workpiece is a pose image with perspective effect and can be stored in a designated path.
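Calibrating the model's coordinates in the camera coordinate system from a top-view image can be pictured with the usual pinhole model; the intrinsic parameters below are illustrative values, not values taken from the patent:

```python
# Sketch: projecting a point of the 3-D perspective model into the top-view image
# (assumed pinhole camera; fx, fy, cx, cy are illustrative, not parameters from the patent).
import numpy as np

def project_top_down(p_cam, fx=1200.0, fy=1200.0, cx=640.0, cy=480.0):
    """p_cam: (x, y, z) of the model point in the camera frame, z = depth along the optical axis."""
    x, y, z = p_cam
    return np.array([fx * x / z + cx, fy * y / z + cy])

# e.g. a workpiece centre 0.8 m below the camera and 50 mm off-axis:
print(project_top_down(np.array([0.05, -0.02, 0.80])))
```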
In this step, because the center point of the workpiece coordinate system moves with the virtual workpiece, the workpiece coordinate system changes whenever the workpiece is placed differently, which greatly increases the difficulty of calibrating the virtual workpiece from the workpiece coordinate system to the robot coordinate system. Therefore, the application calibrates the coordinates of the virtual workpiece under the camera coordinate system by acquiring an image, so as to convert the virtual workpiece from the workpiece coordinate system to the camera coordinate system and thereby fix the center point of the virtual workpiece's reference frame as the center point of the camera coordinate system.
Converting the virtual workpiece from the workpiece coordinate system to the camera coordinate system specifically comprises: establishing a mapping relation between the workpiece coordinate system and the camera coordinate system, and converting the workpiece coordinates of the virtual workpiece into camera coordinates by calling this mapping relation.
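As a hedged sketch of such a mapping relation: if the virtual camera's placement in the modeling environment is known, the mapping can be expressed as a rigid homogeneous transform (the matrix and the 0.8 m mounting height below are illustrative assumptions, not values from the patent):

```python
# Sketch of the workpiece-frame to camera-frame mapping as a rigid transform
# (the patent only says a mapping relation is established and then called).
import numpy as np

def make_transform(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_camera(p_wobj, T_cam_wobj):
    """Convert a point from the workpiece coordinate system into the camera coordinate system."""
    return (T_cam_wobj @ np.append(p_wobj, 1.0))[:3]

# Illustrative: camera looking straight down from 0.8 m above the workpiece-frame origin.
R_down = np.diag([1.0, -1.0, -1.0])                 # camera z-axis points at the platform
T_cam_wobj = make_transform(R_down, np.array([0.0, 0.0, 0.8]))
print(to_camera(np.array([0.05, 0.02, 0.0]), T_cam_wobj))
```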
In this embodiment, the modeling end is mainly used for constructing the three-dimensional model identical to the virtual workpiece and providing a top perspective image of the three-dimensional model, so that the subsequent control end can complete data mapping according to the top perspective image and the parameter information of the virtual robot, and further complete the conversion of the coordinate system.
Further, the method may further include the following steps before S201:
d1, displaying a modeling interface, wherein the modeling interface further comprises a communication control.
And D2, responding to a triggering instruction to the communication control, and entering a communication interface.
In this step, the user may enter the communication interface shown in fig. 5 by triggering the communication control. The communication interface comprises an IP address display area, a port display area, a received information display area, a transmitted information display area and a connection state display area, and further comprises an upper computer connection control, an information transmission control and an upper computer disconnection control.
And D3, in response to the information input in the IP address display area and the port display area, displaying fourth content information in the IP address display area and the port display area, wherein the fourth content information is the IP address and the port number of the control end.
In this step, the user inputs the IP address and port number of the control end through the IP address display area and the port display area. As shown in fig. 5, the IP address input by the user is 192.168.8.200 and the port number is 3000; the modeling end responds to this input and displays the values in the IP address display area and the port display area respectively.
And D4, responding to a trigger instruction for connecting the upper computer control, constructing communication connection of the control end corresponding to the fourth content information, displaying the connection state of the modeling end in a connection state display area, and displaying first test information sent by the control end in a received information display area.
In the application, the connection between the control end and the modeling end can be actively initiated by either of them. In this step, the end that actively initiates the connection is the modeling end, whereas in S3032 described above it is the control end. When the modeling end and the control end are connected successfully, the connection state display area shows the connection state "connection successful"; when the connection fails, it shows "connection failed", as shown in fig. 5.
And D5, responding to the information input in the transmission information display area, and displaying second test information in the transmission information display area, wherein the second test information is used for testing whether the control terminal can receive the information transmitted by the modeling terminal.
And D6, responding to a trigger instruction for disconnecting the control of the upper computer, and disconnecting the control end.
Optionally, the communication interface further includes a return modeling control, and the method further includes the following step after D2: in response to a trigger instruction for the return modeling control, displaying the modeling interface shown in FIG. 4. In this embodiment, as shown in FIG. 5, the return modeling control is the ">" symbol.
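From the modeling end's side, steps D3 to D5 amount to opening a TCP connection to the control end and exchanging the test messages. A minimal client sketch, assuming the same newline-delimited text protocol sketched earlier (the function name and message strings are illustrative):

```python
# Sketch of the modeling end connecting to the control end (steps D3-D5).
import socket

def connect_and_test(ip="192.168.8.200", port=3000):
    """Connect to the control end, send second test information, return the control end's reply."""
    with socket.create_connection((ip, port), timeout=5) as conn:
        conn.sendall(b"modeling-end test message\n")      # second test information (D5)
        return conn.recv(1024).decode("utf-8").strip()    # first test information from the control end (D4)

# print(connect_and_test())
```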
Referring to fig. 1, the teaching control method applied to the control end may include, but is not limited to, the following steps.
S301, displaying a main interface, wherein the main interface comprises a teaching communication area, a modeling communication area and a control area.
In this step, the main interface of the control end is displayed on the control end. In the main interface, the connect-robot-controller area, the read-write-robot-program-data area and the read-robot-system-data area are combined to form the teaching communication area; the Unity communication area is the modeling communication area, and the VM interface area is the control area.
S302, responding to a selected instruction of the teaching communication area, constructing connection with a teaching end, and acquiring first coordinate information of the virtual robot and pose information of the virtual workpiece.
In the step, a user can construct the connection between the teaching end and the control end and perform double-end data interaction by selecting the teaching communication area.
The first coordinate information includes the axis coordinate information of the virtual robot in the robot coordinate system. The robot coordinate system here refers to the joint coordinate system and the tool coordinate system, and the axis coordinates are the X-axis, Y-axis and Z-axis coordinates. The pose information of the virtual workpiece refers to the coordinate information of the virtual workpiece under the workpiece coordinate system, which likewise comprises X-axis, Y-axis and Z-axis coordinates.
S303, responding to a selected instruction of the modeling communication area, constructing connection with the modeling end, and transmitting the virtual workpiece and pose information thereof to the modeling end so that the modeling end builds a three-dimensional perspective model.
In this step, the user can construct the connection between the control end and the modeling end and perform double-end data interaction by selecting the modeling communication area.
S304, responding to the selected instruction of the control area, and displaying a control sub-interface.
In this step, the user selects the control area, and the control end displays the control sub-interface as shown in fig. 6 according to the selected instruction. The control sub-interface comprises a synchronous display area, a calibration setting area, a binding area, a configuration area and an operation control. The binding area comprises a binding boundary surface control, and the configuration area comprises a calibration selection sub-area and an execution setting control.
Optionally, the arrangement is as follows: a calibration setting area and a binding area are respectively arranged below the synchronous display area, and a configuration area is arranged below the calibration setting area. An operation control is arranged below the configuration area and the binding area.
S305, in response to a selected instruction of the calibration setting area, acquiring calibration data and a workpiece perspective image corresponding to the selected instruction of the execution setting area, and synchronously displaying the calibration data and the workpiece perspective image corresponding to the selected instruction of the execution setting area in a synchronous display area.
It should be noted that the calibration data includes a plurality of coordinate calibration templates, each corresponding to one virtual workpiece. The workpiece perspective image is a top-view image of the three-dimensional perspective model of the virtual workpiece, i.e. the top-view image acquired by the modeling end.
Further, as shown in fig. 6, the calibration setting area includes a scheme setting sub-area, the synchronous display area includes a flow display sub-area, and the scheme setting sub-area includes a scheme path display sub-area, a first browsing control, and a save scheme control. S305 may include the steps of:
s3051, in response to a trigger instruction of the first browsing control, obtaining calibration data corresponding to the trigger instruction of the first browsing control, displaying a path of the corresponding calibration data in a scheme path display sub-area, and displaying the corresponding calibration data in a flow display sub-area.
In this step, as shown in fig. 6, in response to a trigger instruction for the first browsing control, the solution path display sub-area displays "G: VM procedure A sea health test procedure 1.Sol ", and corresponding calibration data are displayed in the procedure display subarea.
S3052, corresponding calibration data is determined in response to a triggering instruction of the saving scheme control.
Further, the calibration setting area further comprises an image setting sub-area, and the synchronous display area comprises an image display sub-area. The image setting sub-area comprises an image catalog display sub-area, a second browsing control and an image source selection sub-area. S305 may further include the steps of:
S3053, responding to a trigger instruction of the second browsing control, acquiring a workpiece perspective image corresponding to the trigger instruction of the second browsing control, wherein the workpiece perspective image is one or more images, displaying paths of the corresponding workpiece perspective images in an image catalog display subarea, and displaying the corresponding workpiece perspective images in the image display subarea.
S3054, in response to a selection instruction in the image source selection sub-area, displaying the name of the corresponding workpiece perspective image in the image source selection sub-area; as shown in fig. 6, after the selection, "image source 1" is displayed in the image source selection sub-area.
S306, a data binding window is displayed in response to a trigger instruction for binding the boundary surface control, the coordinate calibration template is mapped into the first coordinate information through the data binding window, and a control sub-interface is displayed in response to a trigger instruction for exiting the control in the data binding window.
In this step, the binding boundary surface control is a control identified with a "data binding interface" as shown in fig. 6, and the user can enter the data binding window as shown in fig. 7 by selecting the control to complete mapping of the coordinate information of the calibration template and the robot coordinate system, and after the mapping operation is completed, the user can return to the control sub-interface as shown in fig. 6 by selecting the exit control in the data binding window.
S307, displaying a coordinate calibration template corresponding to the selection instruction of the calibration selection subarea in response to the selection instruction of the calibration selection subarea, and determining an execution mode corresponding to the trigger instruction of the execution setting control in response to the trigger instruction of the execution setting control.
In this step, the user may select a desired coordinate calibration template in the calibration selection area, as shown in fig. 6, and after the user selects, the selected calibration template "flow 1" is displayed in the calibration selection area. Thereafter, the user may select the desired execution setting control to determine the manner of execution.
S308, responding to a trigger instruction of the operation control, converting coordinate information of the virtual workpiece under a camera coordinate system into coordinate information under a robot coordinate system by utilizing a coordinate calibration template corresponding to a selection instruction of a calibration selection sub-region according to an execution mode, and outputting the coordinate information of the virtual workpiece under the robot coordinate system to a teaching end as second coordinate information.
In this step, after the user selects the control identified with "start auto run" as shown in fig. 6, the control end starts the coordinate conversion operation. The user selects a workpiece perspective image in the image setting subarea, selects a coordinate calibration template in the scheme setting subarea, and binds coordinate information in the data binding window. Wherein: the workpiece perspective image maps the center point position of the virtual workpiece under the camera coordinate system; the coordinate calibration template maps a calibration coordinate system; the bound coordinate information is the mapping relation between the calibration coordinate system and the robot coordinate system. When the central point of the coordinate system changes, the coordinate system also changes. Based on the principle, the application firstly converts the coordinate system of the virtual workpiece into a common camera coordinate system in S202 through a mode of reconstructing a three-dimensional model and acquiring images. Then, the control end acquires the perspective image of the workpiece, maps the position of the central point of the virtual workpiece under the camera coordinate system into the calibration coordinate system, and converts the coordinate of the virtual workpiece from the camera coordinate system into the calibration coordinate system. Then, according to the mapping relation between the calibration coordinate system constructed in the data binding window of S306 and the robot coordinate system, the position of the center point of the virtual workpiece under the camera coordinate system is converted into the position of the center point under the robot coordinate system, and then the coordinate system transformation of the virtual workpiece is completed.
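The chain described above can be summarised as three successive mappings: the workpiece centre is located in the camera frame, the coordinate calibration template maps the camera frame into the calibration coordinate system, and the data binding maps the calibration coordinate system into the robot coordinate system. The sketch below assumes each stage is a planar rigid mapping and uses purely illustrative numbers; the patent describes the chain of mappings but not their numeric form:

```python
# Sketch of S308: camera frame -> calibration frame -> robot frame (illustrative values only).
import numpy as np

def planar_transform(angle_deg, tx, ty):
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), -np.sin(a), tx],
                     [np.sin(a),  np.cos(a), ty],
                     [0.0,        0.0,       1.0]])

p_cam = np.array([152.0, 87.5, 1.0])                  # workpiece centre from the perspective image
T_calib_cam = planar_transform(0.0, -120.0, -60.0)    # coordinate calibration template
T_rob_calib = planar_transform(90.0, 400.0, 250.0)    # mapping bound in the data binding window

p_rob = T_rob_calib @ (T_calib_cam @ p_cam)           # second coordinate information for the teaching end
print(p_rob[:2])
```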
The control end of the application has the main functions of realizing data interaction with the teaching end and the modeling end, ensuring the consistency and standardization of data, simultaneously completing the identification of the center point position and the rotation angle of the workpiece according to the workpiece image provided by the modeling end and the robot parameter provided by the teaching end by combining an identification program, registering the workpiece coordinate to a robot coordinate system, further accurately registering the pose of the workpiece to the actually clamped position, and improving the accuracy of clamping the workpiece by the robot.
The related art mainly has the following defects regarding the coordinate system. First, the existing visual recognition process has a perspective defect, which causes the recognized coordinate values to deviate from the actual coordinate values and introduces a visual deviation, making it inconvenient for a learner to observe the operation process of the robot. Second, in conventional robot vision virtual teaching, the coordinates of the virtual workpiece are generally output directly to the virtual robot to perform the gripping operation; however, the workpiece coordinate data obtained by the robot then differ from the coordinate position actually gripped, that is, the coordinate data contain a deviation, which affects the teaching effect.
That is, there are the following problems to be solved: firstly, how to overcome the defect that the recognized coordinate value is inconsistent with the actual coordinate value due to the defect problem during visual recognition; second, how the coordinates of the object are registered from the object coordinate system to the robot coordinate system.
In the method, the modeling communication area and the modeling end are used for completing the re-modeling and the identification of the virtual workpiece, and the workpiece coordinate system of the virtual workpiece is converted into the camera coordinate system, so that the problem that the identified coordinate value does not accord with the actual coordinate value due to perspective defects in visual identification in the prior art is solved, and the accuracy of visual identification is improved. And finally, the mapping of the workpiece coordinate system and the robot coordinate system is completed according to the relation between the coordinates of the virtual workpiece under the camera coordinate system and the calibration template and the mapping of the calibration coordinate system and the robot coordinate system. According to the application, the coordinates of the workpiece are registered from the workpiece coordinate system to the robot coordinate system, so that the pose of the workpiece can be accurately registered to the actual clamped position, the accuracy of clamping the workpiece by the robot is improved, the teaching effect is further improved, the teaching effect reaches the expected value, and the visual effect is provided.
Steps S307 to S308 will be further described below in one embodiment of the present application. Because the virtual workpiece placed by the teaching end can be one or more in the application, the coordinate calibration templates required to be selected for different workpieces are different. According to the teaching requirements of the user, the teaching requirements generally include controlling the virtual robot to grip a certain workpiece, or controlling the virtual robot to grip all the workpieces in turn. For the former, only the coordinates of a certain workpiece need to be calibrated; in the latter case, the coordinates of all the workpieces need to be calibrated. To this end, the present application provides two implementations to facilitate calibrating one or more virtual workpieces.
The execution setting controls include a first setting control and a second setting control, and S307 may include, but is not limited to, the following steps.
S3071, responding to a trigger instruction of the first setting control, and determining the execution mode corresponding to the trigger instruction of the first setting control as a first execution mode; or
s3072, responding to the trigger instruction of the second setting control, and determining the execution mode corresponding to the trigger instruction of the second setting control as a second execution mode.
In the above steps, as shown in fig. 6, the control identified with "scheme execution" is the first setting control, and the control identified with "continuous scheme execution" is the second setting control.
Depending on the execution mode, S308 may include, but is not limited to, the following steps.
S3081, according to the first execution mode, converting coordinate information of the virtual workpiece corresponding to the coordinate calibration template under a camera coordinate system into coordinate information under a robot coordinate system by utilizing the coordinate calibration template corresponding to the selection instruction of the calibration selection subarea.
This is suitable for the teaching requirement of controlling the virtual robot to clamp a particular workpiece. The user selects a coordinate calibration template through a selection instruction in the calibration selection sub-area; as shown in fig. 6, the coordinate calibration template selected by the user is displayed as "flow 1" in the calibration selection sub-area. The user then selects the first setting control, and the control end completes the coordinate calibration conversion of the single virtual workpiece through the selected coordinate calibration template.
Or S3082, when the execution mode is the second execution mode, acquiring the sequence of the plurality of coordinate calibration templates, taking the coordinate calibration template corresponding to the selection instruction of the calibration selection subarea as the first coordinate calibration template, and sequentially converting the coordinate information of the virtual workpiece corresponding to the coordinate calibration template under the camera coordinate system into the coordinate information under the robot coordinate system according to the sequence of the plurality of coordinate calibration templates.
This is suitable for the teaching requirement of controlling the virtual robot to clamp all workpieces in turn. The user selects a coordinate calibration template through a selection instruction in the calibration selection sub-area; as shown in fig. 6, the coordinate calibration template selected by the user is displayed as "flow 1" in the calibration selection sub-area. The user then selects the second setting control, and the control end takes the selected coordinate calibration template as the first one and performs the coordinate calibration conversion on all virtual workpieces in sequence, as sketched below.
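A compact sketch of the two execution modes; the template dictionary, its stored ordering and the convert callback are assumptions used only for illustration:

```python
# Sketch of S3081/S3082: convert one selected workpiece, or continue through the sequence.
def run_single(templates, selected, convert):
    """First execution mode: convert only the workpiece of the selected template."""
    return {selected: convert(templates[selected])}

def run_continuous(templates, selected, convert):
    """Second execution mode: start from the selected template, then continue in stored order."""
    order = list(templates)          # assumed: dict insertion order is the template sequence
    results = {}
    for name in order[order.index(selected):]:
        results[name] = convert(templates[name])
    return results
```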
The execution setting controls further include a stop execution control, and after S308 the method further includes:
and responding to a trigger instruction for stopping executing the control, and stopping executing the operation of converting the coordinate information of the virtual workpiece corresponding to the coordinate calibration template under the camera coordinate system into the coordinate information under the robot coordinate system.
Referring to the interface diagram of the data binding window shown in fig. 7, in the following, the interactive display of the data binding window and the implementation process of mapping the coordinate calibration template to the first coordinate information through the data binding window in step S306 will be further described and illustrated.
The data binding window includes: mapping source display subarea, mapping target display subarea and mapping result subarea. The data binding window also includes a refresh data control, a binding signal control, a delete binding control, and an exit control.
Optionally, the arrangement is as follows: the mapping source display area and the mapping target display sub-area are arranged in parallel, the refreshing data control and the binding signal control are both positioned between the mapping source display area and the mapping target display sub-area, the mapping result sub-area is arranged below the mapping source display area and the mapping target display sub-area, the deleting binding control is arranged below the mapping result sub-area, and the exiting control is arranged at the upper right corner of the data binding window.
The mapping the coordinate calibration template to the first coordinate information may include, but is not limited to, the following steps.
S3061, in response to a trigger instruction for refreshing the data control, displaying all coordinate calibration templates in the mapping source display subarea, and displaying first coordinate information in the mapping target display subarea, wherein the coordinate calibration templates comprise a plurality of calibration coordinate information.
In this step, when the user selects the refresh data control, all coordinate calibration templates are displayed in the mapping source display sub-area; a coordinate calibration template may include, but is not limited to, a plurality of calibration coordinate information items and other calibration information. As shown in fig. 7, one coordinate calibration template is displayed in the mapping source display sub-area, comprising a calibration coordinate information variable T whose VM variable type is int, and calibration coordinate information variables X, Y and A whose VM variable type is float. The variable ImageData whose VM variable type is IMAGE, the variable fLeaveTimeStampLow whose VM variable type is float, and so on, are other calibration information, i.e. other variables in the control end. The application completes the coordinate calibration conversion by means of the calibration coordinate information.
Meanwhile, the data type, path, variable name, variable value and the like of the first coordinate information are displayed in the mapping target display sub-area. Optionally, body data of the ABB robot other than the first coordinate information are also displayed in the mapping target sub-area. As shown in fig. 7, the variables whose RS data type is num, such as variable V_A and variable X, are the first coordinate information; the remaining displayed data are body data of the ABB robot other than the first coordinate information.
S3062, in response to the selection instruction in the mapping source display subarea, highlighting marks are given to the calibration coordinate information corresponding to the selection instruction in the mapping source display subarea.
In this step, the user may select items in the coordinate calibration templates displayed in the mapping source display sub-area, and the control end responds to the corresponding selection instruction by highlighting the selected calibration coordinate information. As shown in fig. 7, the user selects the calibration coordinate information variable A (VM variable type float) displayed in the mapping source display sub-area, and variable A is highlighted.
S3063, in response to the selection instruction in the mapping target display subregion, a highlight mark is given to the first coordinate information corresponding to the selection instruction in the mapping target display subregion.
In this step, the user may select the first coordinate information displayed in the mapping target display sub-area, and the control end responds to the corresponding selection instruction to highlight the selected first coordinate information. As shown in fig. 7, the user selects the variable v_A (RS data type num) displayed in the mapping target display sub-area, and the control end highlights the variable v_A.
S3064, in response to a trigger instruction of the binding signal control, mapping the calibration coordinate information endowed with the highlight mark into the first coordinate information endowed with the highlight mark, and displaying a mapping relation between the calibration coordinate information endowed with the highlight mark and the first coordinate information endowed with the highlight mark in a binding result subarea.
In this step, the user can complete the mapping and binding of the calibration coordinate template and the coordinate information of the robot in the robot coordinate system by selecting the binding signal control. The control end binds and maps the calibration coordinate information and the first coordinate information selected by the user in steps S3062 to S3063 according to the trigger instruction, and displays the corresponding mapping relation in the binding result sub-area. As shown in fig. 7, the user selects the variable A as the mapping source and the variable v_A as the mapping target, completes the binding mapping of the variable A to the variable v_A by triggering the binding signal control, and the corresponding mapping relation is displayed in the binding result sub-area identified with "data binding result".
Optionally, the binding area includes a mapping display sub-area, the mapping display sub-area being disposed beside the binding interface control. S3064 further includes: synchronously displaying the mapping relation between the calibration coordinate information endowed with the highlight mark and the first coordinate information endowed with the highlight mark in the mapping display sub-area.
S3065, in response to the selection instruction of the binding result sub-region, a highlight mark is given to the mapping relation corresponding to the selection instruction of the binding result sub-region.
In this step, the user may select the mapping relationship to be deleted by selecting the mapping relationship displayed in the binding result sub-area, and the control end responds to the corresponding selection instruction, so that the selected mapping relationship is highlighted.
S3066, in response to a trigger instruction for deleting the binding control, mapping of the calibration coordinate information endowed with the highlight mark and the first coordinate information endowed with the highlight mark is released, and the mapping relation between the demapped calibration coordinate information and the first coordinate information is deleted in the binding result subarea.
In this step, the user may delete the mapping relation selected in step S3065 by selecting the delete binding control. As shown in fig. 7, the binding result sub-area displays the mapping relation of the variable A mapped to the variable v_A; after the user selects this mapping relation and triggers the delete binding control, the mapping between the variable A and the variable v_A is released, and the corresponding display content is deleted from the binding result sub-area.
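To make the binding bookkeeping concrete, the following C# sketch shows one way the control end could record, remove and apply the mappings managed by the data binding window. The class name CoordinateBindingManager and its methods are illustrative assumptions made for this description, not the actual implementation of the application.

using System.Collections.Generic;

public class CoordinateBindingManager
{
    // key: calibration coordinate variable (mapping source, e.g. "A")
    // value: robot coordinate variable (mapping target, e.g. "v_A")
    private readonly Dictionary<string, string> bindings = new Dictionary<string, string>();

    // Triggered by the binding signal control: record source -> target.
    public void Bind(string calibrationVariable, string robotVariable)
    {
        bindings[calibrationVariable] = robotVariable;
    }

    // Triggered by the delete binding control: remove a recorded mapping.
    public bool Unbind(string calibrationVariable)
    {
        return bindings.Remove(calibrationVariable);
    }

    // Push calibration values into the bound robot variables, e.g. before
    // sending second coordinate information to the teaching end.
    public void Apply(IDictionary<string, double> calibrationValues,
                      IDictionary<string, double> robotValues)
    {
        foreach (var pair in bindings)
        {
            if (calibrationValues.TryGetValue(pair.Key, out double value))
            {
                robotValues[pair.Value] = value;   // e.g. A -> v_A
            }
        }
    }

    // Rows shown in the binding result sub-area, e.g. "A -> v_A".
    public IEnumerable<string> DescribeBindings()
    {
        foreach (var pair in bindings)
            yield return $"{pair.Key} -> {pair.Value}";
    }
}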
In one embodiment of the present application, the data communication and interaction process between the control end and the teaching end and step S302 are further described and illustrated below. The teaching communication area comprises a connection sub-area identified as "connection robot controller", a read-write sub-area identified as "read-write robot program data", and a reading sub-area identified as "read robot system data". The control end of the present application constructs the connection with the teaching end by responding to the selected instruction of the connection sub-area, and performs data interaction by responding to the selected instructions of the read-write sub-area and the reading sub-area.
Constructing the connection with the teaching end may include the following steps.
A1, in response to a selected instruction of the connection sub-area, displaying a connection sub-interface.
In this step, the user may enter the connection sub-interface shown in fig. 8 by selecting the connection sub-area. The connection sub-interface comprises a refresh list control, a connection control, an exit connection control, a teaching terminal display sub-area and a connection display sub-area.
A2, in response to a trigger instruction for the refresh list control, displaying information of all teaching terminals in the teaching terminal display sub-area.
In this step, the user may acquire information of all connectable teaching ends by triggering the refresh list control identified with "refresh controller list". The control end responds to the trigger instruction, searches for connectable teaching ends, and synchronously displays the found teaching ends in the teaching terminal display sub-area. Optionally, the contents displayed in the teaching terminal display sub-area include the system name, system IP, version number, virtual state and controller name of the teaching end. As shown in fig. 8, the system name of the teaching end found in this embodiment is System2, the system IP is 127.0.0.1, the version number is 6.8.0.0, the virtual state is virtual, and the controller name is lapto; these pieces of information are synchronously displayed in the teaching terminal display sub-area.
A3, in response to the instruction for selecting the teaching terminal display sub-region, highlighting is given to the information of the teaching terminal corresponding to the instruction for selecting the teaching terminal display sub-region.
In this step, the user may select the information of the teaching terminal displayed in the teaching terminal display area, and the control terminal highlights the information of the selected teaching terminal in response to the corresponding selection instruction.
A4, in response to a trigger instruction of the connection control, constructing a communication connection with the teaching end endowed with the highlight mark, and displaying the communication connection result in the connection display sub-area.
In this step, the user can construct the communication connection between the selected teaching end and the control end by triggering the connection control identified with "one-key connection". The control end constructs the connection with the teaching end selected by the user in step A3 according to the trigger instruction, and displays the connection result in the connection display sub-area identified with "connection state".
A5, in response to a trigger instruction for the exit connection control, disconnecting from the currently connected teaching end, and clearing the display content in the connection display sub-area.
In this step, when the user needs to disconnect the control end from the teaching end, the exit connection control is triggered to complete the disconnection of communication. The control end disconnects the communication connection with the teaching end established in step A4 according to the trigger instruction.
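As a concrete illustration of steps A2 to A4, the following C# sketch scans for reachable controllers and connects to a virtual one. It assumes the ABB PC SDK (ABB.Robotics.Controllers) on the control end, since the teaching end is a RobotStudio virtual controller; the exact member names can differ between SDK versions, so treat this as an outline under those assumptions rather than the application's actual code.

using System;
using ABB.Robotics.Controllers;
using ABB.Robotics.Controllers.Discovery;

public static class TeachingEndConnector
{
    public static Controller ConnectFirstVirtualController()
    {
        var scanner = new NetworkScanner();
        scanner.Scan();                                  // refresh controller list

        foreach (ControllerInfo info in scanner.Controllers)
        {
            // Information shown in the teaching terminal display sub-area.
            Console.WriteLine($"{info.SystemName}  {info.IPAddress}  " +
                              $"{info.Version}  virtual={info.IsVirtual}");

            if (info.IsVirtual)
            {
                Controller controller = ControllerFactory.CreateFrom(info);
                controller.Logon(UserInfo.DefaultUser);  // "one-key connection"
                return controller;
            }
        }
        return null;                                     // nothing connectable found
    }
}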
Acquiring the first coordinate information of the virtual robot and the pose information of the virtual workpiece may include the following steps.
B1, in response to a selected instruction of the read-write sub-area, displaying a read-write sub-interface.
In this step, the user may select the read-write sub-area to enter the read-write sub-interface shown in fig. 9. The read-write sub-interface includes a numerical modification sub-area, a data type selection sub-area and a numerical display sub-area. The read-write sub-interface also includes an acquire data control and a clear list control.
B2, in response to a trigger instruction for the acquire data control, acquiring the body data of the virtual robot, and displaying the data type, storage type, path source information, name and initial variable value of the body data of the virtual robot in the numerical display sub-area.
The body data includes first coordinate information of the virtual robot and pose information of the virtual workpiece.
Optionally, B2 may include the following steps:
B21, in response to the selection instruction of the data type selection sub-area, displaying the data type information corresponding to the selection instruction in the data type selection sub-area.
In this step, as shown in fig. 9, the selectable data type information includes: all data types, the num type, the bool type, and the robtarget type.
B22, in response to a trigger instruction for the acquire data control, acquiring the body data of the virtual robot corresponding to the data type information, and displaying the data type, storage type, path source information, name and initial variable value of that body data in the numerical display sub-area.
In this step, when the data type information corresponding to the selection instruction is all data types, the control end acquires the body data of all data types and displays its relevant information in the numerical display sub-area; when it is the num type, the control end acquires the num-type body data and displays its relevant information; when it is the bool type, the control end acquires the bool-type body data and displays its relevant information; when it is the robtarget type, the control end acquires the robtarget-type body data and displays its relevant information in the numerical display sub-area.
B3, in response to the information input in the numerical modification sub-area, displaying the first content information in the numerical modification sub-area.
The first content information includes a modified value of an initial variable value of the body data and corresponding annotation information, and the modified value of the initial variable value of the body data is used to replace the initial variable value of the body data.
In this step, as shown in fig. 9, the two columns identified with "modification value" and "comment" form the numerical modification sub-area, i.e., the numerical modification sub-area includes a column for modifying the initial variable value and a column for entering a comment. The user may modify an initial variable value and annotate it by entering a modification value in the modification-value column and the corresponding comment in the comment column. For example, the comment "PLC input RX data" is entered in the comment column of the row where the variable blr_rx is located. It should be noted that num variables can only be modified to integer or floating-point values; other inputs cannot be applied.
B4, in response to a trigger instruction for the clear list control, clearing the content displayed in the numerical modification sub-area and the numerical display sub-area.
The application uses the C# DataGridView control to sort, read and write the program data of the RobotStudio robot. The data can be updated synchronously, the robot program data can be read and written, and the functions of sorting and specifying data types are provided; the data type, path 1, path 2, variable name, storage type, variable value, modification value, comment and the like can be displayed in the table, realizing visual management. When a value of the robot program data needs to be modified in the read-write robot program data interface, the cell in the modification-value column of the row where the variable is located is selected, the modified value is entered, and the modified value is automatically transmitted to the robot program variable by pressing Enter or clicking elsewhere in the interface.
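As an illustration of how such program data could be read and written from C#, the sketch below uses the ABB PC SDK RapidDomain API to read a num variable and write a modified value back. The task name "T_ROB1", the module name "Module1" and the exact API member names are assumptions; they may differ from the application's actual implementation and between SDK versions.

using ABB.Robotics.Controllers;
using ABB.Robotics.Controllers.RapidDomain;

public static class RapidDataAccess
{
    // "T_ROB1" and "Module1" are illustrative placeholders.
    public static double ReadNum(Controller controller, string name)
    {
        RapidData rd = controller.Rapid.GetRapidData("T_ROB1", "Module1", name);
        return ((Num)rd.Value).Value;                   // current variable value
    }

    public static void WriteNum(Controller controller, string name, double value)
    {
        RapidData rd = controller.Rapid.GetRapidData("T_ROB1", "Module1", name);
        using (Mastership.Request(controller.Rapid))    // write access required
        {
            rd.Value = new Num(value);                  // modified value replaces the initial one
        }
    }
}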
After step B3, the method further comprises the following steps:
C1, in response to a selected instruction for the reading sub-area, displaying a reading sub-interface, wherein the reading sub-interface comprises a first coordinate display sub-area, a second coordinate display sub-area and a coordinate type switching control, and the first coordinate information of the virtual robot is synchronously displayed in the first coordinate display sub-area.
In this step, after the first coordinate information is obtained in B2 or its initial variable value is modified in B3, the first coordinate information is synchronously displayed in the first coordinate display sub-area of the reading sub-interface shown in fig. 10. The first coordinate information is of the axis coordinate type and includes an X-axis coordinate, a Y-axis coordinate, a Z-axis coordinate, an RX-axis coordinate, an RY-axis coordinate and an RZ-axis coordinate. As shown in fig. 10, the obtained first coordinate information includes an X-axis coordinate of 912.451, a Y-axis coordinate of 256.667, a Z-axis coordinate of 1018.481, an RX-axis coordinate of -139.630, an RY-axis coordinate of 0.000, and an RZ-axis coordinate of -160.000.
C2, in response to a trigger instruction of the coordinate type switching control, switching the first coordinate information from the axis coordinate type to the Cartesian coordinate type, generating third coordinate information, and displaying the third coordinate information in the second coordinate display sub-area.
The third coordinate information includes: cartesian coordinate information of a virtual robot in a robot coordinate system.
In this step, the user can switch the coordinate type by triggering the coordinate type switching control. The control end responds to the trigger instruction, converts the axis coordinates of the virtual robot in the robot coordinate system into Cartesian coordinates, and displays the Cartesian coordinates in the second coordinate display sub-area. As shown in fig. 10, the Cartesian coordinate information corresponding to the first coordinate information displayed in the first coordinate display sub-area is displayed in the second coordinate display sub-area, including an axis 1 position coordinate of 19.040, an axis 2 position coordinate of -0.455, an axis 3 position coordinate of 13.094, an axis 4 position coordinate of -41.152, an axis 5 position coordinate of 79.788, and an axis 6 position coordinate of 7.548.
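The following C# sketch illustrates reading the pose of the virtual robot once as per-axis values and once as a Cartesian target, which is the kind of switch the coordinate type switching control performs. It assumes the ABB PC SDK MotionDomain API; member names may vary by SDK version, so treat it as a sketch under those assumptions rather than the application's code.

using System;
using ABB.Robotics.Controllers;
using ABB.Robotics.Controllers.MotionDomain;
using ABB.Robotics.Controllers.RapidDomain;

public static class CoordinateReader
{
    public static void PrintBothRepresentations(Controller controller)
    {
        MechanicalUnit unit = controller.MotionSystem.ActiveMechanicalUnit;

        // Per-axis reading: one value per joint, as listed for axes 1 to 6.
        JointTarget joints = unit.GetPosition();
        Console.WriteLine($"axis1..6: {joints.RobAx.Rax_1}, {joints.RobAx.Rax_2}, " +
                          $"{joints.RobAx.Rax_3}, {joints.RobAx.Rax_4}, " +
                          $"{joints.RobAx.Rax_5}, {joints.RobAx.Rax_6}");

        // Cartesian reading: tool position in the world frame.
        RobTarget pose = unit.GetPosition(CoordinateSystemType.World);
        Console.WriteLine($"X={pose.Trans.X}, Y={pose.Trans.Y}, Z={pose.Trans.Z}");
    }
}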
In one embodiment of the present application, the data communication and interaction process between the control end and the modeling end and step S303 are further described and illustrated below. In the present application, the communication connection between the control end and the modeling end is constructed through the TCP/IP protocol, and the data mapping and data binding between the modeling end and the teaching end are completed through the interaction of the control end. S303 may include, but is not limited to, the following steps.
S3031, in response to a selected instruction for the modeling communication area, displaying a communication sub-interface.
It should be noted that the communication sub-interface includes a server communication area and a modeling adjustment area. The server communication area comprises a server control, a data transmission control, a close server control, a first information display sub-area and a second information display sub-area, and the modeling adjustment area comprises a pose information selection sub-area, a pose modification sub-area and a binding data control.
In this step, the user selects the modeling communication area to enter the communication sub-interface as shown in fig. 11.
S3032, in response to a trigger instruction for establishing the server control, communication connection with the modeling end is established, and the communication connection state with the modeling end is displayed in the first information display subarea.
Optionally, the communication connection state includes any one of success in establishing the server or failure in establishing the server.
S3033, in response to the selection instruction of the pose information selection subarea, the pose information of the virtual workpiece corresponding to the selection instruction of the pose information selection subarea is displayed in the pose information selection subarea, and the initial variable value of the pose information of the corresponding virtual workpiece is displayed in the pose modification subarea.
In this step, the pose information selection sub-area includes a task selection area, a program module selection area and a variable name selection area. The user may select, through the task selection area and the program module selection area, body data of the virtual robot other than the first coordinate information, and may select, through the variable name selection area, the pose information of the corresponding virtual workpiece. The selectable information in the variable name selection area includes the number of the virtual workpiece, and the X-axis coordinate, the Y-axis coordinate and the angle value of the virtual workpiece, wherein the X-axis coordinate and the Y-axis coordinate map the position of the virtual workpiece, and the angle value maps the posture of the virtual workpiece. After the user selects a virtual workpiece, the initial variable values of the pose information of the corresponding virtual workpiece are synchronously displayed in the pose modification sub-area.
S3034, in response to the information input by the pose modification sub-area, displaying the second content information in the pose modification sub-area.
The second content information is a modified value of the pose information.
S3035, the initial variable value of the pose information is replaced by the modified value of the pose information in response to the triggering instruction of the binding data control, and the initial variable value of the pose information is transmitted to the modeling end in response to the selected instruction of the sending data control.
In the above steps, the user may input information in the pose modification sub-area to modify the initial variable values of the pose information. As shown in fig. 11, the user may input corresponding modification values for different variable names in the pose modification sub-area, for example a modification value of 3 for the variable T, 54.437 for the variable X, -39.2603 for the variable Y, and 29.9084 for the variable A. The user completes the modification by triggering the binding data control, and the control end responds to the trigger instruction and replaces the initial values of the pose information with the modified values. Then, the user sends the pose information of the virtual workpiece to the modeling end by triggering the send data control, so that the modeling end builds the three-dimensional perspective model; the control end responds to the trigger instruction and sends the pose information of the virtual workpiece to the modeling end.
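As a concrete illustration of S3032 and S3035, the C# sketch below opens a TCP server on the control end's IP address and port and sends the workpiece pose to the connected modeling end. The simple "T;X;Y;A" text format is an assumption made here for illustration; the application does not fix a wire format.

using System.Net;
using System.Net.Sockets;
using System.Text;

public class ModelingEndServer
{
    private TcpListener listener;
    private TcpClient modelingEnd;

    public void Start(string ip, int port)            // e.g. "192.168.8.200", 3000
    {
        listener = new TcpListener(IPAddress.Parse(ip), port);
        listener.Start();                             // "establish server" control
        modelingEnd = listener.AcceptTcpClient();     // wait for the modeling end
    }

    public void SendPose(int t, double x, double y, double a)
    {
        string message = $"{t};{x};{y};{a}\n";        // e.g. "3;54.437;-39.2603;29.9084"
        byte[] payload = Encoding.UTF8.GetBytes(message);
        modelingEnd.GetStream().Write(payload, 0, payload.Length);  // "send data" control
    }

    public void Stop()                                // "close server" control
    {
        modelingEnd?.Close();
        listener?.Stop();
    }
}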
Further, S3032 includes:
In response to information input in the second information display sub-area, displaying third content information in the second information display sub-area, wherein the third content information is the IP address and the port number of the control end.
In this step, the user may input the IP address and the port number of the control terminal through the second information display sub-area, and as shown in fig. 11, the user inputs the IP address 192.168.8.200 and the port number 3000 in the second information display sub-area, and the control terminal responds to the input information and displays the same in the second information display sub-area.
In response to a trigger instruction for establishing the server control, establishing a communication connection with the modeling end, and displaying, in the first information display sub-area, the communication connection state with the modeling end and the second test information sent by the modeling end.
In response to information input in the first information display sub-area, displaying first test information in the first information display sub-area, wherein the first test information is used to test whether the modeling end can receive information sent by the control end.
After S3032, it may further include:
In response to a trigger instruction for the close server control, disconnecting the communication connection with the modeling end corresponding to the third content information.
The application provides an example to explain the technical scheme of the application, and the teaching control flow is as follows:
1. A virtual teaching environment is pre-built at the teaching end, in which a virtual robot and a virtual platform are arranged, as shown in fig. 2.
2. The control terminal responds to the selected instruction of the connection sub-area of the teaching communication area, displays a connection sub-interface shown in figure 8, and constructs connection with the teaching terminal and displays relevant connection information by responding to the selected instruction and the triggering instruction of the connection sub-interface.
3. The control end responds to the selected instruction of the read-write sub-area of the teaching communication area, displays a read-write sub-interface shown in fig. 9, acquires and displays the body data of the virtual robot by responding to the selected instruction, the trigger instruction and the input information of the read-write sub-interface, wherein the body data comprises first coordinate information of the virtual robot under a robot coordinate system and pose information of a virtual workpiece. Optionally, the initial variable values of the body data are modified by the numerical modification sub-region and the corresponding annotations are added.
4. The control end responds to the selected instruction of the reading sub-area of the teaching communication area, displays a reading sub-interface as shown in fig. 10, and displays the first coordinate information of the virtual robot under the robot coordinate system in response to the selected instruction of the reading sub-area. Optionally, the first coordinate information is switched from the axis coordinate type to the Cartesian coordinate type by the coordinate type switching control.
5. The control end responds to the selected instruction of the modeling communication area, displays a communication sub-interface shown in fig. 11, and constructs the connection with the modeling end by responding to the selected instruction, the triggering instruction and the input information of the communication sub-interface. Or the modeling end responds to the communication interface control in the modeling interface shown in fig. 4, enters the communication interface shown in fig. 5, and constructs the connection with the control end through the selected instruction and the input information of the communication interface.
6. The modeling end builds a three-dimensional perspective model and adjusts the pose of the three-dimensional perspective model according to the information sent by the control end, and collects the image of the three-dimensional perspective model to generate a perspective image of the workpiece, as shown in fig. 4.
7. The control end responds to the selected instruction of the control area, enters the control sub-interface shown in fig. 6, acquires calibration data and a workpiece perspective image through the selected instruction of the calibration setting area, and displays the calibration data and the workpiece perspective image in the synchronous display area.
8. The control end responds to the trigger instruction of the binding interface control, displays the data binding window shown in fig. 7, and binds the calibration data to the first coordinate information through the data binding window, i.e., builds the mapping relation between the calibration coordinate system and the robot coordinate system; it then responds to the trigger instruction of the exit control in the data binding window and displays the control sub-interface shown in fig. 6.
9. The control end responds to a selection instruction in the calibration selection sub-area, displays the selected coordinate calibration template, responds to a trigger instruction for the execution setting control, and determines the corresponding execution mode.
10. In response to the trigger instruction of the operation control, the control end first converts the coordinates of the three-dimensional perspective image under the camera coordinate system into coordinates under the calibration coordinate system by using the coordinate calibration template according to the execution mode, and then converts the coordinates under the calibration coordinate system into coordinates under the robot coordinate system by using the mapping relation between the coordinate calibration template and the robot coordinate system bound in step 8 (see the sketch after this list).
11. The control end sends the transformed coordinates of the virtual workpiece to the teaching end.
12. The teaching end performs simulation teaching according to the transformed coordinates of the virtual workpiece, as shown in fig. 3.
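The coordinate conversion in step 10 can be illustrated with the following C# sketch, which chains two planar transforms: camera coordinates to calibration coordinates (from the coordinate calibration template) and calibration coordinates to robot coordinates (from the data binding of step 8). Modelling both stages as 2D affine transforms is an assumption made here for illustration; the application does not specify the exact transform model.

public readonly struct Affine2D
{
    // x' = A*x + B*y + Tx ;  y' = C*x + D*y + Ty
    public readonly double A, B, C, D, Tx, Ty;

    public Affine2D(double a, double b, double c, double d, double tx, double ty)
    {
        A = a; B = b; C = c; D = d; Tx = tx; Ty = ty;
    }

    public (double X, double Y) Apply(double x, double y)
        => (A * x + B * y + Tx, C * x + D * y + Ty);
}

public static class CoordinatePipeline
{
    public static (double X, double Y) CameraToRobot(
        (double X, double Y) cameraPoint,
        Affine2D cameraToCalibration,   // from the coordinate calibration template
        Affine2D calibrationToRobot)    // from the data binding in step 8
    {
        var calibrated = cameraToCalibration.Apply(cameraPoint.X, cameraPoint.Y);
        return calibrationToRobot.Apply(calibrated.X, calibrated.Y);  // second coordinate information
    }
}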
The application provides the following technical effects:
(1) The application integrates common robot simulation software, three-dimensional modeling software and registration software into the same teaching platform, and combines the teaching platform with interactive display technology. The interactive display method makes the teaching process more intuitive, reduces the teaching difficulty for the instructor and the learning difficulty for the learner, and is particularly suitable for beginners.
(2) In the application, the communication between the control end and the teaching end and the communication between the control end and the modeling end are realized through the control end, so that the communication connection between the teaching end and the modeling end is realized indirectly. This enables data interaction and mapping among the multiple pieces of software participating in teaching, ensures data consistency, avoids data disorder or errors, and reduces the adverse effect of data problems on the teaching effect. The application has good cross-platform characteristics, supports various operating systems, and can be ported to embedded platforms.
(3) In the application, the coordinate system conversion is realized by data mapping. First, the modeling communication area and the modeling end are used to complete the re-modeling and recognition of the virtual workpiece, and the workpiece coordinate system of the virtual workpiece is converted into the camera coordinate system; this solves the prior-art problem that the recognized coordinate values are inconsistent with the actual coordinate system due to perspective defects in visual recognition, and improves the accuracy of visual recognition. Then, the mapping between the workpiece coordinate system and the robot coordinate system is completed according to the relation between the coordinates of the virtual workpiece in the camera coordinate system and the calibration template, together with the mapping between the calibration coordinate system and the robot coordinate system. By registering the workpiece coordinates from the workpiece coordinate system to the robot coordinate system, the pose of the workpiece can be accurately registered to the actual gripping position, the accuracy with which the robot grips the workpiece is improved, and the teaching effect is improved to the expected level with an intuitive visual presentation.
The terms "first," "second," "third," "fourth," and the like in the description of the application and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of the following" or similar expressions refer to any combination of these items, including any combination of single items or plural items. For example, at least one (item) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b and c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices, or units, which may be in electrical, mechanical, or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic or optical disk, and other various media capable of storing program codes.
The step numbers in the above method embodiments are set for convenience of illustration, and the order of steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.

Claims (10)

1. The robot teaching control method based on virtual vision is characterized by comprising the following steps:
building a virtual teaching environment, wherein a virtual platform and a virtual robot are arranged in the virtual teaching environment, a plurality of virtual workpieces are randomly generated, and the virtual workpieces are placed on the virtual platform;
and acquiring second coordinate information sent by a control end, controlling the virtual robot to grasp the virtual workpiece placed on the virtual platform according to the second coordinate information, and performing simulation teaching.
2. The robot teaching control method based on virtual vision is characterized by comprising the following steps:
acquiring pose information of the virtual workpiece sent by a control end, constructing a three-dimensional perspective model matched with the virtual workpiece, and adjusting the pose of the three-dimensional perspective model according to the pose information;
and acquiring a top view image of the three-dimensional perspective model through a virtual camera, generating a workpiece perspective image, and calibrating coordinate information of the three-dimensional perspective model under a camera coordinate system.
3. The robot teaching control method based on virtual vision is characterized by comprising the following steps:
displaying a main interface, wherein the main interface comprises a teaching communication area, a modeling communication area and a control area;
responding to a selected instruction of the teaching communication area, constructing connection with a teaching end, and acquiring first coordinate information of the virtual robot and pose information of the virtual workpiece;
the first coordinate information comprises axis coordinate information of the virtual robot under a robot coordinate system, and the pose information of the virtual workpiece comprises axis coordinate information of the virtual workpiece under a workpiece coordinate system;
responding to a selected instruction of the modeling communication area, constructing connection with a modeling end, and transmitting the virtual workpiece and pose information thereof to the modeling end;
responding to a selected instruction of the control area, displaying a control sub-interface, wherein the control sub-interface comprises a calibration setting area, a binding area, a configuration area and an operation control, the binding area comprises a binding boundary surface control, and the configuration area comprises a calibration selection sub-area and an execution setting control;
responding to a selected instruction of the calibration setting area, acquiring calibration data and a workpiece perspective image corresponding to the selected instruction of the calibration setting area, wherein the calibration data comprises a plurality of coordinate calibration templates, each coordinate calibration template corresponds to one virtual workpiece, and the workpiece perspective image is a top view image of a three-dimensional perspective model of the virtual workpiece;
displaying a data binding window in response to a trigger instruction for the binding boundary surface control, mapping the coordinate calibration template into the first coordinate information through the data binding window, and displaying the control sub-interface in response to a trigger instruction for exiting the control in the data binding window;
displaying a coordinate calibration template corresponding to the selection instruction of the calibration selection subarea in response to the selection instruction of the calibration selection subarea, and determining an execution mode corresponding to the trigger instruction of the execution setting control in response to the trigger instruction of the execution setting control;
and responding to a trigger instruction of the operation control, converting the coordinate information of the virtual workpiece under the camera coordinate system into the coordinate information under the robot coordinate system by utilizing a coordinate calibration template corresponding to the selection instruction of the calibration selection sub-region according to the execution mode, and outputting the coordinate information of the virtual workpiece under the robot coordinate system to the teaching end as second coordinate information.
4. The virtual vision-based robot teaching control method according to claim 3, wherein the execution setting control includes a first setting control and a second setting control, and the determining, in response to a trigger instruction for the execution setting control, an execution manner corresponding to the trigger instruction for the execution setting control includes:
responding to a trigger instruction of the first setting control, and determining an execution mode corresponding to the trigger instruction of the first setting control as a first execution mode;
or,
and responding to the trigger instruction of the second setting control, and determining the execution mode corresponding to the trigger instruction of the second setting control as a second execution mode.
5. The virtual vision-based robot teaching control method according to claim 4, wherein the converting the coordinate information of the virtual workpiece in the camera coordinate system into the coordinate information in the robot coordinate system using the coordinate calibration template corresponding to the selection instruction of the calibration selection sub-region according to the execution mode, comprises:
when the execution mode is a first execution mode, according to the first execution mode, converting coordinate information of a virtual workpiece corresponding to the coordinate calibration template under a camera coordinate system into coordinate information under a robot coordinate system by utilizing a coordinate calibration template corresponding to a selection instruction of the calibration selection sub-region;
or,
and when the execution mode is a second execution mode, acquiring the sequence of the plurality of coordinate calibration templates, taking the coordinate calibration template corresponding to the selection instruction of the calibration selection subarea as a first coordinate calibration template, and sequentially converting the coordinate information of the virtual workpiece corresponding to the coordinate calibration template under the camera coordinate system into the coordinate information under the robot coordinate system according to the sequence of the plurality of coordinate calibration templates.
6. The virtual vision-based robot teaching control method of claim 3, wherein the data binding window comprises a mapping source display sub-region, a mapping target display sub-region, a refresh data control, a binding signal control and a binding result sub-region; the binding the coordinate calibration template to the first coordinate information through the data binding window includes:
responding to a trigger instruction of the refreshing data control, displaying all the coordinate calibration templates in the mapping source display subarea, and displaying the first coordinate information in the mapping target display subarea, wherein the coordinate calibration templates comprise a plurality of calibration coordinate information;
in response to a selection instruction in the mapping source display subarea, giving a highlight mark to calibration coordinate information corresponding to the selection instruction in the mapping source display subarea;
in response to a selection instruction in the mapping target display subarea, giving a highlight mark to first coordinate information corresponding to the selection instruction in the mapping target display subarea;
and responding to a trigger instruction of the binding signal control, mapping the calibration coordinate information endowed with the highlight mark into the first coordinate information endowed with the highlight mark, and displaying the mapping relation between the calibration coordinate information endowed with the highlight mark and the first coordinate information endowed with the highlight mark in the binding result subarea.
7. The virtual vision-based robot teaching control method according to claim 6, wherein the data binding window further comprises a delete binding control, the binding the coordinate calibration template to the first coordinate information through the data binding window, further comprising:
responding to a selection instruction of the binding result sub-region, and endowing a mapping relation corresponding to the selection instruction of the binding result sub-region with a highlight mark;
and in response to a trigger instruction for deleting the binding control, mapping between the calibration coordinate information endowed with the highlight mark and the first coordinate information endowed with the highlight mark is released, and the mapping relation between the demapped calibration coordinate information and the first coordinate information is deleted in the binding result subarea.
8. The virtual vision-based robot teaching control method according to claim 3, wherein the teaching communication area includes a read-write sub-area, and the acquiring the first coordinate information of the virtual robot in the robot coordinate system and the pose information of the virtual workpiece includes:
responding to a selected instruction of the read-write subarea, displaying a read-write subarea, wherein the read-write subarea comprises an acquired data control, a numerical display subarea and a numerical modification subarea;
acquiring body data of the virtual robot in response to a trigger instruction for the data acquisition control, and displaying a data type, a storage type, path source information, a name and an initial variable value of the body data of the virtual robot in the numerical display subarea, wherein the body data comprises first coordinate information of the virtual robot and pose information of the virtual workpiece;
in response to information entered in the numerical modification sub-area, first content information is displayed in the numerical modification sub-area, the first content information including a modified value of an initial variable value of the body data and corresponding annotation information thereof, the modified value of the initial variable value of the body data being used to replace the initial variable value of the body data.
9. The virtual vision-based robot teaching control method according to claim 8, wherein the teaching communication area further includes a reading sub-area, and after the first content information is displayed in the numerical modification sub-area in response to the information input in the numerical modification sub-area, further comprising:
in response to a selected instruction for the reading sub-area, displaying a reading sub-interface, wherein the reading sub-interface comprises a first coordinate display sub-area, a second coordinate display sub-area and a coordinate type switching control, and the first coordinate information of the virtual robot is synchronously displayed on the first coordinate display sub-area;
responding to a trigger instruction of the coordinate type switching control, switching the first coordinate information from the shaft coordinate type to the Cartesian coordinate type, generating third coordinate information, and displaying the third coordinate information in the second coordinate display subarea; wherein the third coordinate information includes: and the virtual robot has Cartesian coordinate information under a robot coordinate system.
10. The virtual vision-based robot teaching control method according to claim 8, wherein the constructing a connection with a modeling terminal and transmitting the virtual workpiece and pose information thereof to the modeling terminal in response to a selected instruction for the modeling communication area, comprises:
responding to a selected instruction of the modeling communication area, displaying a communication sub-interface, wherein the communication sub-interface comprises a server control, a data transmission control and a modeling adjustment area, and the modeling adjustment area comprises a pose information selection sub-area, a pose modification sub-area and a binding data control;
responding to a trigger instruction for establishing a server control, and constructing communication connection with the modeling end;
in response to a selection instruction for the pose information selection sub-region, displaying pose information of a virtual workpiece corresponding to the selection instruction of the pose information selection sub-region in the pose information selection sub-region, and displaying an initial variable value of the pose information of the corresponding virtual workpiece in the pose modification sub-region;
In response to information input in the pose modification sub-region, displaying second content information in the pose modification sub-region, the second content information being a modification value of the pose information;
responding to a trigger instruction of the binding data control, and replacing an initial variable value of the pose information with a modified value of the pose information;
and transmitting the initial variable value of the pose information to the modeling end in response to the selected instruction of the sending data control.
CN202310662882.7A 2023-06-05 2023-06-05 Robot teaching control method based on virtual vision Active CN116619376B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310662882.7A CN116619376B (en) 2023-06-05 2023-06-05 Robot teaching control method based on virtual vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310662882.7A CN116619376B (en) 2023-06-05 2023-06-05 Robot teaching control method based on virtual vision

Publications (2)

Publication Number Publication Date
CN116619376A true CN116619376A (en) 2023-08-22
CN116619376B CN116619376B (en) 2024-01-23

Family

ID=87597265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310662882.7A Active CN116619376B (en) 2023-06-05 2023-06-05 Robot teaching control method based on virtual vision

Country Status (1)

Country Link
CN (1) CN116619376B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106142091A (en) * 2015-05-12 2016-11-23 佳能株式会社 Information processing method and information processor
KR20180063515A (en) * 2016-12-02 2018-06-12 두산로보틱스 주식회사 Teaching Device of a Robot and Teaching Method thereof
JP2019081236A (en) * 2017-10-31 2019-05-30 セイコーエプソン株式会社 Simulation device, control device and robot
CN114603533A (en) * 2020-12-03 2022-06-10 精工爱普生株式会社 Storage medium and robot teaching method
CN115008439A (en) * 2021-03-05 2022-09-06 佳能株式会社 Information processing apparatus and method, robot system, product manufacturing method, and computer program product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Huafeng; Zhou Junrong; Ye Junlin; Che Haibo; Xu Juan: "Application of industrial robots in the sorting station of an automated production line", 锻压装备与制造技术, no. 02, pages 11-14 *

Also Published As

Publication number Publication date
CN116619376B (en) 2024-01-23

Similar Documents

Publication Publication Date Title
Ong et al. Augmented reality-assisted robot programming system for industrial applications
AU2020201554B2 (en) System and method for robot teaching based on RGB-D images and teach pendant
CN110573308B (en) Computer-based method and system for spatial programming of robotic devices
JP3394322B2 (en) Coordinate system setting method using visual sensor
RU2587104C2 (en) Method and device to control robot
CN108122257A (en) A kind of Robotic Hand-Eye Calibration method and device
CN104002297A (en) Teaching system, teaching method and robot system
US11247335B2 (en) Semi-autonomous robot path planning
CN113379849A (en) Robot autonomous recognition intelligent grabbing method and system based on depth camera
CN111093911B (en) Robot system and method for operating the same
CN109814434B (en) Calibration method and device of control program
CN116958426A (en) Virtual debugging configuration method, device, computer equipment and storage medium
CN210361314U (en) Robot teaching device based on augmented reality technology
US20170312918A1 (en) Programming Method of a Robot Arm
CN116619376B (en) Robot teaching control method based on virtual vision
CN112847300A (en) Teaching system based on mobile industrial robot demonstrator and teaching method thereof
Thoo et al. Online and offline robot programming via augmented reality workspaces
CN109664273B (en) Industrial robot cursor dragging teaching method and system
CN111899629B (en) Flexible robot teaching system and method
Bulej et al. Simulation of manipulation task using iRVision aided robot control in Fanuc RoboGuide software
CN111015675A (en) Typical robot vision teaching system
Pichler et al. User centered framework for intuitive robot programming
CN215701709U (en) Configurable hand-eye calibration device
CN112506378B (en) Bending track control method and device and computer readable storage medium
US20230249345A1 (en) System and method for sequencing assembly tasks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant