WO2021046767A1 - Autonomous robot tooling system, control system, control method and storage medium - Google Patents

Autonomous robot tooling system, control system, control method and storage medium

Info

Publication number
WO2021046767A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
subsystem
control
tooling
image
Application number
PCT/CN2019/105438
Other languages
English (en)
French (fr)
Inventor
华韬
席宝时
吴剑强
李韬
傅玲
Original Assignee
西门子(中国)有限公司
Application filed by 西门子(中国)有限公司
Priority to PCT/CN2019/105438 (published as WO2021046767A1)
Priority to EP19944974.5A (published as EP4005745A4)
Priority to CN201980098648.8A (published as CN114174007B)
Publication of WO2021046767A1

Classifications

    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • B25J9/1661 Programme controls characterised by task planning, object-oriented languages
    • B23P19/06 Screw or nut setting or loosening machines
    • G05B2219/31058 Determination of assembly tooling, fixture
    • G05B2219/40033 Assembly, microassembly
    • G05B2219/40072 Exert a screwing motion
    • G05B2219/40111 For assembly
    • G05B2219/45035 Printed circuit boards, also holes to be drilled in a plate
    • G05B2219/45091 Screwing robot, tighten or loose bolt

Definitions

  • The invention relates to the industrial field, in particular to a vision-based autonomous robot tooling system, a vision-based autonomous robot tooling control system, a vision-based autonomous robot tooling control method, and a computer-readable storage medium.
  • Robots have been widely used in industrial applications such as loading and unloading, welding, stamping, spraying, and handling. Robots can be flexibly combined with different equipment to meet the requirements of complex production processes, realizing multi-machine linkage on automated production lines and digital factory layouts, thereby maximizing labor savings and improving production efficiency.
  • However, a robot needs to be programmed in order to understand and rigorously complete a task. Taking the screw fastening application in workpiece assembly as an example, the fastening process needs to be pre-programmed, including the screw fastening sequence, screw hole positions, screw mode, and torque; the robot can then repeatedly follow the pre-programmed process to tighten screws. Whenever a new product design is introduced, the fastening process needs to be reprogrammed. It can be seen that in areas that require frequent introduction of new products, such as electronics manufacturing, the need to reprogram each time hinders the large-scale use of robots.
  • In view of this, the embodiments of the present invention propose, on one hand, a vision-based autonomous robot tooling system and a vision-based autonomous robot tooling control system, and, on the other hand, a vision-based autonomous robot tooling control method and a computer-readable storage medium, so that a robot work platform can autonomously complete the assembly of a workpiece without reprogramming.
  • The vision-based autonomous robot tooling control system proposed in the embodiments of the present invention includes: an image acquisition module for controlling a vision subsystem to acquire an image of a target workpiece arranged on a workbench; an image processing module for performing image recognition on the image of the target workpiece to obtain tooling information; a robot control module for controlling a robot subsystem to perform corresponding tooling operations; and a process management module for controlling the vision subsystem to collect the image of the target workpiece by calling the image acquisition module, performing image recognition on the image of the target workpiece by calling the image processing module, planning corresponding robot control parameters according to the obtained tooling information, and, according to the robot control parameters, controlling the robot subsystem to perform the corresponding tooling operations by calling the robot control module.
  • In one embodiment, the tooling is screw assembly. The image processing module performs image recognition on the image of the target workpiece to obtain tooling information including screw hole positioning information, and the robot control module controls a robot subsystem to move the screwdriver installed on the operating end of the robot subsystem to each target position. The system further includes: a wire feeding control module for controlling a screw feeding subsystem to provide screws to the corresponding positions; and a screwdriver control module for controlling the driving component of a screwdriver subsystem to start and stop the screwdriver and tighten screws with the corresponding torque. The process management module controls the vision subsystem to collect the image of the target workpiece by calling the image acquisition module; performs image recognition on the image of the target workpiece by calling the image processing module to obtain tooling information including screw hole positioning information; plans robot control parameters including the actual screw hole information and process parameters according to the tooling information; and, according to the robot control parameters, controls the robot subsystem, the screw feeding subsystem, and the screwdriver subsystem to work together by calling the robot control module, the wire feeding control module, and the screwdriver control module, so as to tighten the screws with the specified screw hole coordinates and screw fastening process parameters.
  • In one embodiment, the system further includes a user interface for displaying the working status of the vision-based autonomous robot tooling control system during operation.
  • In one embodiment, the process management module is further configured to obtain a process file, which includes the part information, process parameters, and part templates involved in the tooling, and to store the process file in a process library. The image processing module matches the part templates in the process file with the image of the target workpiece in sequence, and, after the matching degree reaches the set requirement, establishes the correspondence between the part positions in the process file and the part positions of the target workpiece to obtain the tooling information.
  • In one embodiment, a predetermined workpiece type template is further stored in the process library, and the process file further includes workpiece information including the workpiece type. The image processing module is further used to perform image recognition and matching between the workpiece type template in the process library and the image of the target workpiece, and to determine the type of the target workpiece according to the matching result. According to the type of the target workpiece, a process file whose workpiece type in the workpiece information corresponds to the type of the target workpiece is extracted from the process library, after which the part templates in the extracted process file are matched in sequence with the image of the target workpiece.
  • In one embodiment, the process management module is further used to obtain a production file corresponding to the original design, extract production process information including part information, process parameters, and part templates from the production file, and export the production process information to a process file; alternatively, the process management module is further used to receive a process file including part information, process parameters, and part templates input by the user.
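  • To make the process file described above concrete, the following Python sketch shows one possible structure; the field names (workpiece_type, parts, template_path, and so on) are illustrative assumptions, not a schema defined by the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PartEntry:
    """One part (e.g. a screw hole) involved in the tooling."""
    name: str                        # e.g. "M3 screw hole"
    design_position: Tuple[float, float]  # (x, y) in the design frame
    template_path: str               # image file used as the part template
    screw_mode: str = "M3"           # screw type for this hole
    torque: float = 0.4              # fastening torque in N*m (process parameter)

@dataclass
class ProcessFile:
    """Process file as stored in the process library."""
    workpiece_type: str              # matched against workpiece type templates
    parts: List[PartEntry] = field(default_factory=list)

# A process file could be exported from a CAD/CAM production file,
# or entered manually by the user through the user interface.
pcb_process = ProcessFile(
    workpiece_type="PCB-A",
    parts=[PartEntry("hole-1", (12.5, 30.0), "templates/m3_hole.png")],
)
```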
  • The screw fastening system proposed in the embodiments of the present invention includes: a vision subsystem for collecting an image of a target workpiece arranged on a workbench; a robot subsystem for controlling an operating end to perform corresponding tooling operations; and a control subsystem for controlling the vision subsystem to collect the image of the target workpiece, performing image recognition on the image of the target workpiece, generating corresponding robot control parameters according to the obtained tooling information, and controlling the robot subsystem according to the robot control parameters to execute the corresponding tooling operations.
  • In one embodiment, the tooling is screw assembly. The robot subsystem is used to move the screwdriver installed on the operating end of the robot subsystem to each target position, the target positions including the positions corresponding to the screw holes of the target workpiece. The system further includes: a screwdriver subsystem, which includes a screwdriver installed on the operating end of the robot subsystem and a driving component for driving the screwdriver, where the driving component can start and stop the screwdriver and tighten screws with the corresponding torque; and a screw feeding subsystem for providing screws to the corresponding positions. The control subsystem is used to control the vision subsystem to collect the image of the target workpiece, perform image recognition on the image of the target workpiece to obtain tooling information including screw hole positioning information, generate robot control parameters including the actual screw hole information and process parameters according to the tooling information, and control the robot subsystem, the screw feeding subsystem, and the screwdriver subsystem to work in coordination according to the robot control parameters.
  • In one embodiment, the vision subsystem includes a camera arranged above the workbench and/or a camera arranged on the operating-end side of the robot subsystem; when the vision subsystem includes a camera arranged on the operating-end side of the robot subsystem, the control subsystem is further used to control the robot subsystem to move that camera to the various target positions.
  • In one embodiment, the control subsystem is the vision-based autonomous robot tooling control system of any of the above embodiments.
  • The screw fastening control method proposed in the embodiments of the present invention includes: controlling a vision subsystem to collect an image of a target workpiece; performing image recognition on the image of the target workpiece and planning corresponding robot control parameters according to the obtained tooling information; and controlling a robot subsystem according to the robot control parameters to perform corresponding tooling operations, wherein the robot subsystem is used to control an operating end to perform the corresponding tooling operations.
  • In one embodiment, the tooling is screw assembly. Performing image recognition on the image of the target workpiece and planning corresponding robot control parameters according to the obtained tooling information includes: performing image recognition on the image of the target workpiece to obtain tooling information including screw hole positioning information, and generating robot control parameters including screw hole information and process parameters according to the tooling information. Controlling the robot subsystem to perform corresponding tooling operations according to the robot control parameters includes: controlling the robot subsystem, a screw feeding subsystem, and a screwdriver subsystem according to the robot control parameters to work together, fastening screws with the specified screw hole coordinates and screw fastening process parameters. Here, the robot subsystem is used to move the screwdriver installed at the operating end of the robot subsystem to each target position, the target positions including the positions corresponding to the screw holes of the target workpiece; the screwdriver subsystem includes a screwdriver installed at the operating end of the robot subsystem and a driving component for driving the screwdriver, which can start and stop the screwdriver and tighten screws with the corresponding torque; and the screw feeding subsystem is used to provide screws to the corresponding positions.
  • In one embodiment, the method further includes: obtaining a process file including the part information, process parameters, and part templates involved in the tooling, and storing the process file in a process library. Performing image recognition on the image of the target workpiece then includes: matching the part templates in the process file with the image of the target workpiece in sequence, and, after the matching degree reaches the set requirement, establishing the correspondence between the part positions in the process file and the part positions of the target workpiece to obtain the tooling information.
  • In one embodiment, a predetermined workpiece type template is further stored in the process library, and the process file further includes workpiece information including the workpiece type. Before the screw hole templates in the process file are matched in sequence with the image of the target workpiece, the method further includes: performing image recognition and matching between the workpiece type template in the process library and the image of the target workpiece, and determining the type of the target workpiece according to the matching result; then, according to the type of the target workpiece, extracting from the process library a process file whose workpiece type in the workpiece information corresponds to the type of the target workpiece.
  • In one embodiment, the process file is either a process file generated by extracting production process information, including part information, process parameters, and part templates, from a production file corresponding to the original design, or a process file including part information, process parameters, and part templates input by the user.
  • Another screw fastening control system proposed in an embodiment of the present invention includes at least one memory and at least one processor, wherein the at least one memory is used to store a computer program, and the at least one processor is used to call the computer program stored in the at least one memory to execute the vision-based autonomous robot tooling control method of any of the above-mentioned embodiments.
  • The computer-readable storage medium proposed in the embodiments of the present invention has a computer program stored thereon; the computer program can be executed by a processor to implement the vision-based autonomous robot tooling control method of any of the above embodiments.
  • It can be seen from the above solutions that, in the embodiments of the present invention, an image of the target workpiece is acquired and image recognition is performed on it, the actual robot control parameters are planned according to the recognition result, and the robot and related equipment can then be controlled according to these robot control parameters to work in coordination and complete autonomous workpiece assembly on the robot work platform.
  • Further, by presetting workpiece type templates and first recognizing the workpiece type in the image of the target workpiece, autonomous assembly of different workpieces can be realized flexibly when multiple workpiece types are present.
  • In addition, by deriving the process file directly from a production file corresponding to the original design, such as a CAD file, the user's workload can be reduced and the accuracy of the information in the process file improved, avoiding errors from manual input.
  • Fig. 1A is an exemplary structure diagram of a vision-based autonomous robot tooling system in an embodiment of the present invention.
  • FIG. 1B is an exemplary structure diagram of a vision-based autonomous robot tooling system obtained by taking the case where the tooling is screw fastening as an example in an embodiment of the present invention.
  • Figure 2A is a simplified schematic diagram of an image of a target workpiece in an example of the present invention.
  • Fig. 2B is an image of the screw holes in Fig. 2A that have been fixed with screws.
  • Fig. 3 is a schematic diagram of a screw hole template in an example of the present invention.
  • FIGS. 4A and 4B are schematic diagrams of PCB workpiece type templates in an example of the present invention.
  • Fig. 5A is a schematic structural diagram of a control subsystem in the vision-based autonomous robot tooling system shown in Fig. 1 in an embodiment of the present invention.
  • FIG. 5B is a schematic structural diagram of the control subsystem obtained by taking the case where the tooling is screw fastening as an example in the embodiment of the present invention.
  • Fig. 6 is an exemplary flowchart of a vision-based autonomous robot tooling control method in an embodiment of the present invention.
  • Fig. 7 is an exemplary flowchart of a vision-based autonomous robot tooling control method obtained by taking the case where the tooling is screw fastening as an example in an embodiment of the present invention.
  • Fig. 8 is an exemplary structure diagram of yet another vision-based autonomous robot tooling control system in an embodiment of the present invention.
  • In the embodiments of the present invention, in order to enable the robot work platform to autonomously complete the assembly of a workpiece without reprogramming, image recognition is performed on the target workpiece in advance to obtain the relevant tooling information, and the actual robot control parameters, such as the part installation sequence and process parameters, are planned according to the recognition result; the robot and related equipment can then be coordinated according to these robot control parameters to complete autonomous workpiece assembly on the robot work platform.
  • FIG. 1A is an exemplary structure diagram of a vision-based autonomous robot tooling system (also referred to as a vision-based autonomous robot work platform) in an embodiment of the present invention.
  • As shown in FIG. 1A, the system may include: a vision subsystem 10, a robot subsystem 20, and a control subsystem 50.
  • The vision subsystem 10 is used to collect an image of a target workpiece arranged on a workbench.
  • The robot subsystem 20 is used to control the operating end to perform the corresponding tooling operations.
  • The control subsystem 50 is used to control the vision subsystem 10 to collect an image of the target workpiece, perform image recognition on the image of the target workpiece, generate corresponding robot control parameters according to the obtained tooling information, and, according to the robot control parameters, control the robot subsystem 20 and related equipment to perform the corresponding tooling operations.
  • FIG. 1B is an exemplary structure diagram of a vision-based autonomous robot tooling system obtained by taking the case where the tooling is screw fastening as an example in an embodiment of the present invention. As shown in FIG. 1B, the system may include: a vision subsystem 10, a robot subsystem 20, a screwdriver subsystem 30, a screw feeding subsystem 40, and a control subsystem 50.
  • The vision subsystem 10 is used to collect an image of a target workpiece arranged on a workbench.
  • In a specific implementation, the vision subsystem 10 may include at least one camera. The at least one camera may include a fixed camera mounted on the frame of the screw fastening console, and/or a mobile camera mounted on the robot arm (or robotic arm) that moves together with the robot end effector. The camera fixed on the frame can be installed directly above the workpiece; taking a printed circuit board (PCB) in electronics manufacturing as the workpiece, for example, the imaging plane of this camera can be parallel to the screw fastening working plane. For a wider field of view, the fixed camera can be placed at the highest point of the frame above the working plane. The camera mounted on the robot arm is closer to the working plane and has a smaller field of view, so it can be used when precise positioning of the screw holes is required.
  • In one embodiment, a fixed camera and a mobile camera can be used at the same time.
  • The robot subsystem 20 is used to move the screwdriver installed at the operating end of the robot subsystem 20 to the various target positions under the control of the control subsystem 50; the target positions include the positions corresponding to the screw holes of the target workpiece. If the screws of the screw feeding subsystem 40 are picked up from the screw feeder through the screw nozzle of the screwdriver, the target positions may also include the positions of the corresponding screws in the screw feeder. In addition, if a camera is installed on the robot arm, the robot subsystem 20 can also move the camera, under the control of the control subsystem 50, to positions required by the control subsystem 50 to assist in accurate screw hole positioning.
  • In a specific implementation, the robot subsystem 20 can be implemented with a robot arm having 4, 5, or 6 degrees of freedom, or with a SCARA robot.
  • The screwdriver subsystem 30 may include a screwdriver installed at the operating end of the robot subsystem 20 and a corresponding driving component, such as a motor. Under the control of the control subsystem 50, the driving component can start and stop the screwdriver and tighten screws with the corresponding torque.
  • The screw feeding subsystem 40 is used to provide screws to the corresponding positions.
  • In a specific implementation, the screw feeding subsystem 40 can be an automatic feeding system, or a suction or magnetic feeding system using, for example, vacuum accessories or magnetic bits. The specific feeding method can be selected flexibly according to the specific type of screws and the user's needs. Screws can be supplied through a supply hose; alternatively, the screw nozzle of the screwdriver can take screws out of the screw feeder by vacuum suction or a magnetic bit, with the screw feeder ensuring that the screws are accurately arranged in preset positions.
  • That is, the screw feeding subsystem 40 can provide the required screws to the screwdriver subsystem 30 through the supply hose under the control of the control subsystem 50, after which the screw nozzle of the screwdriver is brought to the corresponding screw hole of the target workpiece; alternatively, the screw feeding subsystem 40 can arrange the various required screws in preset positions, and the robot subsystem 20, under the control of the control subsystem 50, operates the screwdriver to take the screws out of the screw feeder by vacuum suction or a magnetic bit.
  • The control subsystem 50 is used to control the vision subsystem to collect an image of the target workpiece and to perform image recognition on the image of the target workpiece, obtaining tooling information including screw hole positioning information. According to the tooling information, robot control parameters including the actual screw hole information and process parameters (such as torque) can be generated, and the robot subsystem, the screw feeding subsystem, and the screwdriver subsystem are controlled to work in coordination according to these robot control parameters.
  • The screw hole information may include the screw installation sequence, the screw mode (that is, the screw type), the screw hole positions (such as the coordinate position of each screw hole), the screw hole sizes (corresponding to the respective screw sizes), and so on.
  • In a specific implementation, the control subsystem 50 may first obtain a process file including the part information related to the tooling (such as the screw hole information involved in the above-mentioned screw fastening), the process parameters, and a part template (such as a screw hole template). The process file can be derived from a pre-acquired production file corresponding to the original design; for example, the production file can be a CAD file or a CAM file, from which the production process information, including the part information (such as screw hole information), process parameters, and part templates (such as a screw hole template), is extracted and exported to a process file. Alternatively, the process file may be received as manual input from a user; that is, the user can input a process file himself. The process file is used to match and correlate with the actually captured image of the workpiece when the robot performs assembly, and finally to generate the actual control program and parameters.
  • Specifically, the part templates (such as screw hole templates) in the process file can be matched with the image of the target workpiece in sequence, and, after the matching degree reaches the set requirement, the correspondence between the part positions (such as screw hole positions) in the process file and the part positions (such as screw hole positions) of the target workpiece is established to obtain the tooling information; the robot control parameters containing the actual part information (such as screw hole information) and process parameters are then generated from the obtained tooling information.
  • The robot control parameters can take multiple implementation forms. For example, a current processing file recording the robot control parameters may be generated, and the robot subsystem 20 and related equipment (such as the screw feeding subsystem 40 and the screwdriver subsystem 30) are then controlled to work together according to the current processing file.
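  • As an illustration of what such a "current processing file" of planned robot control parameters might contain, consider the following sketch; every field name and value here is a hypothetical example rather than the patent's defined format.

```python
# Hypothetical example of planned robot control parameters, as might be
# recorded in a "current processing file" after screw hole positioning.
current_processing = [
    # One entry per screw, in installation order.
    {"hole_id": 1, "xy_mm": (102.3, 54.7), "screw_mode": "M3", "torque_nm": 0.4},
    {"hole_id": 2, "xy_mm": (102.3, 18.2), "screw_mode": "M3", "torque_nm": 0.4},
    # Hole 3 was detected as already fastened in the image, so it is skipped.
    {"hole_id": 4, "xy_mm": (31.0, 54.7), "screw_mode": "M2.5", "torque_nm": 0.25},
]

for op in current_processing:
    # In the real system these calls would go to the robot control module,
    # wire feeding control module, and screwdriver control module.
    print(f"feed {op['screw_mode']} screw, move to {op['xy_mm']}, "
          f"tighten to {op['torque_nm']} N*m")
```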
  • For part positioning, a normalized cross-correlation (NCC) algorithm can be used. This is a gray-value based method: the mean is subtracted from each image, and the result is divided by the standard deviation. The normalized cross-correlation between the part template and a region of the target workpiece image can be written as the following formula (1):

    $\mathrm{NCC} = \frac{1}{n}\sum_{x,y}\frac{\big(f(x,y)-\mu_f\big)\big(t(x,y)-\mu_t\big)}{\sigma_f\,\sigma_t}$  (1)

    where:
    n: the number of pixels of the part template image;
    f(x,y): the gray value of each pixel in the target workpiece image;
    t(x,y): the gray value of each pixel in the part template image;
    $\mu_f$: the average gray value of the pixels in the target workpiece image;
    $\mu_t$: the average gray value of the pixels in the part template image;
    $\sigma_f$: the standard deviation of the gray values in the target workpiece image;
    $\sigma_t$: the standard deviation of the gray values in the part template image.
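  • A minimal sketch of NCC-based part positioning under the definitions of formula (1), assuming grayscale numpy arrays; this illustrates the technique, not the patent's actual implementation.

```python
import numpy as np

def ncc_score(region: np.ndarray, template: np.ndarray) -> float:
    """Normalized cross-correlation of formula (1): subtract the mean,
    divide by the standard deviation, then average the products."""
    f = region.astype(np.float64)
    t = template.astype(np.float64)
    if f.std() == 0 or t.std() == 0:       # flat region: no correlation
        return 0.0
    f_z = (f - f.mean()) / f.std()
    t_z = (t - t.mean()) / t.std()
    return float((f_z * t_z).mean())       # 1.0 for a perfect match

def locate_part(image: np.ndarray, template: np.ndarray, threshold: float = 0.8):
    """Slide the part template over the target workpiece image and return
    every (x, y) whose NCC score reaches the set matching requirement."""
    th, tw = template.shape
    return [
        (x, y)
        for y in range(image.shape[0] - th + 1)
        for x in range(image.shape[1] - tw + 1)
        if ncc_score(image[y:y + th, x:x + tw], template) >= threshold
    ]
```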
  • Alternatively, shape-based template matching can be sufficient to locate the parts. In some cases, some parts have already been installed; for example, some screw holes have already been fitted with screws. The already installed parts, such as screw holes already fixed with screws, can then be excluded from the matching.
  • FIG. 2A shows a simplified schematic diagram of an image of a target workpiece in an example of the present invention. Since the screw holes and screws in the original image may be unclear at relatively low resolution, the figures in this document are simplified to highlight the relevant features, giving a simplified version of the original image. The image of the target workpiece shown in Fig. 2A is used for screw hole positioning, and the screw hole template shown in Fig. 3 is used for the matching. The screw holes of the target workpiece in Fig. 2A that have already been fixed with screws, shown in Fig. 2B, are eliminated; correspondingly, these screw holes can be removed from the robot control parameters, and their screw fastening can be skipped during the actual installation.
  • In this way, the parts (such as screw holes) in the image of the target workpiece can be matched with the parts (such as screw holes) in the process file. From the matching, the rotation and translation vectors of the model in the image of the target workpiece can be obtained, and the rotation vector and translation vector are combined to establish the homography transformation T between the model frame and the image frame. The position of a matched part (such as a screw hole) in the model frame can then be mapped into the image through the matrix T, as shown in the following formula (2):

    $\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = T\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$  (2)

    where (x, y) are the coordinates of a part in the model frame and (x', y') are the corresponding coordinates in the image of the target workpiece.
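  • A small numpy sketch of formula (2), assuming the homography T is assembled from the recovered rotation angle and translation as a 3x3 homogeneous matrix (a rigid 2D transform; the function names are illustrative).

```python
import numpy as np

def build_T(theta: float, tx: float, ty: float) -> np.ndarray:
    """Homogeneous transform from the model frame to the image frame,
    combining a rotation angle theta with a translation (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0, 1.0]])

def model_to_image(T: np.ndarray, points_xy: np.ndarray) -> np.ndarray:
    """Map N model-frame part coordinates (N, 2) into the image frame
    according to formula (2)."""
    ones = np.ones((points_xy.shape[0], 1))
    homogeneous = np.hstack([points_xy, ones])   # (N, 3)
    mapped = (T @ homogeneous.T).T               # (N, 3)
    return mapped[:, :2] / mapped[:, 2:3]        # back to (N, 2)

# Example: map two screw hole positions from the process file's model frame.
T = build_T(theta=np.deg2rad(3.0), tx=12.0, ty=-4.5)
holes_model = np.array([[102.3, 54.7], [31.0, 18.2]])
print(model_to_image(T, holes_model))
```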
  • In the case of multiple workpiece types, each process file can be stored in a process library, and each process file can then further include workpiece information. The workpiece information can include the type, name, and design-related information of the workpiece. The process library also stores the workpiece type templates corresponding to the different target workpieces. Taking PCB workpieces as an example, the PCB outline and some marks on the PCB can be used to create a workpiece type template based on shape matching. Figures 4A and 4B respectively show schematic diagrams of two PCB type templates; these templates use the outline shape of the PCB workpiece, the gaps on the outline, and the center hole on the PCB workpiece to build the PCB workpiece type template.
  • To speed up the search for workpiece type templates, the image pyramid method can be used. For example, the number of image pyramid levels can be set to 3 or higher, and the search starts from the low-resolution, small-size image at the top of the pyramid. If the search succeeds at that level, the search continues on the higher-resolution, larger image at the next level of the pyramid, and so on, until the original image size at the bottom of the pyramid is reached. In addition, the scaling of the model can be set to be anisotropic, meaning that the row scaling factor and the column scaling factor may differ.
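  • A compact sketch of such a coarse-to-fine pyramid search, here using OpenCV's cv2.pyrDown and cv2.matchTemplate as stand-ins for the matching described above; the level count and score threshold are assumptions.

```python
import cv2

def pyramid_search(image, template, levels=3, threshold=0.8):
    """Coarse-to-fine template search over an image pyramid.
    Returns the best match position found, or None if any level fails
    to reach the score threshold."""
    imgs, tmpls = [image], [template]
    for _ in range(levels - 1):               # build the pyramid, last = smallest
        imgs.append(cv2.pyrDown(imgs[-1]))
        tmpls.append(cv2.pyrDown(tmpls[-1]))

    pos = None
    for lvl in range(levels - 1, -1, -1):     # start at the coarsest level
        scores = cv2.matchTemplate(imgs[lvl], tmpls[lvl], cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(scores)
        if max_val < threshold:
            return None                       # search failed at this level
        pos = max_loc
        # A full implementation would restrict the next, finer level to a
        # small window around 2 * pos instead of searching the whole image.
    return pos
```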
  • In the shape-based matching, the model of the workpiece type template consists of a set of points with gradient direction vectors $d_i^T = (d_{u,i}^T, d_{v,i}^T)$, where the superscript T represents the workpiece type template and the subscript i represents the serial number of each point. The model can be set with a small scale step and rotation angle step to avoid missing any matches in the picture, and the model of the workpiece type template is compared with the image of the target workpiece at all positions using a similarity measure. The gradient of each point in the target workpiece image is $e_{u,v}^S = (e_u^S, e_v^S)$, where the subscripts u and v respectively represent the abscissa and the ordinate in the image of the target workpiece, and the superscript S represents the image of the target workpiece. The idea of the similarity measure is to take the sum of the normalized dot products of the gradient vectors of the workpiece type template and of the target workpiece image over all points of the model data set, which generates a score at each point of the image of the target workpiece. The similarity measure function can be written as the following formula (3):

    $s = \frac{1}{n}\sum_{i=1}^{n}\frac{\langle d_i^T,\; e_{u+u_i,\,v+v_i}^S\rangle}{\lVert d_i^T\rVert\cdot\lVert e_{u+u_i,\,v+v_i}^S\rVert}$  (3)

    where n is the number of points in the model and $(u_i, v_i)$ is the offset of the i-th model point relative to the evaluated position (u, v). If the model of the workpiece type template exactly matches the image of the target workpiece, this function returns a score of 1. The score corresponds to the portion of the object that is visible in the image of the target workpiece; if the object of the workpiece type template does not appear in the image of the target workpiece at all, the score obtained after matching is 0.
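  • The gradient-based score of formula (3) can be sketched as follows; the model representation (point offsets plus gradient vectors) and the dense image gradient field are assumptions for illustration.

```python
import numpy as np

def similarity_score(model_pts, model_grads, image_grads, u, v):
    """Score of formula (3) at image position (u, v).

    model_pts:   (n, 2) integer offsets (u_i, v_i) of the template points
                 (assumed to stay inside the image when shifted by (u, v))
    model_grads: (n, 2) gradient vectors d_i of the workpiece type template
    image_grads: (H, W, 2) gradient field e of the target workpiece image
    """
    total = 0.0
    n = len(model_pts)
    for (ui, vi), d in zip(model_pts, model_grads):
        e = image_grads[v + vi, u + ui]          # gradient at shifted point
        norm = np.linalg.norm(d) * np.linalg.norm(e)
        if norm > 0:                             # occluded/flat points add 0
            total += float(np.dot(d, e)) / norm
    return total / n                             # 1.0 for a perfect match
```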
  • When performing screw hole positioning on a target workpiece whose type has been identified, the process file corresponding to that workpiece type can be looked up according to the workpiece information of each process file in the process library, and the screw hole templates in that process file are matched with the target workpiece image in sequence.
  • Controlling the screw feeding subsystem 40 and the screwdriver subsystem 30 to work together according to the robot control parameters can mean, specifically, using the screw installation sequence, screw mode, screw hole positions, screw hole sizes, and process parameters in the robot control parameters to control the screw feeding subsystem 40 to provide screws of the corresponding mode and size to the corresponding positions, to control the robot subsystem 20 to move the screwdriver to the corresponding positions, and to control the screwdriver subsystem 30 to start and stop and tighten the screws with the corresponding torque, while monitoring the quality of the screw tightening process.
  • In a specific implementation, the control subsystem 50 may be composed of a communication switch and/or a computer (such as a PC) and/or a PLC (programmable logic controller) and/or an embedded computing device and/or any suitable computing device. These devices jointly control the aforementioned subsystems to ensure the normal operation of the screw fastening system. Since the main peripheral devices are connected to the control device through Ethernet or another communication bus, the communication switch is used to expand the number of communication ports in the control subsystem 50. The PLC is generally used for digital I/O control of the peripheral equipment to ensure the reliability and real-time performance of the control of the entire system. The main image acquisition, image processing, and overall control logic can all be executed by an industrial computer/PC/embedded computing device, whose central computing unit should be powerful enough to perform the entire computation.
  • In a specific implementation, the control subsystem 50 may have multiple implementation modes. FIG. 5A shows a schematic structural diagram of the control subsystem 50 in one example. As shown in FIG. 5A, it may include: an image acquisition module 503, an image processing module 504, a robot control module 505, and a process management module 508. In some embodiments, a user interface 501 may further be included.
  • The image acquisition module 503 is used to control a vision subsystem to acquire an image of a target workpiece arranged on a workbench.
  • The image processing module 504 is used to perform image recognition on the image of the target workpiece to obtain tooling information.
  • The robot control module 505 is used to control a robot subsystem to perform the corresponding tooling operations.
  • The process management module 508 is used to control the vision subsystem to collect the image of the target workpiece by calling the image acquisition module 503, to perform image recognition on the image of the target workpiece by calling the image processing module 504 and plan the corresponding robot control parameters according to the obtained tooling information, and, according to the robot control parameters, to control the robot subsystem to perform the corresponding tooling operations by calling the robot control module 505.
  • The user interface 501 is used to display the working status of the vision-based autonomous robot tooling control system during operation.
  • In a specific implementation, the control subsystem 50 may have different specific implementation structures. Taking the case where the tooling is screw fastening as an example, the control subsystem 50 may be as shown in FIG. 5B, including: a user interface 501, a process library 502, an image acquisition module 503, an image processing module 504, a robot control module 505, a wire feeding control module 506, a screwdriver control module 507, and a process management module 508.
  • The user interface 501 is used to display the working status of the screw fastening system during operation. When a system error occurs during operation, the user can intervene and adjust the screw tightening process through the user interface. The user interface 501 also provides an interface for inputting and editing the workpiece and process information in the process library 502.
  • The image acquisition module 503 is used to control the cameras of the vision subsystem 10 to acquire images of a target workpiece arranged on a workbench, and to acquire the real-time images of the target workpiece collected by the cameras. The number and types of cameras included in the vision subsystem 10 may be configurable.
  • The image processing module 504 is used to further process the real-time images of the target workpiece acquired by the image acquisition module 503, including image recognition and screw hole positioning. These real-time images and the data stored in the process library 502 can be used to identify the workpiece information and locate all the screw holes on the workpiece. Specifically, the workpiece type of the target workpiece can be determined from the real-time image of the target workpiece and the workpiece type templates pre-stored in the process library 502; the process file corresponding to that workpiece type can then be looked up according to the workpiece information of each process file in the process library 502, and the screw hole templates in the process file are matched in sequence with the image of the target workpiece. After the matching degree reaches the set requirement, the correspondence between the screw hole positions in the process file and the screw hole positions of the target workpiece is established.
  • The robot control module 505 is used to control the movement of the robot according to the preset coordinates and posture, and to monitor the state of the robot. For example, the robot subsystem 20 is controlled to move the screwdriver installed at the operating end of the robot subsystem 20 to the various target positions; specifically, the robot subsystem 20 can be controlled to move the screwdriver to the corresponding positions according to the screw installation sequence and the screw hole positions in the robot control parameters. The robot control module 505 can also be configured according to the actual robot used in the system.
  • The wire feeding control module 506 supports multiple devices corresponding to the different screw feeding methods, such as an automatic feeding system or a suction/magnetic screwdriver, and ensures that a screw is correctly fed to the tip of the screwdriver before the screw is finally driven in. For example, the screw feeding subsystem 40 can be controlled to provide screws of the corresponding mode and size to the corresponding positions according to the screw installation sequence and screw mode in the robot control parameters, such as supplying a screw through a hose to the position of the screwdriver. For the suction/magnetic screwdriver feeding method, the screws can be provided at preset positions, from which the screwdriver picks up the corresponding screw.
  • The screwdriver control module 507 is used to control the driving component of the screwdriver subsystem 30, so that the driving component controls the start, stop, and torque of the screwdriver, and to monitor whether the screw tightening operation proceeds smoothly.
  • The process management module 508 is used to control the vision subsystem 10 to collect images of the target workpiece by calling the image acquisition module 503; to further process the real-time images of the target workpiece acquired by the image acquisition module 503, including image recognition and screw hole positioning, by calling the image processing module 504; and to plan the robot control parameters, including the actual screw hole information and process parameters. The process management module 508 is also used to control the robot subsystem 20, the screw feeding subsystem 40, and the screwdriver subsystem 30 to work together, by calling the robot control module 505, the wire feeding control module 506, and the screwdriver control module 507, so as to tighten the screws with the specified screw hole coordinates and screw fastening process parameters.
  • In a specific implementation, the process management module 508 may further be used to obtain a process file that includes the part information (such as screw hole information), process parameters, and part templates (such as screw hole templates), and to store the process file in the process library 502. For example, the process management module 508 may first obtain a production file corresponding to the original design, extract from it the production process information including the part information (such as screw hole information), process parameters, and part templates (such as screw hole templates), and export this production process information to a process file; alternatively, the process management module 508 may first receive a process file including the part information (such as screw hole information), process parameters, and part templates (such as screw hole templates) provided by the user, for example input through the user interface 501.
  • When the image processing module 504 performs image recognition, it can match the part templates (such as screw hole templates) in the process file with the image of the target workpiece in sequence (such as screw hole matching); after the matching degree reaches the set requirement, the correspondence between the part positions (such as screw hole positions) in the process file and the part positions (such as screw hole positions) of the target workpiece is established.
  • In a specific implementation, the process library 502 may further store predetermined workpiece type templates, and the process file further includes workpiece information including the workpiece type. In that case, the image processing module 504 first performs image recognition and matching between the workpiece type templates in the process library and the image of the target workpiece and determines the type of the target workpiece from the matching result; according to the type of the target workpiece, it extracts from the process library 502 the process file whose workpiece type in the workpiece information corresponds to the type of the target workpiece, and then performs part matching (such as screw hole matching) between the part templates (such as screw hole templates) in the extracted process file and the image of the target workpiece.
  • The vision-based autonomous robot tooling system in the embodiments of the present invention has been described in detail above; the vision-based autonomous robot tooling control method in the embodiments of the present invention is described in detail below. The method can be implemented on the screw fastening system in the embodiments of the present invention. For details not disclosed in the method embodiments of the present invention, please refer to the corresponding description in the system embodiments, which is not repeated here.
  • Fig. 6 is an exemplary flowchart of a vision-based autonomous robot tooling control method in an embodiment of the present invention. As shown in Figure 6, the method may include the following steps:
  • In step S62, a vision subsystem is controlled to collect an image of a target workpiece.
  • In step S64, image recognition is performed on the image of the target workpiece to obtain tooling information including screw hole positioning information, and robot control parameters including screw hole information and process parameters are generated according to the tooling information.
  • In step S66, a robot subsystem, a screw feeding subsystem, and a screwdriver subsystem are controlled to work together according to the robot control parameters, so as to tighten the screws with the specified screw hole coordinates and screw fastening process parameters.
  • Here, the robot subsystem is used to control the operating end to perform the corresponding tooling operations; specifically, the robot subsystem is used to move the screwdriver installed on the operating end of the robot subsystem to each target position, the target positions including the positions corresponding to the screw holes of the target workpiece. The screwdriver subsystem includes a screwdriver installed at the operating end of the robot subsystem and a driving component for driving the screwdriver; the driving component can start and stop the screwdriver and tighten screws with the corresponding torque. The screw feeding subsystem is used to provide the screws to the corresponding positions.
  • In one embodiment, the method may further include: obtaining a process file, which includes the part information (such as screw hole information), process parameters, and part templates (such as screw hole templates), and storing the process file in a process library. Correspondingly, performing image recognition on the image of the target workpiece in step S64 may include: sequentially performing part matching (such as screw hole matching) between the part templates (such as screw hole templates) in the process file and the image of the target workpiece; after the matching degree reaches the set requirement, establishing the correspondence between the part positions (such as screw hole positions) in the process file and the part positions (such as screw hole positions) of the target workpiece; and generating the robot control parameters including the part information (such as screw hole information) and process parameters.
  • In one embodiment, predetermined workpiece type templates may further be stored in the process library, and the process file further includes workpiece information including the workpiece type. Before the part templates (such as screw hole templates) in the process file are sequentially matched (such as screw hole matching) with the image of the target workpiece, the method may further include: performing image recognition and matching between the workpiece type templates in the process library and the image of the target workpiece, and determining the type of the target workpiece according to the matching result; and, according to the type of the target workpiece, extracting from the process library the process file whose workpiece type in the workpiece information corresponds to the type of the target workpiece.
  • In one embodiment, the process file may be: a process file generated by extracting the production process information, including the part information (such as screw hole information), process parameters, and part templates (such as screw hole templates), from a production file corresponding to the original design; or a process file input by the user that includes the part information (such as screw hole information), process parameters, and part templates (such as screw hole templates).
  • Fig. 7 is an exemplary flowchart of a vision-based autonomous robot tooling control method obtained by taking the case where the tooling is screw fastening as an example in an embodiment of the present invention. As shown in Figure 7, the method may include the following steps:
  • Workpiece type templates can be pre-stored in a process library; in step S74, each workpiece type template can then be extracted from the process library for matching with the image of the target workpiece.
  • In step S75, the corresponding process file is used to perform screw hole positioning on the image of the target workpiece. The process file can be generated in advance and stored in the process library; in this step S75, the corresponding process file can be extracted from the process library according to the type of the target workpiece determined in step S74, and the screw hole template for each installation position in the file is matched with the image of the target workpiece in turn.
  • In step S76, robot control parameters including the screw hole information and process parameters are generated according to the results of the image recognition and screw hole positioning.
  • In step S80, it is judged whether all the workpiece fastening tasks have been completed; if yes, the method ends; otherwise, it returns to step S72.
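  • Read as pseudocode, the loop of steps S71~S80 amounts to roughly the following; the subsystem objects (vision, library, robot, feeder, screwdriver) are hypothetical stand-ins for the modules described above, not an API defined by the patent.

```python
def run_autonomous_fastening(vision, library, robot, feeder, screwdriver):
    """One possible reading of the Fig. 7 loop (steps S71~S80);
    all five collaborator objects are hypothetical stand-ins."""
    while True:
        image = vision.capture()                        # acquire workpiece image
        wtype = library.match_workpiece_type(image)     # S74: identify workpiece type
        process = library.get_process_file(wtype)       # S75: fetch matching process file
        holes = process.locate_screw_holes(image)       # S75: screw hole positioning
        plan = process.plan_control_parameters(holes)   # S76: plan control parameters

        for op in plan:                                 # execute the fastening plan
            feeder.supply(op.screw_mode)                # feed the right screw
            robot.move_to(op.position)                  # move screwdriver to the hole
            screwdriver.tighten(op.torque)              # tighten with specified torque

        if not vision.more_workpieces():                # S80: all fastening tasks done?
            break                                       # yes: end
        # otherwise loop back to handle the next workpiece (S72)
```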
  • Fig. 8 is an exemplary structure diagram of yet another vision-based autonomous robot tooling control system in an embodiment of the present invention. As shown in Fig. 8, the system may include: at least one memory 81 and at least one processor 82. In addition, some other components may be included, such as a display and a communication port. These components communicate through a bus 83.
  • The at least one memory 81 is used to store a computer program. In one embodiment, the computer program can be understood to include the various modules of the vision-based autonomous robot tooling control system shown in FIGS. 5A and 5B. In addition, the at least one memory 81 may also store an operating system, including but not limited to: the Android operating system, Symbian operating system, Windows operating system, Linux operating system, and so on.
  • The at least one processor 82 is configured to call the computer program stored in the at least one memory 81 to execute the vision-based autonomous robot tooling control method described in the embodiments of the present invention. The processor 82 can be a CPU, a processing unit/module, an ASIC, a logic module, a programmable gate array, or the like, and it can receive and send data through the communication port.
  • A hardware module may include specially designed permanent circuits or logic devices (such as dedicated processors, for example FPGAs or ASICs) for completing specific operations. A hardware module may also include programmable logic devices or circuits temporarily configured by software (for example, including general-purpose processors or other programmable processors) for performing specific operations.
  • In addition, the embodiments of the present invention also provide computer software that can be executed on a server, a server cluster, or a cloud platform; the computer software can be executed by a processor to implement the vision-based autonomous robot tooling control method described in the embodiments of the present invention.
  • The embodiments of the present invention also provide a computer-readable storage medium on which a computer program is stored; the computer program can be executed by a processor to implement the vision-based autonomous robot tooling control method described in the embodiments of the present invention. Specifically, a system or device equipped with a storage medium may be provided: the software program code realizing the functions of any one of the above embodiments is stored on the storage medium, and the computer (or the CPU or MPU) of the system or device reads and executes the program code stored in the storage medium.
  • In addition, the operating system or the like running on the computer can also be made to complete part or all of the actual operations through instructions based on the program code.
  • Implementations of the storage medium used to provide the program code include floppy disks, hard disks, magneto-optical disks, optical disks (such as CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), magnetic tape, non-volatile memory cards, and ROM. Alternatively, the program code can be downloaded from a server computer via a communication network.
  • It can be seen from the above solutions that, in the embodiments of the present invention, an image of the target workpiece is acquired and image recognition is performed on it, the actual robot control parameters are planned according to the recognition result, and the robot and related equipment can then be controlled according to these parameters to work in coordination and complete autonomous workpiece assembly on the robot work platform. Further, by first recognizing the workpiece type in the image of the target workpiece, autonomous assembly of different workpieces can be realized flexibly when multiple workpiece types are present. In addition, deriving the process file directly from the production file simplifies the user's workload and improves the accuracy of the information in the process file, avoiding errors from manual input.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

A vision-based autonomous robot tooling system, control system, control method, and storage medium. The vision-based autonomous robot tooling system includes: a vision subsystem (10) for collecting an image of a target workpiece; a robot subsystem (20) for controlling an operating end to perform corresponding tooling operations; and a control subsystem (50) for controlling the vision subsystem (10) to collect the image of the target workpiece, performing image recognition on the image of the target workpiece, generating corresponding robot control parameters according to the obtained tooling information, and controlling the robot subsystem (20) according to the robot control parameters to perform the corresponding tooling operations.

Description

Autonomous robot tooling system, control system, control method and storage medium

Technical Field

The present invention relates to the industrial field, and in particular to a vision-based autonomous robot tooling system, a vision-based autonomous robot tooling control system, a vision-based autonomous robot tooling control method, and a computer-readable storage medium.

Background Art

With the development of robot-related technologies, robots have been widely used in industrial applications such as loading and unloading, welding, stamping, spraying, and handling. Robots can be flexibly combined with different equipment to meet the requirements of complex production processes, realizing multi-machine linkage on automated production lines and digital factory layouts, maximizing labor savings and improving production efficiency.

However, a robot needs to be programmed in order to understand and rigorously complete a task. Taking the screw fastening application in workpiece assembly as an example, the fastening process needs to be pre-programmed, including the screw fastening sequence, screw hole positions, screw mode, and torque; the robot can then repeatedly follow the pre-programmed process to fasten screws. When a new product design is introduced, the fastening process needs to be reprogrammed. It can be seen that in fields that require frequent introduction of new products, such as electronics manufacturing, the need to reprogram each time hinders the large-scale use of robots.
Summary of the Invention

In view of this, the embodiments of the present invention propose, on one hand, a vision-based autonomous robot tooling system and a vision-based autonomous robot tooling control system, and, on the other hand, a vision-based autonomous robot tooling control method and a computer-readable storage medium, so that a robot work platform can autonomously complete workpiece assembly without reprogramming.

The vision-based autonomous robot tooling control system proposed in the embodiments of the present invention includes: an image acquisition module for controlling a vision subsystem to acquire an image of a target workpiece arranged on a workbench; an image processing module for performing image recognition on the image of the target workpiece to obtain tooling information; a robot control module for controlling a robot subsystem to perform corresponding tooling operations; and a process management module for controlling the vision subsystem to collect the image of the target workpiece by calling the image acquisition module, performing image recognition on the image of the target workpiece by calling the image processing module, planning corresponding robot control parameters according to the obtained tooling information, and controlling the robot subsystem to perform the corresponding tooling operations by calling the robot control module according to the robot control parameters.

In one embodiment, the tooling is screw assembly; the image processing module performs image recognition on the image of the target workpiece to obtain tooling information including screw hole positioning information; and the robot control module controls a robot subsystem to move the screwdriver installed on the operating end of the robot subsystem to each target position. The system further includes: a wire feeding control module for controlling a screw feeding subsystem to provide screws to the corresponding positions; and a screwdriver control module for controlling the driving component of a screwdriver subsystem to start and stop the screwdriver and tighten screws with the corresponding torque. The process management module controls the vision subsystem to collect the image of the target workpiece by calling the image acquisition module; performs image recognition on the image of the target workpiece by calling the image processing module to obtain tooling information including screw hole positioning information; plans robot control parameters including the actual screw hole information and process parameters according to the tooling information; and, according to the robot control parameters, controls a robot subsystem, a screw feeding subsystem, and a screwdriver subsystem to work together by calling the robot control module, the wire feeding control module, and the screwdriver control module, so as to tighten the screws with the specified screw hole coordinates and screw fastening process parameters.

In one embodiment, the system further includes a user interface for displaying the working status of the vision-based autonomous robot tooling control system during operation.

In one embodiment, the process management module is further used to obtain a process file, which includes the part information, process parameters, and part templates involved in the tooling, and to store the process file in a process library; the image processing module matches the part templates in the process file with the image of the target workpiece in sequence, and, after the matching degree reaches the set requirement, establishes the correspondence between the part positions in the process file and the part positions of the target workpiece to obtain the tooling information.

In one embodiment, a predetermined workpiece type template is further stored in the process library; the process file further includes workpiece information including the workpiece type; and the image processing module is further used to perform image recognition and matching between the workpiece type template in the process library and the image of the target workpiece, determine the type of the target workpiece according to the matching result, extract from the process library a process file whose workpiece type in the workpiece information corresponds to the type of the target workpiece, and then perform the operation of matching the part templates in the extracted process file with the image of the target workpiece in sequence.

In one embodiment, the process management module is further used to obtain a production file corresponding to the original design, extract production process information including part information, process parameters, and part templates from the production file, and export the production process information to a process file; alternatively, the process management module is further used to receive a process file including part information, process parameters, and part templates input by the user.
The vision-based autonomous robot tooling system proposed in the embodiments of the present invention comprises: a vision subsystem for capturing an image of a target workpiece arranged on a worktable; a robot subsystem for controlling an operating end to perform a corresponding tooling operation; and a control subsystem for controlling the vision subsystem to capture an image of the target workpiece, performing image recognition on the image of the target workpiece, generating corresponding robot control parameters according to the obtained tooling information, and controlling the robot subsystem according to the robot control parameters to perform the corresponding tooling operation.
In one embodiment, the tooling is screw assembly; the robot subsystem is configured to move a screwdriver mounted on the operating end of the robot subsystem to each target position, the target positions including the positions corresponding to the screw holes of the target workpiece. The system further comprises: a screwdriver subsystem, comprising the screwdriver mounted on the operating end of the robot subsystem and a drive component for driving the screwdriver, the drive component being capable of controlling the screwdriver to start and stop and to fasten screws with corresponding torque; and a screw feeding subsystem for supplying screws to corresponding positions. The control subsystem is configured to control the vision subsystem to capture an image of the target workpiece, perform image recognition on the image of the target workpiece to obtain tooling information including screw hole positioning information, generate, according to the tooling information, robot control parameters including actual screw hole information and process parameters, and control the robot subsystem, the screw feeding subsystem and the screwdriver subsystem to work in coordination according to the robot control parameters.
In one embodiment, the vision subsystem comprises: a camera arranged above the worktable and/or a camera arranged on the operating-end side of the robot subsystem; and when the vision subsystem comprises a camera arranged on the operating-end side of the robot subsystem, the control subsystem is further configured to control the robot subsystem to move the camera arranged on the operating-end side of the robot subsystem to each target position.
In one embodiment, the control subsystem is the vision-based autonomous robot tooling control system of any of the embodiments described above.
The vision-based autonomous robot tooling control method proposed in the embodiments of the present invention comprises: controlling a vision subsystem to capture an image of a target workpiece; performing image recognition on the image of the target workpiece and planning corresponding robot control parameters according to the obtained tooling information; and controlling a robot subsystem according to the robot control parameters to perform a corresponding tooling operation, wherein the robot subsystem is configured to control an operating end to perform the corresponding tooling operation.
In one embodiment, the tooling is screw assembly. Performing image recognition on the image of the target workpiece and planning corresponding robot control parameters according to the obtained tooling information comprises: performing image recognition on the image of the target workpiece to obtain tooling information including screw hole positioning information, and generating, according to the tooling information, robot control parameters including screw hole information and process parameters. Controlling the robot subsystem according to the robot control parameters to perform the corresponding tooling operation comprises: controlling the robot subsystem, a screw feeding subsystem and a screwdriver subsystem to work in coordination according to the robot control parameters, fastening screws at the specified screw hole coordinates with the specified screw fastening process parameters. The robot subsystem is configured to move a screwdriver mounted on the operating end of the robot subsystem to each target position, the target positions including the positions corresponding to the screw holes of the target workpiece; the screwdriver subsystem comprises the screwdriver mounted on the operating end of the robot subsystem and a drive component for driving the screwdriver, the drive component being capable of controlling the screwdriver to start and stop and to fasten screws with corresponding torque; and the screw feeding subsystem is configured to supply screws to corresponding positions.
In one embodiment, the method further comprises: obtaining a process file, the process file including the part information, process parameters and part templates involved in the tooling, and storing the process file in a process library. Performing image recognition on the image of the target workpiece then comprises: matching the part templates in the process file one by one against the image of the target workpiece and, once the matching degree meets a set requirement, establishing a correspondence between the part positions in the process file and the part positions of the target workpiece, thereby obtaining the tooling information.
In one embodiment, the process library further stores predetermined workpiece type templates, and the process file further includes workpiece information including the workpiece type. Before the screw hole templates in the process file are matched one by one against the image of the target workpiece, the method further comprises: performing image recognition and matching between the workpiece type templates in the process library and the image of the target workpiece, determining the type of the target workpiece according to the matching result, and extracting from the process library, according to the type of the target workpiece, the process file whose workpiece type in the workpiece information corresponds to the type of the target workpiece.
In one embodiment, the process file is a process file generated after extracting production process information including part information, process parameters and part templates from a production file corresponding to the original design, or a process file input by a user and including part information, process parameters and part templates.
A further vision-based autonomous robot tooling control system proposed in the embodiments of the present invention comprises at least one memory and at least one processor, wherein the at least one memory is configured to store a computer program, and the at least one processor is configured to invoke the computer program stored in the at least one memory and execute the vision-based autonomous robot tooling control method of any of the embodiments described above.
The computer-readable storage medium proposed in the embodiments of the present invention has a computer program stored thereon; the computer program can be executed by a processor to implement the vision-based autonomous robot tooling control method of any of the embodiments described above.
As can be seen from the above solutions, in the embodiments of the present invention an image of the target workpiece is captured and image recognition is performed on it, the actual robot control parameters are planned according to the recognition result, and the robot and/or related equipment can then be controlled according to these robot control parameters to work in coordination, completing autonomous workpiece assembly on the robot working platform.
By using a process file including part information, process parameters and part templates to perform template-based image recognition and part positioning on the image of the target workpiece, the accuracy of image recognition and part positioning can be improved.
Further, by first recognizing the workpiece type in the image of the target workpiece using pre-set workpiece type templates, autonomous assembly of different workpieces can be carried out flexibly when multiple workpiece types are present.
In addition, by directly using a production file corresponding to the original design, such as a CAD file, to export the process file, the user's workload can be reduced, and the accuracy of the information in the process file can be improved, avoiding errors introduced by manual input.
Brief Description of the Drawings
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings, to make the above and other features and advantages of the present invention clearer to those of ordinary skill in the art. In the drawings:
Figure 1A is an exemplary structural diagram of a vision-based autonomous robot tooling system in an embodiment of the present invention.
Figure 1B is an exemplary structural diagram of a vision-based autonomous robot tooling system in an embodiment of the present invention, taking the case where the tooling is screw fastening as an example.
Figure 2A is a simplified schematic diagram of an image of a target workpiece in an example of the present invention.
Figure 2B is an image of the screw hole in Figure 2A that has already been fixed with a screw.
Figure 3 is a schematic diagram of a screw hole template in an example of the present invention.
Figures 4A and 4B are schematic diagrams of PCB workpiece type templates in an example of the present invention.
Figure 5A is a schematic structural diagram of the control subsystem in the vision-based autonomous robot tooling system shown in Figure 1A in an embodiment of the present invention.
Figure 5B is a schematic structural diagram of the control subsystem in an embodiment of the present invention, taking the case where the tooling is screw fastening as an example.
Figure 6 is an exemplary flowchart of a vision-based autonomous robot tooling control method in an embodiment of the present invention.
Figure 7 is an exemplary flowchart of a vision-based autonomous robot tooling method in an example of the present invention, taking the case where the tooling is screw fastening as an example.
Figure 8 is an exemplary structural diagram of a further vision-based autonomous robot tooling control system in an embodiment of the present invention.
The reference numerals are as follows:
Numeral Meaning
10 Vision subsystem
20 Robot subsystem
30 Screwdriver subsystem
40 Screw feeding subsystem
50 Control subsystem
501 User interface
502 Process library
503 Image acquisition module
504 Image processing module
505 Robot control module
506 Screw feeding control module
507 Screwdriver control module
508 Process management module
S62, S64, S66 Steps
S71–S80 Steps
81 Memory
82 Processor
83 Bus
Detailed Description of the Embodiments
For brevity and clarity of description, the solution of the present invention is set out below by describing several representative embodiments. The large amount of detail in the embodiments is merely intended to help understand the solution of the present invention; it will be apparent that the technical solution of the present invention may be implemented without being limited to these details. To avoid unnecessarily obscuring the solution of the present invention, some embodiments are not described in detail and only a framework is given. In the following, "comprising" means "comprising but not limited to", and "according to ..." means "at least according to ..., but not limited to only according to ...". Owing to Chinese language conventions, when the number of a component is not specifically indicated below, it means that there may be one or more of that component, or it may be understood as at least one.
In the embodiments of the present invention, in order to enable a robot working platform to complete workpiece assembly autonomously without reprogramming, it is contemplated to first perform image recognition on the target workpiece to obtain the relevant tooling information, plan the actual robot control parameters, such as the part installation sequence and process parameters, according to the recognition result, and then control the robot and/or related equipment to work in coordination according to these robot control parameters, completing autonomous workpiece assembly on the robot working platform.
To make the technical solution and advantages of the present invention clearer, the present invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are merely illustrative of the present invention and are not intended to limit the scope of protection of the present invention.
Figure 1A is an exemplary structural diagram of a vision-based autonomous robot tooling system (which may also be called a vision-based autonomous robot working platform) in an embodiment of the present invention. As shown in Figure 1A, the system may include a vision subsystem 10, a robot subsystem 20 and a control subsystem 50.
The vision subsystem 10 is used to capture an image of a target workpiece arranged on a worktable.
The robot subsystem 20 is used to control an operating end to perform a corresponding tooling operation.
The control subsystem 50 is used to control the vision subsystem 10 to capture an image of the target workpiece, perform image recognition on the image of the target workpiece, generate corresponding robot control parameters according to the obtained tooling information, and control the robot subsystem 20 and/or related equipment according to the robot control parameters to perform the corresponding tooling operation.
Figure 1B is an exemplary structural diagram of a vision-based autonomous robot tooling system in an embodiment of the present invention, taking the case where the tooling is screw fastening as an example. As shown in Figure 1B, the system may include a vision subsystem 10, a robot subsystem 20, a screwdriver subsystem 30, a screw feeding subsystem 40 and a control subsystem 50.
The vision subsystem 10 is used to capture an image of a target workpiece arranged on a worktable.
In a specific implementation, the vision subsystem 10 may include at least one camera. The at least one camera may include: a fixed camera mounted on the frame of the screw fastening console, and/or a mobile camera mounted on the robot arm, the mobile camera moving together with the robot end effector. The camera fixed on the frame may be mounted directly above the workpiece; taking the case where the workpiece is a printed circuit board (PCB) in electronics manufacturing as an example, the imaging plane of this camera may be parallel to the screw fastening working plane. For a wider field of view, the fixed camera may be placed at the highest point of the frame above the working plane. The camera mounted on the robot arm is closer to the working plane and has a smaller field of view, so it may be used when precise positioning of a screw hole is required. In one embodiment, the fixed camera and the mobile camera may be used at the same time.
The robot subsystem 20 is used to move, under the control of the control subsystem 50, a screwdriver mounted on the operating end of the robot subsystem 20 to each target position; the target positions include the positions corresponding to the screw holes of the target workpiece. If the screws of the screw feeding subsystem 40 are picked up from a screw feeder through the bit of the screwdriver, the target positions may also include the positions of the corresponding screws in the screw feeder. In addition, if a camera is mounted on the robot arm, the robot subsystem 20 can also move the camera, under the control of the control subsystem 50, to the position required by the control subsystem 50 to assist in precise screw hole positioning.
In a specific implementation, the robot subsystem 20 may be implemented with a robot arm having 4, 5 or 6 degrees of freedom, or with a SCARA robot.
The screwdriver subsystem 30 may include a screwdriver mounted on the operating end of the robot subsystem 20 and a corresponding drive component, such as a motor. Under the control of the control subsystem 50, the drive component can control the screwdriver to start and stop and to fasten screws with corresponding torque.
The screw feeding subsystem 40 is used to supply screws to corresponding positions.
In a specific implementation, the screw feeding subsystem 40 may be an automatic feeding system, or a suction or magnetic feeding system, such as a vacuum attachment or a magnetic bit; the specific feeding method can be chosen flexibly according to the specific type of screw and the needs of the user. Screws may be supplied through a supply hose; alternatively, the bit of the screwdriver may pick up screws from a screw feeder by vacuum suction or a magnetic bit. The screw feeder ensures that the screws are accurately arranged at preset positions. For example, in a specific implementation, the screw feeding subsystem 40 may, under the control of the control subsystem 50, supply the required screws through a supply hose to the screwdriver subsystem 30, to be carried by the screwdriver bit to the corresponding screw hole of the target workpiece; or the screw feeding subsystem 40 may arrange the various required screws at preset positions, and the robot subsystem 20, under the control of the control subsystem 50, operates the screwdriver to pick up screws from the screw feeder by vacuum suction or a magnetic bit.
The control subsystem 50 is used to control the vision subsystem to capture an image of the target workpiece and perform image recognition on the image of the target workpiece, obtaining tooling information including screw hole positioning information; according to this tooling information it can generate robot control parameters including actual screw hole information and process parameters (such as torque), and control the robot subsystem, the screw feeding subsystem and the screwdriver subsystem to work in coordination according to the robot control parameters. The screw hole information may include the screw installation sequence, the screw pattern (that is, the screw type), the screw hole positions (such as the coordinate position of each screw hole), the screw hole sizes (corresponding to the respective screw sizes), and so on.
In the embodiments of the present invention, in a specific implementation, there are many possible methods for performing image recognition on the image of the target workpiece to obtain the tooling information; for example, template matching techniques or deep learning techniques may be used. The following takes image recognition using template matching as an example. The control subsystem 50 may first obtain a process file including the part information involved in the tooling (such as the screw hole information involved in the screw fastening described above), process parameters and part templates (such as screw hole templates). The process file may be exported from a previously obtained production file corresponding to the original design; for example, the production file may be a CAD file or a CAM file, and production process information including part information (such as screw hole information), process parameters and part templates (such as screw hole templates) is extracted from the production file and exported into the process file. Alternatively, a process file manually input by a user may be received; for example, when no production file suitable for automatic conversion is available, the user may input a process file directly. During robot assembly, the process file is matched and associated with the actually captured workpiece image, finally generating the actual control program and parameters. The part templates (such as screw hole templates) in the process file may then be matched one by one against the image of the target workpiece; once the matching degree meets a set requirement, a correspondence between the part positions (such as screw hole positions) in the process file and the part positions (such as screw hole positions) of the target workpiece is established, yielding the tooling information, from which robot control parameters including the actual part information (such as screw hole information) and process parameters are generated. In a specific implementation, the robot control parameters may take several forms. For example, a current machining file recording the robot control parameters may be generated, and the robot subsystem 20 and/or related equipment (such as the screw feeding subsystem 40 and the screwdriver subsystem 30) are then controlled to work in coordination according to this current machining file. As another example, the difference between the actual part information (such as screw hole information) and the part information (such as screw hole information) in the process file may be determined from the obtained tooling information (for example, if one or more screw holes already have screws installed and need no further installation, the installation of those screws can be skipped), and a transformation relationship between the part coordinates (such as screw hole coordinates) in the process file and the actual part coordinates (such as screw hole coordinates) is established; the difference information and the transformation relationship are recorded, after which the robot subsystem 20 and/or related equipment (such as the screw feeding subsystem 40 and the screwdriver subsystem 30) are controlled to work in coordination in combination with the part information (such as screw hole information) and process parameters in the process file.
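By way of a non-limiting illustration only, the planning step just described can be sketched in Python as follows; the field names, the to_workpiece transform and the already_fastened set are assumptions introduced for this sketch, not part of the original disclosure:

from dataclasses import dataclass
from typing import Callable, List, Set, Tuple

@dataclass
class HoleSpec:
    # One part entry from the process file, in model-frame coordinates.
    hole_id: str
    xy_model: Tuple[float, float]
    screw_type: str
    torque_nm: float

def plan_control_parameters(holes: List[HoleSpec],
                            to_workpiece: Callable[[Tuple[float, float]], Tuple[float, float]],
                            already_fastened: Set[str]) -> List[dict]:
    # Build the per-hole job list: skip holes that already hold a screw
    # (the difference information) and map process-file coordinates to
    # actual workpiece coordinates (the transformation relationship).
    jobs = []
    for hole in holes:
        if hole.hole_id in already_fastened:
            continue
        x, y = to_workpiece(hole.xy_model)
        jobs.append({"hole": hole.hole_id,
                     "target_xy": (x, y),
                     "screw_type": hole.screw_type,
                     "torque_nm": hole.torque_nm})
    return jobs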
When matching a part template against the image of the target workpiece, a normalized cross-correlation (NCC) algorithm may be used for part positioning. This is a method based on gray values, and is usually carried out by subtracting the mean and dividing by the standard deviation. The cross-correlation between the part template t(x, y) and a sub-image f(x, y) of the target workpiece can be expressed as equation (1):

\mathrm{NCC}(f, t) = \frac{1}{n} \sum_{x,y} \frac{\left(f(x,y) - \mu_f\right)\left(t(x,y) - \mu_t\right)}{\sigma_f \, \sigma_t}    (1)

where the symbols have the following meanings:
n: the number of pixels in the part template image;
f(x, y): the gray value of each pixel in the target workpiece image;
t(x, y): the gray value of each pixel in the part template image;
\mu_f: the mean gray value of the pixels in the target workpiece image;
\mu_t: the mean gray value of the pixels in the part template image;
\sigma_f: the standard deviation of the gray values of the pixels in the target workpiece image;
\sigma_t: the standard deviation of the gray values of the pixels in the part template image.
Generally speaking, shape-based template matching is sufficient to locate the parts. However, sometimes some parts are already installed; for example, in a screw fastening application some screw holes may already have screws installed. In this case, when processing with the NCC algorithm, parts that are already installed, such as screw holes already fixed with screws, can be excluded.
For example, taking the case where the tooling is screw fastening, Figure 2A shows a simplified schematic diagram of an image of a target workpiece in an example of the present invention. Because details such as screw holes and screws in the original image may be unclear at low resolution, the image has been simplified here with the features emphasized, giving a simplified version of the original image. When positioning screw holes in the image of the target workpiece shown in Figure 2A, if matching is performed with the screw hole template shown in Figure 3, the screw hole in the target workpiece of Figure 2A that is already fixed with a screw, as shown in Figure 2B, can be excluded; accordingly, that screw hole can be excluded from the robot control parameters, and screw fastening for it can be skipped during actual installation.
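A minimal sketch of such NCC-based hole location, assuming OpenCV and NumPy (the threshold value is an illustrative assumption, and merging of adjacent responses by non-maximum suppression is omitted for brevity):

import cv2
import numpy as np

def locate_holes_ncc(workpiece_gray, hole_template_gray, score_thresh=0.8):
    # cv2.TM_CCOEFF_NORMED subtracts the means and normalizes by the
    # standard deviations, i.e. the NCC of equation (1).
    scores = cv2.matchTemplate(workpiece_gray, hole_template_gray,
                               cv2.TM_CCOEFF_NORMED)
    th, tw = hole_template_gray.shape
    ys, xs = np.where(scores >= score_thresh)
    # A hole already occupied by a screw correlates poorly with the
    # empty-hole template, so it never clears the threshold and is
    # excluded automatically, as in the Figure 2A/2B example.
    return [(int(x) + tw // 2, int(y) + th // 2) for x, y in zip(xs, ys)]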
After the parts (such as screw holes) have been located, the parts (such as screw holes) in the target workpiece image can be matched with the parts (such as screw holes) in the process file. For example, after the part (such as screw hole) recognition step, the rotation and translation vectors of the model in the target workpiece image can be obtained. The rotation vector and the translation vector are then combined to establish a homography transformation T between the model frame and the image frame. The positions of the parts (such as screw holes) in the model frame can be mapped using the T matrix and the coordinates in the image, as shown in equation (2):

\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} \sim T \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}    (2)

where (x, y) are the coordinates of a part (such as a screw hole) in the model frame and (x', y') are the corresponding coordinates in the image frame, in homogeneous coordinates defined up to scale.
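A sketch of how the transform T of equation (2) could be estimated and applied with OpenCV; the point values here are made-up illustration data, not values from the disclosure:

import cv2
import numpy as np

# >= 4 model-frame/image-frame correspondences, e.g. from marks
# recovered during workpiece type matching (values are illustrative).
model_pts = np.array([[0, 0], [100, 0], [100, 60], [0, 60]], dtype=np.float32)
image_pts = np.array([[212, 148], [588, 165], [571, 402], [198, 380]], dtype=np.float32)

T, _ = cv2.findHomography(model_pts, image_pts)  # the T matrix of equation (2)

# Map screw-hole coordinates from the process file into the live image.
holes_model = np.array([[[10.0, 10.0]], [[90.0, 50.0]]], dtype=np.float32)
holes_image = cv2.perspectiveTransform(holes_model, T).reshape(-1, 2)
print(holes_image)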
When there are multiple kinds of target workpiece, there will also be multiple corresponding process files. In this case, each process file can be stored in a process library, and the process file may further include workpiece information, which may include the type, name and design-related information of the workpiece; workpiece type templates corresponding to the different target workpieces may be stored in the process library in advance.
For example, for different types of PCB workpiece, the PCB outline and some marks on the PCB board, such as notches, holes, cross marks or dot marks, can be used to create shape-matching-based workpiece type templates. Figures 4A and 4B show schematic diagrams of two PCB type templates; these PCB type templates are created using the outline shape of the PCB workpiece, the notches on the outline, and the central hole of the PCB workpiece.
When performing the matching, an image pyramid method may be used to increase the matching speed; for example, with the image pyramid level set to 3 or a higher value, the search starts from the small, low-resolution image at the top of the pyramid, and if the search succeeds at that stage it continues on the larger, higher-resolution image at the next level down, and so on, until the original image size at the bottom of the pyramid is reached. The scaling of the model can be set to be anisotropic, meaning that the row scaling factor and the column scaling factor may differ. The model created from the image of the workpiece type template may contain a set of X, Y coordinate points

p_i^T = (x_i^T, \, y_i^T)

together with their gradients in the X and Y directions

d_i^T = \left(g_{x,i}^T, \, g_{y,i}^T\right)

where i = 1…n, the superscript T denotes the workpiece type template, and the subscript i denotes the index of each point. The model can be configured with small scale steps and rotation angle steps to avoid missing any match in the picture. During matching, the model of the workpiece type template can be compared with the image of the target workpiece at all positions using a similarity measure. The gradient at each point of the target workpiece image is

e_{u,v}^S = \left(g_{x,u,v}^S, \, g_{y,u,v}^S\right)

where the subscripts u and v denote the horizontal and vertical coordinates in the target workpiece image, and the superscript S denotes the target workpiece image. The idea of the similarity measure is to take the sum of the normalized dot products of the gradient vectors of the workpiece type template image and the search image over all points of the model data set. This produces a score at every point in the image of the target workpiece. The similarity measure function can be expressed as equation (3):

s(u, v) = \frac{1}{n} \sum_{i=1}^{n} \frac{\left\langle d_i^T, \; e_{u + x_i^T, \, v + y_i^T}^S \right\rangle}{\left\| d_i^T \right\| \, \left\| e_{u + x_i^T, \, v + y_i^T}^S \right\|}    (3)

If the model of the workpiece type template matches the image of the target workpiece exactly, this function returns a score of 1. The score corresponds to the portion of the object that is visible in the image of the target workpiece. If the object from the workpiece type template is not present in the image of the target workpiece, the score obtained after matching is 0.
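A brute-force sketch of the similarity measure of equation (3), assuming OpenCV and NumPy; a real implementation would evaluate it coarse-to-fine over the image pyramid, and over the rotation and scale steps, rather than exhaustively as here:

import cv2
import numpy as np

def similarity_map(search_gray, template_gray, eps=1e-9):
    # Gradients of the template model points (d_i) and of the search
    # image (e_{u,v}), computed with Sobel filters.
    gx_t = cv2.Sobel(template_gray, cv2.CV_64F, 1, 0)
    gy_t = cv2.Sobel(template_gray, cv2.CV_64F, 0, 1)
    gx_s = cv2.Sobel(search_gray, cv2.CV_64F, 1, 0)
    gy_s = cv2.Sobel(search_gray, cv2.CV_64F, 0, 1)
    th, tw = template_gray.shape
    norm_t = np.hypot(gx_t, gy_t) + eps
    out = np.zeros((search_gray.shape[0] - th + 1,
                    search_gray.shape[1] - tw + 1))
    for v in range(out.shape[0]):
        for u in range(out.shape[1]):
            gxs = gx_s[v:v + th, u:u + tw]
            gys = gy_s[v:v + th, u:u + tw]
            norm_s = np.hypot(gxs, gys) + eps
            # Mean normalized dot product of gradient directions:
            # 1 for a perfect match, near 0 where the object is absent.
            out[v, u] = np.mean((gx_t * gxs + gy_t * gys) / (norm_t * norm_s))
    return out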
For example, when the image of the PCB workpiece in Figure 2A is matched against the PCB workpiece type templates shown in Figures 4A and 4B, it is found to match the workpiece type template shown in Figure 4A, thereby determining the workpiece type of the target workpiece. The process file corresponding to this workpiece type can then be looked up according to the workpiece information of each process file in the process library, and the screw hole templates in that process file are matched one by one against the target workpiece image.
When the robot subsystem 20, the screw feeding subsystem 40 and the screwdriver subsystem 30 are controlled to work in coordination according to the robot control parameters, specifically, for example, the screw feeding subsystem 40 may be controlled according to the screw installation sequence, screw pattern, screw hole positions, screw hole sizes and process parameters in the robot control parameters to supply screws of the corresponding pattern and size to the corresponding positions; the robot subsystem 20 is controlled to move the screwdriver to the corresponding positions; and the screwdriver subsystem 30 is controlled to start and stop and to fasten the screws with the corresponding torque, while the quality of the screw fastening process is monitored.
In a specific implementation, the control subsystem 50 may consist of a communication switch and/or a computer (such as a PC) and/or a PLC (programmable logic controller) and/or an embedded computing device and/or any suitable computing device. Together, these devices control the aforementioned subsystems to ensure that the screw fastening system works normally. Since the main peripheral devices are connected to the control device via Ethernet or another communication bus, the communication switch is used to expand the number of communication ports in the control subsystem 50. The PLC is generally used for digital I/O control of the peripheral devices, ensuring the reliability and real-time performance of the overall system control, while the main image acquisition, image processing and overall control logic can all be performed by the industrial computer/PC/embedded computing device. The central computing unit should be powerful enough to carry out the entire computation.
In a specific implementation, the control subsystem 50 can be realized in several ways; for example, Figure 5A shows a schematic structural diagram of the control subsystem 50 in one example. As shown in Figure 5A, it may include an image acquisition module 503, an image processing module 504, a robot control module 505 and a process management module 508. In some embodiments, it may further include a user interface 501.
The image acquisition module 503 is used to control a vision subsystem to capture an image of a target workpiece arranged on a worktable.
The image processing module 504 is used to perform image recognition on the image of the target workpiece to obtain tooling information.
The robot control module 505 is used to control a robot subsystem to perform a corresponding tooling operation.
The process management module 508 is used to control a vision subsystem to capture an image of the target workpiece by invoking the image acquisition module 503; to perform image recognition on the image of the target workpiece by invoking the image processing module 504; to plan corresponding robot control parameters according to the obtained tooling information; and, according to the robot control parameters, to control the robot subsystem to perform the corresponding tooling operation by invoking the robot control module 505.
The user interface 501 is used to display the working state of the vision-based autonomous robot tooling control system at runtime.
For different tooling applications, the control subsystem 50 may have different concrete structures. For example, again taking the case where the tooling is screw fastening, the control subsystem 50 may, as shown in Figure 5B, include: a user interface 501, a process library 502, an image acquisition module 503, an image processing module 504, a robot control module 505, a screw feeding control module 506, a screwdriver control module 507 and a process management module 508.
The user interface 501 is used to display the working state of the screw fastening system at runtime. When a system error occurs during operation, the user can intervene and adjust the screw fastening process through the user interface. The user interface 501 is also used to provide an interface for entering and editing workpiece and process information in the process library 502.
The image acquisition module 503 is used to control the cameras of the vision subsystem 10 to capture images of a target workpiece arranged on a worktable, and to obtain the real-time images of the target workpiece captured by the cameras. In the embodiments of the present invention, the number and type of cameras included in the vision subsystem 10 may be configurable.
The image processing module 504 is used for further processing of the real-time images of the target workpiece obtained by the image acquisition module 503, including image recognition and screw hole positioning. For example, these real-time images and the data stored in the process library 502 can be used to recognize the workpiece information and locate all the screw holes on the workpiece. Specifically, the workpiece type of the target workpiece can be determined from the real-time image of the target workpiece and the workpiece type templates stored in advance in the process library 502; the process file corresponding to this workpiece type is looked up according to the workpiece information of each process file in the process library 502; the screw hole templates in the process file are matched one by one against the image of the target workpiece; and once the matching degree meets a set requirement, a correspondence between the screw hole positions in the process file and the screw hole positions of the target workpiece is established.
The robot control module 505 is used to control the robot motion according to preset coordinates and poses, and to monitor the robot state; for example, controlling the robot subsystem 20 to move the screwdriver mounted on the operating end of the robot subsystem 20 to each target position. Specifically, the robot subsystem 20 can be controlled to move the screwdriver to the corresponding positions according to the screw installation sequence and screw hole positions in the robot control parameters. In addition, the robot control module 505 can be configured according to the actual robot used in the system.
The screw feeding control module 506 supports multiple devices corresponding to different feeding methods, such as an automatic screw feeding system or a suction/magnetic screwdriver. The screw feeding control module 506 ensures that the screw can be correctly fed to the tip of the screwdriver before it is finally driven in. For example, for an automatic screw feeding system, the screw feeding subsystem 40 can be controlled according to the screw installation sequence and screw pattern in the robot control parameters to supply screws of the corresponding pattern and size to the corresponding position, for example through a hose to the position of the screwdriver. For the suction/magnetic screwdriver feeding method, the screws can be supplied to a preset position, from which the screwdriver picks up the corresponding screw.
The screwdriver control module 507 is used to control the drive component of the screwdriver subsystem 30 so that the drive component controls the start, stop and torque of the screwdriver, and to monitor whether the screw fastening operation proceeds smoothly.
The process management module 508 is used to control the vision subsystem 10 to capture an image of the target workpiece by invoking the image acquisition module 503; to further process, by invoking the image processing module 504, the real-time images of the target workpiece obtained by the image acquisition module 503, including image recognition and screw hole positioning; and to plan robot control parameters including the actual screw hole information and process parameters. The process management module 508 is also used to control the robot subsystem 20, the screw feeding subsystem 40 and the screwdriver subsystem 30 to work in coordination by invoking the robot control module 505, the screw feeding control module 506 and the screwdriver control module 507, fastening screws at the specified screw hole coordinates with the specified screw fastening process parameters.
In other embodiments, the process management module 508 may further be used to obtain a process file including part information (such as screw hole information), process parameters and part templates (such as screw hole templates), and to store the process file in the process library 502. For example, the process management module 508 may first obtain a production file corresponding to the original design, extract production process information including part information (such as screw hole information), process parameters and part templates (such as screw hole templates) from the production file, and export the production process information into a process file; or the process management module 508 may first receive a process file provided by the user, including part information (such as screw hole information), process parameters and part templates (such as screw hole templates). In one embodiment, the user may input the process file through the user interface 501. Accordingly, when performing image recognition, the image processing module 504 may match the part templates (such as screw hole templates) in the process file one by one against the image of the target workpiece (such as screw hole matching); once the matching degree meets a set requirement, a correspondence between the part positions (such as screw hole positions) in the process file and the part positions (such as screw hole positions) of the target workpiece is established.
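As a hedged illustration of the export path (the CSV column layout below is an invented example, not a format defined by any particular CAD or CAM tool), a production-file-to-process-file conversion might look like:

import json

def csv_to_process_file(csv_text: str, workpiece_type: str) -> dict:
    # Assumed row format: hole_id,x_mm,y_mm,screw_type,torque_nm
    holes = []
    for line in csv_text.strip().splitlines()[1:]:
        hole_id, x, y, screw, torque = line.split(",")
        holes.append({"hole_id": hole_id,
                      "x_mm": float(x), "y_mm": float(y),
                      "screw_type": screw, "torque_nm": float(torque)})
    return {"workpiece_type": workpiece_type, "holes": holes}

process_file = csv_to_process_file(
    "hole_id,x_mm,y_mm,screw_type,torque_nm\n"
    "H1,10,10,M3,0.6\n"
    "H2,90,50,M3,0.6",
    workpiece_type="PCB-A")
print(json.dumps(process_file, indent=2))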
In another embodiment, the process library 502 may further store predetermined workpiece type templates, and the process file further includes workpiece information including the workpiece type. Accordingly, the image processing module 504 first performs image recognition and matching between the workpiece type templates in the process library and the image of the target workpiece, determines the type of the target workpiece according to the matching result, extracts from the process library 502, according to the type of the target workpiece, the process file whose workpiece type in the workpiece information corresponds to the type of the target workpiece, and then matches the part templates (such as screw hole templates) in the extracted process file one by one against the image of the target workpiece (such as screw hole matching).
The vision-based autonomous robot tooling system in the embodiments of the present invention has been described in detail above; the vision-based autonomous robot tooling method in the embodiments of the present invention is described in detail below. The vision-based autonomous robot tooling method in the embodiments of the present invention can be implemented on the vision-based autonomous robot tooling system in the embodiments of the present invention. For details not disclosed in the method embodiments of the present invention, reference may be made to the corresponding descriptions in the system embodiments of the present invention; they will not be repeated here one by one.
Figure 6 is an exemplary flowchart of the vision-based autonomous robot tooling control method in an embodiment of the present invention. As shown in Figure 6, the method may include the following steps:
S62: controlling a vision subsystem to capture an image of a target workpiece.
S64: performing image recognition on the image of the target workpiece, and planning corresponding robot control parameters according to the obtained tooling information.
In this step, for the case where the tooling is screw fastening, image recognition may be performed on the image of the target workpiece to obtain tooling information including screw hole positioning information, and robot control parameters including screw hole information and process parameters are generated according to the tooling information.
S66: controlling a robot subsystem according to the robot control parameters to perform a corresponding tooling operation.
In this step, for the case where the tooling is screw fastening, a robot subsystem, a screw feeding subsystem and a screwdriver subsystem may be controlled according to the robot control parameters to work in coordination, fastening screws at the specified screw hole coordinates with the specified screw fastening process parameters.
The robot subsystem is used to control an operating end to perform the corresponding tooling operation. For the case where the tooling is screw fastening, the robot subsystem is used to move a screwdriver mounted on the operating end of the robot subsystem to each target position, the target positions including the positions corresponding to the screw holes of the target workpiece; the screwdriver subsystem includes the screwdriver mounted on the operating end of the robot subsystem and a drive component for driving the screwdriver, the drive component being capable of controlling the screwdriver to start and stop and to fasten screws with corresponding torque; and the screw feeding subsystem is used to supply screws to corresponding positions.
In one embodiment, the method may further include: obtaining a process file including part information (such as screw hole information), process parameters and part templates (such as screw hole templates), and storing the process file in a process library. Accordingly, performing image recognition on the image of the target workpiece in step S64 may include: matching the part templates (such as screw hole templates) in the process file one by one against the image of the target workpiece (such as screw hole matching); once the matching degree meets a set requirement, establishing a correspondence between the part positions (such as screw hole positions) in the process file and the part positions (such as screw hole positions) of the target workpiece, and generating robot control parameters including part information (such as screw hole information) and process parameters.
In another embodiment, the process library may further store predetermined workpiece type templates, and the process file further includes workpiece information including the workpiece type. Accordingly, before the part templates (such as screw hole templates) in the process file are matched one by one against the image of the target workpiece (such as screw hole matching) in step S64, the method may further include: performing image recognition and matching between the workpiece type templates in the process library and the image of the target workpiece, determining the type of the target workpiece according to the matching result, and extracting from the process library, according to the type of the target workpiece, the process file whose workpiece type in the workpiece information corresponds to the type of the target workpiece.
The process file may be a process file generated after extracting production process information including part information (such as screw hole information), process parameters and part templates (such as screw hole templates) from a production file corresponding to the original design, or a process file input by a user and including part information (such as screw hole information), process parameters and part templates (such as screw hole templates).
Figure 7 is an exemplary flowchart of the vision-based autonomous robot tooling method in an example of the present invention, taking the case where the tooling is screw fastening as an example. As shown in Figure 7, the method may include the following steps:
S71: initializing the screw fastening system shown in Figure 1B.
S72: arranging the target workpiece on the worktable, i.e. loading.
S73: controlling a vision subsystem to capture an image of the target workpiece.
S74: performing workpiece type recognition on the image of the target workpiece using workpiece type templates. The workpiece type templates may be stored in advance in a process library; in this step S74, the workpiece type templates can then be extracted from the process library and matched against the image of the target workpiece.
S75: positioning the screw holes in the image of the target workpiece using the corresponding process file. The process file may be exported in advance from a production file and stored in the process library; in this step S75, the corresponding process file can then be extracted from the process library according to the type of the target workpiece determined in step S74, and the screw hole templates for each of its installation positions are matched in turn against the image of the target workpiece.
S76: generating robot control parameters including screw hole information and process parameters according to the results of image recognition and screw hole positioning.
S77: for the current screw hole, controlling a robot subsystem, a screw feeding subsystem and a screwdriver subsystem according to the robot control parameters to work in coordination, fastening a screw at the specified screw hole coordinates with the specified screw fastening process parameters.
S78: judging whether all screw holes have been screw-fastened; if yes, executing step S79; if no, returning to step S77.
S79: removing the target workpiece from the worktable, i.e. unloading.
S80: judging whether all workpiece fastening tasks have been completed; if yes, ending; otherwise, returning to step S72.
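A compact sketch of this control flow; the controller object and its method names are assumptions standing in for the subsystem operations described above, not an interface defined in the disclosure:

def run_platform(controller):
    controller.initialize()                                   # S71
    while controller.has_pending_workpieces():                # S80
        controller.load_workpiece()                           # S72
        image = controller.capture_image()                    # S73
        wtype = controller.identify_workpiece_type(image)     # S74
        holes = controller.locate_screw_holes(image, wtype)   # S75
        jobs = controller.plan_control_parameters(holes)      # S76
        for job in jobs:                                      # S77/S78: every hole
            controller.fasten_screw(job)
        controller.unload_workpiece()                         # S79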
Figure 8 is an exemplary structural diagram of a further vision-based autonomous robot tooling control system in an embodiment of the present invention. As shown in Figure 8, the system may include at least one memory 81 and at least one processor 82. In addition, it may also include some other components, such as a display and communication ports. These components communicate via a bus 83.
The at least one memory 81 is used to store a computer program. In one embodiment, the computer program can be understood as including the modules of the vision-based autonomous robot tooling control system shown in Figures 5A and 5B. In addition, the at least one memory 81 may also store an operating system, etc. Operating systems include, but are not limited to, the Android operating system, the Symbian operating system, the Windows operating system, the Linux operating system, and so on.
The at least one processor 82 is used to invoke the computer program stored in the at least one memory 81 to execute the vision-based autonomous robot tooling control method described in the embodiments of the present invention. The processor 82 may be a CPU, a processing unit/module, an ASIC, a logic module or a programmable gate array, etc. It can receive and send data through the communication ports.
It should be noted that not all the steps and modules in the above flows and structural diagrams are necessary; some steps or modules can be omitted according to actual needs. The order of execution of the steps is not fixed and can be adjusted as required. The division into modules is merely a functional division adopted for ease of description; in an actual implementation, one module may be realized by several modules, the functions of several modules may be realized by the same module, and these modules may be located in the same device or in different devices.
It can be understood that the hardware modules in the above embodiments can be implemented mechanically or electronically. For example, a hardware module may include specially designed permanent circuits or logic devices (such as dedicated processors, e.g. FPGAs or ASICs) for completing specific operations. A hardware module may also include programmable logic devices or circuits temporarily configured by software (such as general-purpose processors or other programmable processors) for performing specific operations. Whether to implement a hardware module mechanically, with dedicated permanent circuits, or with temporarily configured circuits (such as configured by software) can be decided according to cost and time considerations.
In addition, the embodiments of the present invention also provide computer software executable on a server, a server cluster or a cloud platform; the computer software can be executed by a processor to implement the vision-based autonomous robot tooling control method described in the embodiments of the present invention.
Furthermore, the embodiments of the present invention also provide a computer-readable storage medium on which a computer program is stored; the computer program can be executed by a processor to implement the vision-based autonomous robot tooling control method described in the embodiments of the present invention. Specifically, a system or device equipped with a storage medium may be provided, the storage medium storing software program code that realizes the functions of any of the above embodiments, and a computer (or CPU or MPU) of the system or device reads and executes the program code stored in the storage medium. In addition, part or all of the actual operations may be completed by an operating system or the like running on the computer based on instructions of the program code. The program code read from the storage medium may also be written into a memory provided in an expansion board inserted in the computer or into a memory provided in an expansion unit connected to the computer, and then, based on instructions of the program code, a CPU or the like mounted on the expansion board or expansion unit performs part or all of the actual operations, thereby realizing the functions of any of the above embodiments. Storage medium embodiments for providing the program code include floppy disks, hard disks, magneto-optical disks, optical disks (such as CD-ROM, CD-R, CD-RW, DVD-ROM, DVD-RAM, DVD-RW, DVD+RW), magnetic tapes, non-volatile memory cards and ROM. Optionally, the program code may be downloaded from a server computer via a communication network.
As can be seen from the above solutions, in the embodiments of the present invention an image of the target workpiece is captured and image recognition is performed on it, the actual robot control parameters are planned according to the recognition result, and the robot and/or related equipment can then be controlled according to these robot control parameters to work in coordination, completing autonomous workpiece assembly on the robot working platform.
By using a process file including part information, process parameters and part templates to perform template-based image recognition and part positioning on the image of the target workpiece, the accuracy of image recognition and part positioning can be improved.
Further, by first recognizing the workpiece type in the image of the target workpiece using pre-set workpiece type templates, autonomous assembly of different workpieces can be carried out flexibly when multiple workpiece types are present.
In addition, by directly using a production file corresponding to the original design, such as a CAD file, to export the process file, the user's workload can be reduced, and the accuracy of the information in the process file can be improved, avoiding errors introduced by manual input.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.

Claims (17)

  1. A vision-based autonomous robot tooling control system, characterized by comprising:
    an image acquisition module (503) for controlling a vision subsystem to capture an image of a target workpiece arranged on a worktable;
    an image processing module (504) for performing image recognition on the image of the target workpiece to obtain tooling information;
    a robot control module (505) for controlling a robot subsystem to perform a corresponding tooling operation;
    a process management module (508) for controlling a vision subsystem to capture an image of the target workpiece by invoking the image acquisition module (503); performing image recognition on the image of the target workpiece by invoking the image processing module (504); planning corresponding robot control parameters according to the obtained tooling information; and, according to the robot control parameters, controlling the robot subsystem to perform the corresponding tooling operation by invoking the robot control module (505).
  2. The vision-based autonomous robot tooling control system according to claim 1, characterized in that the tooling is screw assembly;
    the image processing module (504) performs image recognition on the image of the target workpiece to obtain tooling information including screw hole positioning information;
    the robot control module (505) controls a robot subsystem to move a screwdriver mounted on the operating end of the robot subsystem to each target position;
    the system further comprises:
    a screw feeding control module (506) for controlling a screw feeding subsystem to supply screws to corresponding positions; and
    a screwdriver control module (507) for controlling a drive component of a screwdriver subsystem to start and stop the screwdriver and to control screw fastening with corresponding torque;
    the process management module (508) controls a vision subsystem to capture an image of the target workpiece by invoking the image acquisition module (503); performs image recognition on the image of the target workpiece by invoking the image processing module (504), obtaining tooling information including screw hole positioning information; plans, according to the tooling information, robot control parameters including actual screw hole information and process parameters; and, according to the robot control parameters, controls a robot subsystem, a screw feeding subsystem and a screwdriver subsystem to work in coordination by invoking the robot control module (505), the screw feeding control module (506) and the screwdriver control module (507), fastening screws at the specified screw hole coordinates with the specified screw fastening process parameters.
  3. The vision-based autonomous robot tooling control system according to claim 1 or 2, characterized by further comprising: a user interface (501) for displaying the working state of the vision-based autonomous robot tooling control system at runtime.
  4. The vision-based autonomous robot tooling control system according to claim 1 or 2, characterized in that the process management module (508) is further configured to obtain a process file, the process file including the part information, process parameters and part templates involved in the tooling, and to store the process file in a process library (502);
    the image processing module (504) matches the part templates in the process file one by one against the image of the target workpiece and, once the matching degree meets a set requirement, establishes a correspondence between the part positions in the process file and the part positions of the target workpiece, thereby obtaining the tooling information.
  5. The vision-based autonomous robot tooling control system according to claim 4, characterized in that the process library (502) further stores predetermined workpiece type templates, and the process file further includes workpiece information including the workpiece type;
    the image processing module (504) is further configured to perform image recognition and matching between the workpiece type templates in the process library (502) and the image of the target workpiece; determine the type of the target workpiece according to the matching result; extract from the process library (502), according to the type of the target workpiece, the process file whose workpiece type in the workpiece information corresponds to the type of the target workpiece; and then perform the operation of matching the part templates in the extracted process file one by one against the image of the target workpiece.
  6. The vision-based autonomous robot tooling control system according to claim 5, characterized in that the process management module (508) is further configured to obtain a production file corresponding to the original design, extract production process information including part information, process parameters and part templates from the production file, and export the production process information into a process file; or the process management module (508) is further configured to receive a process file input by a user, including part information, process parameters and part templates.
  7. A vision-based autonomous robot tooling system, characterized by comprising:
    a vision subsystem (10) for capturing an image of a target workpiece arranged on a worktable;
    a robot subsystem (20) for controlling an operating end to perform a corresponding tooling operation;
    a control subsystem (50) for controlling the vision subsystem (10) to capture an image of the target workpiece, performing image recognition on the image of the target workpiece, generating corresponding robot control parameters according to the obtained tooling information, and controlling the robot subsystem (20) according to the robot control parameters to perform the corresponding tooling operation.
  8. The vision-based autonomous robot tooling system according to claim 7, characterized in that the tooling is screw assembly;
    the robot subsystem (20) is configured to move a screwdriver mounted on the operating end of the robot subsystem to each target position; the target positions include the positions corresponding to the screw holes of the target workpiece;
    the system further comprises:
    a screwdriver subsystem (30), comprising the screwdriver mounted on the operating end of the robot subsystem and a drive component for driving the screwdriver, the drive component being capable of controlling the screwdriver to start and stop and to fasten screws with corresponding torque; and
    a screw feeding subsystem (40) for supplying screws to corresponding positions;
    the control subsystem (50) is configured to control the vision subsystem (10) to capture an image of the target workpiece, perform image recognition on the image of the target workpiece to obtain tooling information including screw hole positioning information, generate, according to the tooling information, robot control parameters including actual screw hole information and process parameters, and control the robot subsystem (20), the screw feeding subsystem (40) and the screwdriver subsystem (30) to work in coordination according to the robot control parameters.
  9. The vision-based autonomous robot tooling system according to claim 7 or 8, characterized in that the vision subsystem (10) comprises: a camera arranged above the worktable and/or a camera arranged on the operating-end side of the robot subsystem (20); and
    when the vision subsystem (10) comprises a camera arranged on the operating-end side of the robot subsystem (20), the control subsystem (50) is further configured to control the robot subsystem (20) to move the camera arranged on the operating-end side of the robot subsystem (20) to each target position.
  10. The vision-based autonomous robot tooling system according to any one of claims 7 to 9, characterized in that the control subsystem (50) is the vision-based autonomous robot tooling control system according to any one of claims 1 to 6.
  11. A vision-based autonomous robot tooling control method, characterized by comprising:
    controlling a vision subsystem to capture an image of a target workpiece (S62);
    performing image recognition on the image of the target workpiece, and planning corresponding robot control parameters according to the obtained tooling information;
    controlling a robot subsystem according to the robot control parameters to perform a corresponding tooling operation;
    wherein the robot subsystem is configured to control an operating end to perform the corresponding tooling operation.
  12. The vision-based autonomous robot tooling control method according to claim 11, characterized in that the tooling is screw assembly;
    performing image recognition on the image of the target workpiece and planning corresponding robot control parameters according to the obtained tooling information comprises: performing image recognition on the image of the target workpiece to obtain tooling information including screw hole positioning information, and generating, according to the tooling information, robot control parameters including screw hole information and process parameters (S64);
    controlling the robot subsystem according to the robot control parameters to perform the corresponding tooling operation comprises: controlling the robot subsystem, a screw feeding subsystem and a screwdriver subsystem to work in coordination according to the robot control parameters, fastening screws at the specified screw hole coordinates with the specified screw fastening process parameters (S66);
    wherein the robot subsystem is configured to move a screwdriver mounted on the operating end of the robot subsystem to each target position; the target positions include the positions corresponding to the screw holes of the target workpiece; the screwdriver subsystem comprises the screwdriver mounted on the operating end of the robot subsystem and a drive component for driving the screwdriver; the drive component is capable of controlling the screwdriver to start and stop and to fasten screws with corresponding torque; and the screw feeding subsystem is configured to supply screws to corresponding positions.
  13. The vision-based autonomous robot tooling control method according to claim 11 or 12, characterized by further comprising: obtaining a process file, the process file including the part information, process parameters and part templates involved in the tooling, and storing the process file in a process library;
    performing image recognition on the image of the target workpiece comprises: matching the part templates in the process file one by one against the image of the target workpiece and, once the matching degree meets a set requirement, establishing a correspondence between the part positions in the process file and the part positions of the target workpiece, thereby obtaining the tooling information.
  14. The vision-based autonomous robot tooling control method according to claim 13, characterized in that the process library further stores predetermined workpiece type templates, and the process file further includes workpiece information including the workpiece type;
    before the screw hole templates in the process file are matched one by one against the image of the target workpiece, the method further comprises: performing image recognition and matching between the workpiece type templates in the process library and the image of the target workpiece, determining the type of the target workpiece according to the matching result, and extracting from the process library, according to the type of the target workpiece, the process file whose workpiece type in the workpiece information corresponds to the type of the target workpiece.
  15. The vision-based autonomous robot tooling control method according to claim 13, characterized in that the process file is: a process file generated after extracting production process information including part information, process parameters and part templates from a production file corresponding to the original design; or a process file input by a user and including part information, process parameters and part templates.
  16. A vision-based autonomous robot tooling control system, characterized by comprising: at least one memory (81) and at least one processor (82), wherein:
    the at least one memory (81) is configured to store a computer program;
    the at least one processor (82) is configured to invoke the computer program stored in the at least one memory (81) and execute the vision-based autonomous robot tooling control method according to any one of claims 11 to 15.
  17. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program can be executed by a processor to implement the vision-based autonomous robot tooling control method according to any one of claims 11 to 15.
PCT/CN2019/105438 2019-09-11 2019-09-11 Autonomous robot tooling system, control system, control method and storage medium WO2021046767A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2019/105438 WO2021046767A1 (zh) 2019-09-11 2019-09-11 Autonomous robot tooling system, control system, control method and storage medium
EP19944974.5A EP4005745A4 (en) 2019-09-11 2019-09-11 AUTONOMOUS ROBOT TOOLING SYSTEM, CONTROL SYSTEM, CONTROL METHOD AND STORAGE MEDIUM
CN201980098648.8A CN114174007B (zh) 2019-09-11 Autonomous robot tooling system, control system, control method and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/105438 WO2021046767A1 (zh) 2019-09-11 2019-09-11 Autonomous robot tooling system, control system, control method and storage medium

Publications (1)

Publication Number Publication Date
WO2021046767A1 true WO2021046767A1 (zh) 2021-03-18

Family

ID=74866861

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/105438 WO2021046767A1 (zh) 2019-09-11 2019-09-11 自主式机器人工装***、控制***、控制方法及存储介质

Country Status (2)

Country Link
EP (1) EP4005745A4 (zh)
WO (1) WO2021046767A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113703382A (zh) * 2021-07-13 2021-11-26 特科能(株洲)科技有限公司 Workpiece recognition system for a multi-purpose atmosphere nitriding furnace with a pre-evacuated antechamber
CN114227706A (zh) * 2021-12-15 2022-03-25 熵智科技(深圳)有限公司 Groove cutting method, apparatus, device, system and medium based on 3D vision
CN114799849A (zh) * 2022-06-27 2022-07-29 深圳市中弘凯科技有限公司 Machine-vision-based system for collecting and analyzing the operating parameters of screw-driving machines
CN116713514A (zh) * 2023-07-28 2023-09-08 嘉钢精密工业(盐城)有限公司 Machine-vision-based system and method for precise positioning of machining holes in castings

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003225837A (ja) * 2002-01-31 2003-08-12 Nitto Seiko Co Ltd Automatic screw tightening device
CN102909548A (zh) * 2012-10-13 2013-02-06 桂林电子科技大学 Automatic screw locking method and device
CN104259839A (zh) * 2014-09-03 2015-01-07 四川长虹电器股份有限公司 Automatic screw machine based on visual recognition and screw installation method thereof
CN108312144A (zh) * 2017-12-25 2018-07-24 北京航天测控技术有限公司 Machine-vision-based robot automatic screw-locking control system and method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6459227B2 (ja) * 2014-06-02 2019-01-30 セイコーエプソン株式会社 Robot system
JP6506245B2 (ja) * 2016-12-26 2019-04-24 ファナック株式会社 Machine learning device for learning assembly operations and component assembly system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003225837A (ja) * 2002-01-31 2003-08-12 Nitto Seiko Co Ltd Automatic screw tightening device
CN102909548A (zh) * 2012-10-13 2013-02-06 桂林电子科技大学 Automatic screw locking method and device
CN104259839A (zh) * 2014-09-03 2015-01-07 四川长虹电器股份有限公司 Automatic screw machine based on visual recognition and screw installation method thereof
CN108312144A (zh) * 2017-12-25 2018-07-24 北京航天测控技术有限公司 Machine-vision-based robot automatic screw-locking control system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4005745A4 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113703382A (zh) * 2021-07-13 2021-11-26 特科能(株洲)科技有限公司 Workpiece recognition system for a multi-purpose atmosphere nitriding furnace with a pre-evacuated antechamber
CN114227706A (zh) * 2021-12-15 2022-03-25 熵智科技(深圳)有限公司 Groove cutting method, apparatus, device, system and medium based on 3D vision
CN114799849A (zh) * 2022-06-27 2022-07-29 深圳市中弘凯科技有限公司 Machine-vision-based system for collecting and analyzing the operating parameters of screw-driving machines
CN116713514A (zh) * 2023-07-28 2023-09-08 嘉钢精密工业(盐城)有限公司 Machine-vision-based system and method for precise positioning of machining holes in castings
CN116713514B (zh) * 2023-07-28 2023-11-03 嘉钢精密工业(盐城)有限公司 Machine-vision-based system and method for precise positioning of machining holes in castings

Also Published As

Publication number Publication date
EP4005745A4 (en) 2023-04-19
CN114174007A (zh) 2022-03-11
EP4005745A1 (en) 2022-06-01

Similar Documents

Publication Publication Date Title
WO2021046767A1 (zh) 2021-03-18 Autonomous robot tooling system, control system, control method and storage medium
CN108453701B (zh) 2021-02-12 Robot control method, robot teaching method and robot system
CN106426161B (zh) 2020-04-07 System and method for associating machine vision coordinate spaces in a guided assembly environment
CN108698228B (zh) 2021-04-06 Task creation device, work system, and control device for a work robot
EP3354418A2 (en) 2018-08-01 Robot control method and device
JPH07311610A (ja) 1995-11-28 Coordinate system setting method using a vision sensor
JP2008015683A (ja) 2008-01-24 Device, program, recording medium and method for creating a robot program
CN113319859B (zh) 2022-06-28 Robot teaching method, system and apparatus, and electronic device
US11679508B2 (en) 2023-06-20 Robot device controller for controlling position of robot
CN113379849A (zh) 2021-09-10 Depth-camera-based autonomous recognition and intelligent grasping method and system for robots
CN114080590A (zh) 2022-02-22 Robotic bin picking system and method using advanced scanning technology
JP2004243215A (ja) 2004-09-02 Robot teaching method for sealer application device, and sealer application device
CN112828892A (zh) 2021-05-25 Workpiece grasping method and apparatus, computer device, and storage medium
CN110232710B (zh) 2021-07-27 Article positioning method, system and device based on a three-dimensional camera
CN109732601B (zh) 2021-02-19 Method and device for automatically calibrating a robot pose perpendicular to the optical axis of the camera
US10786901B2 (en) 2020-09-29 Method for programming robot in vision base coordinate
CN111571596B (zh) 2022-01-14 Method and system for correcting the errors of a metallurgical plug-in assembly robot using vision
JP2003330511A (ja) 2003-11-21 Sealer application device
JP2015003348A (ja) 2015-01-08 Robot control system, control device, robot, control method of robot control system, and robot control method
CN114174007B (zh) 2024-07-23 Autonomous robot tooling system, control system, control method and storage medium
US10532460B2 (en) 2020-01-14 Robot teaching device that sets teaching point based on motion image of workpiece
JP7482364B2 (ja) 2024-05-14 Robot-mounted mobile device and system
CN112643718B (zh) 2024-06-07 Image processing apparatus, control method therefor, and storage medium storing control program therefor
CN110060330B (zh) 2023-05-23 Three-dimensional modeling method and apparatus based on point cloud images, and robot
JP6799614B2 (ja) 2020-12-16 Information processing device and information creation method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19944974

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019944974

Country of ref document: EP

Effective date: 20220224

NENP Non-entry into the national phase

Ref country code: DE