CN116494248A - Visual positioning method of industrial robot

Info

Publication number: CN116494248A (this application); CN116494248B (granted publication)
Application number: CN202310754410.4A
Authority: CN
Original language: Chinese (zh)
Prior art keywords: target, grabbing, industrial robot, visual positioning, target material
Inventors: 邱伟宸, 胡发辉, 张强, 冯晓春, ***
Current and original assignee: Shenzhen Crk M&e Equipments Co ltd
Application filed by Shenzhen Crk M&e Equipments Co ltd; priority to CN202310754410.4A
Legal status: Granted; Active

Classifications

    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to the technical field of machine vision, and discloses a visual positioning method of an industrial robot, which is used for improving the grabbing gesture accuracy and the feeding efficiency of the industrial robot. The method comprises the following steps: according to the visual positioning information, matching mapping relations between the target materials and the material transmission channels, and generating a first target material grabbing gesture and a material grabbing sequence of each target material; acquiring weight information and material attribute information of each target material, and calculating vibration compensation parameters of each target material according to the weight information and the material attribute information; fitting the vibration compensation parameters and the grabbing postures of the first target materials to obtain grabbing postures of the second target materials of each target material; generating target action execution instructions of the industrial robot according to the second target material grabbing postures and the material grabbing sequences of each target material, and sequentially placing at least two target materials in a target feeding area according to the target action execution instructions.

Description

Visual positioning method of industrial robot
Technical Field
The invention relates to the technical field of machine vision, in particular to a visual positioning method of an industrial robot.
Background
Industrial robots are widely used in the automation and production fields; however, they often require accurate visual positioning techniques in order to perform their tasks correctly. For this reason, visual localization methods for industrial robots have been widely studied and developed.
Existing industrial robot vision positioning techniques include marker-based positioning, feature-matching-based positioning, deep-learning-based positioning, and the like. However, these techniques still have a number of problems: marker-based localization is limited by the quality and accuracy of the markers, feature matching is susceptible to illumination changes and interference, and deep learning algorithms require the support of extensive data sets.
Disclosure of Invention
The invention provides a visual positioning method of an industrial robot, which is used for improving the grabbing gesture accuracy and the feeding efficiency of the industrial robot.
The first aspect of the invention provides a visual positioning method of an industrial robot, which comprises the following steps:
dividing material transmission channels of a material feeding transmission platform in a preset industrial robot to obtain a plurality of material transmission channels;
creating a first mapping relation between a single material conveying channel and a first candidate material grabbing gesture, and creating a second mapping relation between every two material conveying channels and a second candidate material grabbing gesture;
Visual positioning is carried out on at least two adjacent target materials, visual positioning information of each target material is generated, mapping relations of the target materials and the material transmission channels are matched according to the visual positioning information, and a first target material grabbing gesture and a material grabbing sequence of each target material are generated;
acquiring weight information and material attribute information of each target material, and calculating vibration compensation parameters of each target material according to the weight information and the material attribute information;
fitting the vibration compensation parameters and the grabbing postures of the first target materials to obtain grabbing postures of the second target materials of each target material;
generating target action execution instructions of the industrial robot according to the second target material grabbing postures of each target material and the material grabbing sequence, and sequentially placing the at least two target materials in a target feeding area according to the target action execution instructions.
With reference to the first aspect, in a first implementation manner of the first aspect of the present invention, the dividing a material conveying channel of a feeding and conveying platform in a preset industrial robot to obtain a plurality of material conveying channels includes:
The method comprises the steps that identification extraction is carried out on a preset feeding transmission platform through a preset industrial robot, so that a plurality of outlet identifications and a plurality of inlet identifications are obtained;
generating a digital twin model of the feed transmission platform according to the plurality of outlet identifiers and the plurality of inlet identifiers;
and dividing the material transmission channels of the digital twin model based on preset channel parameters to obtain a plurality of material transmission channels.
With reference to the first aspect, in a second implementation manner of the first aspect of the present invention, creating a first mapping relationship between a single material conveying channel and a first candidate material grabbing gesture, and creating a second mapping relationship between every two material conveying channels and a second candidate material grabbing gesture includes:
obtaining the maximum material size and the maximum material weight of the material conveying channels, and respectively modeling each material conveying channel according to the maximum material size and the maximum material weight to obtain a channel model corresponding to each material conveying channel;
creating a first mapping relation between a single material transmission channel and a first candidate material grabbing gesture according to a channel model corresponding to each material transmission channel;
and calculating the relative position and direction between every two material transmission channels according to the channel model corresponding to each material transmission channel, and creating a second mapping relation between every two material transmission channels and a second candidate material grabbing gesture according to the relative position and direction.
With reference to the first aspect, in a third implementation manner of the first aspect of the present invention, the performing visual positioning on at least two adjacent target materials to generate visual positioning information of each target material, and matching, according to the visual positioning information, mapping relationships between the target materials and the plurality of material transmission channels, to generate a first target material grabbing gesture and a material grabbing sequence of each target material, includes:
performing three-dimensional scanning on at least two adjacent target materials to obtain three-dimensional point cloud data of the at least two target materials;
visual positioning is carried out according to the three-dimensional point cloud data, so that visual positioning information of each target material is obtained;
extracting the shape and the characteristics of each target material according to the visual positioning information of each target material, and calculating the relative positions of the at least two target materials according to the shape and the characteristics;
determining the material grabbing sequence of the at least two target materials according to the relative positions;
and matching the mapping relation between the target material and the material transmission channels according to the visual positioning information.
With reference to the first aspect, in a fourth implementation manner of the first aspect of the present invention, the obtaining weight information and material attribute information of each target material, and calculating a vibration compensation parameter of each target material according to the weight information and the material attribute information, includes:
Respectively acquiring weight information of each target material based on a weighing sensor in the feeding transmission platform, and scanning material attribute information of each target material through the industrial robot;
setting a vibration compensation coefficient of each target material according to the material attribute information;
and inputting the vibration compensation coefficient and the weight information into a preset vibration compensation model to perform vibration compensation calculation, so as to obtain vibration compensation parameters of each target material.
With reference to the first aspect, in a fifth implementation manner of the first aspect of the present invention, the fitting of the vibration compensation parameters and the first target material grabbing gestures to obtain the second target material grabbing gesture of each target material includes:
combining the vibration compensation parameters and the grabbing postures of the first target materials to obtain a posture combined data model of each target material;
performing gesture fitting according to the gesture merging data model to obtain gesture information of each target material after fitting;
and calculating the second target material grabbing gesture of each target material according to the gesture information of each target material after fitting.
With reference to the first aspect, in a sixth implementation manner of the first aspect of the present invention, the generating, according to the second target material grabbing gesture of each target material and the material grabbing sequence, a target action execution instruction of the industrial robot, and placing the at least two target materials in a target feeding area sequentially according to the target action execution instruction includes:
Generating a grabbing parameter set of the industrial robot according to the grabbing gesture of the second target material of each target material and the grabbing sequence of the materials;
according to a preset execution instruction generation algorithm, performing instruction conversion on the grabbing parameter set to generate a target action execution instruction of the industrial robot;
and sequentially placing the at least two target materials in a target feeding area according to the target action execution instruction.
A second aspect of the present invention provides a visual positioning apparatus of an industrial robot, the visual positioning apparatus of an industrial robot comprising:
the dividing module is used for dividing material transmission channels of a material feeding transmission platform in the preset industrial robot to obtain a plurality of material transmission channels;
the creation module is used for creating a first mapping relation between a single material transmission channel and a first candidate material grabbing gesture and creating a second mapping relation between every two material transmission channels and a second candidate material grabbing gesture;
the visual positioning module is used for performing visual positioning on at least two adjacent target materials, generating visual positioning information of each target material, matching mapping relations between the target materials and the material transmission channels according to the visual positioning information, and generating a first target material grabbing gesture and a material grabbing sequence of each target material;
The calculating module is used for acquiring weight information and material attribute information of each target material and calculating vibration compensation parameters of each target material according to the weight information and the material attribute information;
the fitting module is used for fitting the vibration compensation parameters and the grabbing postures of the first target materials to obtain the second target material grabbing posture of each target material;
and the execution module is used for generating a target action execution instruction of the industrial robot according to the second target material grabbing gesture of each target material and the material grabbing sequence, and sequentially placing the at least two target materials in a target feeding area according to the target action execution instruction.
A third aspect of the present invention provides a visual positioning apparatus for an industrial robot, comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the visual positioning apparatus of the industrial robot to perform the visual positioning method of the industrial robot described above.
A fourth aspect of the present invention provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the above-described visual positioning method of an industrial robot.
According to the technical scheme provided by the invention, mapping relations between the target materials and the plurality of material transmission channels are matched according to the visual positioning information, and a first target material grabbing gesture and a material grabbing sequence of each target material are generated; weight information and material attribute information of each target material are acquired, and vibration compensation parameters of each target material are calculated according to the weight information and the material attribute information; the vibration compensation parameters and the first target material grabbing gestures are fitted to obtain the second target material grabbing gesture of each target material; the target action execution instruction of the industrial robot is generated according to the second target material grabbing gesture and the material grabbing sequence of each target material, and the at least two target materials are sequentially placed in a target feeding area according to the target action execution instruction, thereby improving the grabbing gesture accuracy and the feeding efficiency of the industrial robot.
Drawings
FIG. 1 is a schematic view of an embodiment of a visual positioning method of an industrial robot according to an embodiment of the present invention;
FIG. 2 is a flowchart of creating a first mapping relationship and a second mapping relationship according to an embodiment of the present invention;
FIG. 3 is a flow chart of generating a first target material grabbing gesture and a material grabbing sequence according to an embodiment of the present invention;
FIG. 4 is a flow chart of calculating vibration compensation parameters according to an embodiment of the present invention;
FIG. 5 is a schematic view of an embodiment of a visual positioning device of an industrial robot according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an embodiment of a visual positioning apparatus of an industrial robot in an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a visual positioning method of an industrial robot, which is used for improving the grabbing gesture accuracy and feeding efficiency of the industrial robot. The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, a specific flow of an embodiment of the present invention is described below with reference to fig. 1, and an embodiment of a visual positioning method for an industrial robot in an embodiment of the present invention includes:
s101, dividing a material transmission channel of a material feeding transmission platform in a preset industrial robot to obtain a plurality of material transmission channels;
It is to be understood that the execution subject of the present invention may be a visual positioning device of an industrial robot, and may also be a terminal or a server, which is not limited herein. The embodiment of the invention is described by taking a server as the execution subject as an example.
Specifically, the server first extracts identification information from the feeding transmission platform in the industrial robot to obtain an identification information set corresponding to the feeding transmission platform; it then divides the identification information according to the identification information set to obtain a plurality of outlet identifications and a plurality of inlet identifications corresponding to the feeding transmission platform, and further divides the material transmission channels of the feeding transmission platform in the preset industrial robot according to the plurality of outlet identifications and the plurality of inlet identifications, to obtain a plurality of material transmission channels.
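As a non-authoritative illustration of this channel-division step, the following Python sketch pairs inlet and outlet identifiers into transmission channels by lateral position; the function name and all marker coordinates are assumptions for illustration, not data from the patent.

```python
import numpy as np

def divide_channels(inlet_marks, outlet_marks):
    """Pair each inlet identifier with its nearest remaining outlet
    identifier (by lateral x position) to form one material transmission
    channel. Marker coordinates are hypothetical (x, y) platform positions."""
    channels = []
    outlets = list(outlet_marks)
    for inlet in inlet_marks:
        # nearest remaining outlet in the lateral (x) direction
        j = int(np.argmin([abs(inlet[0] - o[0]) for o in outlets]))
        channels.append({"inlet": inlet, "outlet": outlets.pop(j)})
    return channels

inlets = [(0.10, 0.0), (0.35, 0.0), (0.60, 0.0)]   # assumed inlet marks (m)
outlets = [(0.12, 1.2), (0.33, 1.2), (0.61, 1.2)]  # assumed outlet marks (m)
for i, ch in enumerate(divide_channels(inlets, outlets)):
    print(f"channel {i}: {ch}")
```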
S102, creating a first mapping relation between a single material conveying channel and a first candidate material grabbing gesture, and creating a second mapping relation between every two material conveying channels and a second candidate material grabbing gesture;
Specifically, the server firstly determines a grabbing gesture library of the first candidate material and the second candidate material, encodes the grabbing gesture library into a digital or vector form, performs size analysis on each material transmission channel through a preset image detection algorithm, obtains the maximum material sizes and weights of the material transmission channels, further creates a first mapping relation between a single material transmission channel and the grabbing gesture of the first candidate material according to the maximum material sizes and weights of the material transmission channels, further obtains relative position information between every two material transmission channels through the image detection algorithm for every two material transmission channels, and further creates a second mapping relation between every two material transmission channels and the grabbing gesture of the second candidate material according to the relative position information.
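A minimal sketch of the two mapping tables described above, assuming grasp poses are encoded as 6-vectors (x, y, z, roll, pitch, yaw); the channel ids, pose values and weight limits below are illustrative assumptions, not the patented data.

```python
# First mapping: one candidate grasp pose per single transmission channel.
first_mapping = {
    "channel_0": {"pose": [0.10, 0.60, 0.25, 0.0, 3.14, 0.0], "max_kg": 2.0},
    "channel_1": {"pose": [0.35, 0.60, 0.25, 0.0, 3.14, 0.0], "max_kg": 2.0},
}

# Second mapping: keyed by an ordered pair of channels, storing the grasp
# pose adapted to the pair's relative position and direction.
second_mapping = {
    ("channel_0", "channel_1"): {"pose": [0.22, 0.60, 0.28, 0.0, 3.14, 0.1]},
}

def lookup_pose(channel, neighbour=None):
    """Return the candidate grasp pose for one channel, or for a channel pair."""
    if neighbour is not None:
        return second_mapping[(channel, neighbour)]["pose"]
    return first_mapping[channel]["pose"]

print(lookup_pose("channel_0"))
print(lookup_pose("channel_0", "channel_1"))
```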
S103, performing visual positioning on at least two adjacent target materials, generating visual positioning information of each target material, matching mapping relations of the target materials and a plurality of material transmission channels according to the visual positioning information, and generating a first target material grabbing gesture and a material grabbing sequence of each target material;
Specifically, images and depth information of the at least two target materials are acquired by a preset image acquisition device, and point cloud data sets are extracted from them. The server preprocesses the point cloud data through point cloud filtering and registration algorithms to remove noise and errors, segments the objects in the point cloud data set with a point cloud segmentation algorithm to obtain a point cloud data subset for each target material, performs attitude estimation on each obtained subset, and calculates a rigid body transformation matrix of each target material in the point cloud according to the estimated position and direction, thereby acquiring the 3D attitude information of each target material. The point cloud data subsets of each target material are then classified and identified using features such as colour, shape and text, so as to determine the specific type and related attributes of each material. The position and relative position information of each material in the point cloud are matched to determine the grabbing sequence and grabbing targets: a point cloud registration algorithm and hand-eye calibration are used to acquire the 3D gesture information of each target material in the grabbing coordinate system, the corresponding grabbing gestures and grabbing positions are calculated according to the mapping relations, the material grabbing sequence of the at least two target materials is determined according to the relative positions, and finally the mapping relations between the target materials and the plurality of material transmission channels are matched according to the visual positioning information.
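The pose-estimation step above rests on recovering a rigid body transformation from matched point sets. The following self-contained numpy sketch shows the standard SVD (Kabsch) solution that registration/ICP pipelines of this kind use internally; the point data are synthetic and the function name is our own, not the patent's.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate the rigid body transform (R, t) aligning matched point
    sets src -> dst with the SVD (Kabsch) method."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # repair a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Illustrative check: a cloud rotated 30 degrees about z and translated.
rng = np.random.default_rng(0)
cloud = rng.random((100, 3))
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
moved = cloud @ R_true.T + np.array([0.2, -0.1, 0.05])
R, t = rigid_transform(cloud, moved)
print(np.allclose(R, R_true), t)
```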
S104, acquiring weight information and material attribute information of each target material, and calculating vibration compensation parameters of each target material according to the weight information and the material attribute information;
Specifically, the server obtains the weight information of each target material through a weighing sensor in the feeding transmission platform, and scans the material attribute information of each target material through the industrial robot; the material attribute information includes attributes such as the shape, size and form of the material. A three-dimensional solid model of each material is established according to the shape, size, weight and other attribute information, the inertia matrix of each target material is calculated, and the vibration frequency of each target material, namely its natural frequency, is computed from its mass and inertia matrix. The vibration frequencies of different materials are then compared, the vibration compensation coefficient of each target material is calculated according to attribute information such as weight, shape and form, and finally the server calculates the vibration compensation parameters from the vibration compensation coefficient of each target material.
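The patent does not publish its vibration compensation model, so the sketch below is only a hedged illustration, modelling each grasped material as a single-degree-of-freedom mass on the gripper; the stiffness value and per-material coefficients are invented placeholders.

```python
import math

GRIPPER_STIFFNESS = 5.0e4  # N/m, assumed effective gripper stiffness

COMPENSATION_COEFF = {     # assumed per-material-type coefficients
    "metal": 0.8,
    "plastic": 1.2,
    "glass": 1.5,
}

def vibration_parameters(mass_kg, material):
    """Natural frequency f_n = (1 / 2*pi) * sqrt(k / m), combined with a
    material-dependent coefficient into a compensation parameter."""
    f_n = math.sqrt(GRIPPER_STIFFNESS / mass_kg) / (2.0 * math.pi)
    coeff = COMPENSATION_COEFF[material]
    return {"natural_hz": f_n, "compensation": coeff / f_n}

print(vibration_parameters(1.5, "metal"))
print(vibration_parameters(0.3, "glass"))
```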
S105, fitting the vibration compensation parameters and the grabbing postures of the first target materials to obtain second grabbing postures of each target material;
specifically, the server determines initial estimation of the grabbing pose according to the grabbing pose of the first target material and the vibration compensation parameter, inputs 3D point cloud data of the material to be grabbed, and aligns the target point cloud with the reference point cloud through a point cloud registration algorithm so as to eliminate movement of the object in a sensor coordinate system. And establishing an object rigid body transformation matrix by utilizing object posture information in the point cloud data, and adjusting the initial value of the grabbing posture by utilizing the known grabbing posture of the first target material. And respectively carrying out gesture estimation on each target material by utilizing an ICP algorithm based on the point cloud characteristics to obtain high-precision gesture information of each target material, planning a grabbing gesture of a second target material according to the grabbing track of the first target material and the high-precision gesture information of each target material, further optimizing the precision of grabbing gesture fitting through multiple iterations, filtering abnormal data, and finally obtaining the grabbing gesture of the second target material of each target material.
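One way such a compensation parameter could be folded into the first grasp pose to yield the second is shown below, composing a 4x4 homogeneous pose with a small translation and tool-axis rotation offset; the offset magnitudes and pose values are assumptions, not the patent's fitted output.

```python
import numpy as np

def apply_compensation(first_pose, delta_xyz, delta_rz):
    """Compose a 4x4 first grasp pose with a small compensation offset:
    a translation delta plus a rotation about the tool z axis."""
    c, s = np.cos(delta_rz), np.sin(delta_rz)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]   # rotation offset about z
    T[:3, 3] = delta_xyz            # translation offset
    return first_pose @ T

first = np.eye(4)
first[:3, 3] = [0.35, 0.60, 0.25]   # assumed first grasp position (m)
second = apply_compensation(first, [0.0, 0.0, -0.004], np.deg2rad(1.5))
print(second.round(4))
```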
S106, generating target action execution instructions of the industrial robot according to the second target material grabbing postures and the material grabbing sequences of each target material, and sequentially placing at least two target materials in a target feeding area according to the target action execution instructions.
Specifically, the server firstly sorts all target materials according to a feeding sequence, marks a second target material grabbing gesture of each material, calculates a motion track required by a robot for executing grabbing actions, a joint track of a robot arm and a short-distance track required by placing actions by utilizing a robot track planning algorithm, generates a target action execution instruction according to the motion track of each material and the joint track of the robot arm, moves the robot arm to the target grabbing gesture according to the target action execution instruction, executes grabbing actions, successfully grabs the target materials, moves the robot arm to the target placing gesture according to the target action execution instruction, executes the placing actions and places the target materials in a target feeding area.
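The sketch below illustrates the last step, turning an ordered grabbing parameter set into a flat action instruction list; the instruction names, poses and dataclass are illustrative assumptions, since the patent does not specify an instruction format.

```python
from dataclasses import dataclass

@dataclass
class Grab:
    material_id: str
    grasp_pose: list      # second target material grabbing posture
    place_pose: list      # pose in the target feeding area
    order: int            # material grabbing sequence index

def to_instructions(grabs):
    """Convert an ordered grabbing parameter set into a flat list of
    robot action instructions (move / close / open), in grab order."""
    program = []
    for g in sorted(grabs, key=lambda g: g.order):
        program += [
            ("MOVE_TO", g.grasp_pose),
            ("CLOSE_GRIPPER", g.material_id),
            ("MOVE_TO", g.place_pose),
            ("OPEN_GRIPPER", g.material_id),
        ]
    return program

grabs = [
    Grab("mat_B", [0.35, 0.6, 0.25], [0.9, 0.2, 0.15], order=1),
    Grab("mat_A", [0.10, 0.6, 0.25], [0.9, 0.0, 0.15], order=0),
]
for step in to_instructions(grabs):
    print(step)
```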
In the embodiment of the invention, mapping relations between the target materials and the plurality of material transmission channels are matched according to the visual positioning information, and a first target material grabbing gesture and a material grabbing sequence of each target material are generated; weight information and material attribute information of each target material are acquired, and vibration compensation parameters of each target material are calculated according to the weight information and the material attribute information; the vibration compensation parameters and the first target material grabbing gestures are fitted to obtain the second target material grabbing gesture of each target material; the target action execution instruction of the industrial robot is generated according to the second target material grabbing gesture and the material grabbing sequence of each target material, and the at least two target materials are sequentially placed in the target feeding area according to the target action execution instruction, thereby improving the grabbing gesture accuracy and the feeding efficiency of the industrial robot.
In a specific embodiment, the process of executing step S101 may specifically include the following steps:
(1) The method comprises the steps that identification extraction is carried out on a preset feeding transmission platform through a preset industrial robot, so that a plurality of outlet identifications and a plurality of inlet identifications are obtained;
(2) Generating a digital twin model of the feed transmission platform according to the plurality of outlet identifiers and the plurality of inlet identifiers;
(3) And carrying out material transmission channel division on the digital twin model based on preset channel parameters to obtain a plurality of material transmission channels.
Specifically, the server extracts identification information of the feeding transmission platform in the industrial robot, acquires an identification information set corresponding to the feeding transmission platform, and further performs identification information division according to the identification information set to obtain a plurality of outlet identifications and a plurality of inlet identifications corresponding to the feeding transmission platform.
And generating three-dimensional point cloud data corresponding to the feeding transmission platform according to the plurality of outlet identifiers and the plurality of inlet identifiers, wherein the server constructs a virtual space coordinate system according to the plurality of outlet identifiers and the plurality of inlet identifiers, further respectively determines virtual three-dimensional coordinate data corresponding to each outlet identifier and each inlet identifier to obtain a virtual three-dimensional coordinate data set, and further constructs a digital twin model according to the virtual three-dimensional coordinate data set to obtain the digital twin model of the feeding transmission platform.
Finally, the transmission channel parameters are designed according to the geometric characteristics of the digital twin model and information such as channel length, width, height and material. It is to be noted that the channel parameters include: the channel shape, the material and thickness of the channel inner wall, the bending and angle information of the channel, and the size of the channel opening. Channel segmentation is performed on the digital twin model according to these channel parameters, dividing the model into a plurality of transmission channels of different sizes, materials and shapes according to the material transmission direction and the channel parameters, and finally a plurality of material transmission channels are obtained.
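A toy sketch of this segmentation, assuming the simplest case of splitting the platform twin into equal-width lanes; the platform dimensions and channel count are assumed values, and real channel parameters (wall material, bends, opening sizes) are omitted.

```python
def segment_platform(width_m, length_m, n_channels):
    """Split a rectangular platform model into n equal-width
    material transmission channels along its width."""
    lane = width_m / n_channels
    return [
        {"id": f"channel_{i}",
         "x_min": round(i * lane, 3),
         "x_max": round((i + 1) * lane, 3),
         "length": length_m}
        for i in range(n_channels)
    ]

for ch in segment_platform(width_m=0.75, length_m=1.2, n_channels=3):
    print(ch)
```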
In a specific embodiment, as shown in fig. 2, the process of executing step S102 may specifically include the following steps:
s201, obtaining the maximum material sizes and the maximum material weights of a plurality of material conveying channels, and respectively modeling each material conveying channel according to the maximum material sizes and the maximum material weights to obtain a channel model corresponding to each material conveying channel;
s202, creating a first mapping relation between a single material transmission channel and a first candidate material grabbing gesture according to a channel model corresponding to each material transmission channel;
s203, calculating the relative position and direction between every two material transmission channels according to the channel model corresponding to each material transmission channel, and creating a second mapping relation between every two material transmission channels and the second candidate material grabbing gesture according to the relative position and direction.
Specifically, the server firstly obtains the maximum material sizes and weights of a plurality of material transmission channels, then determines the maximum material sizes and weight standards which can be contained in each channel according to the channel parameters and the channel shapes, classifies the transmission channels according to the size and weight parameters of various materials, determines the material types and limits supported by each channel, and further establishes a channel model. When the channel model is built, 3D modeling is performed according to parameters such as the size, the width, the height and the length of the material transmission channels, and a channel model corresponding to each material transmission channel is obtained.
And then, according to parameters such as the size, the shape, the weight and the like of the first candidate material, determining the motion track and the gesture of the first candidate material in the transmission channel, converting the gesture parameters of the first candidate material into gesture parameters for controlling a robot arm through a robot control algorithm, collecting action parameters such as clamping, placing, moving and the like of the robot arm on the first candidate material, further obtaining corresponding mapping relations between the gesture of the robot arm and the transmission track in different material motion processes through simulation and emulation of each material transmission channel, and completing establishment of a first mapping relation between a single material transmission channel and the grabbing gesture of the first candidate material, and finally obtaining the first mapping relation between the single material transmission channel and the grabbing gesture of the first candidate material.
Finally, determining relative positions and directions, including angles, distances, relative heights and the like, of channel models corresponding to the two material transmission channels, determining motion tracks and postures between the two material transmission channels according to parameters, including clamping directions, clamping forces and the like, of the second candidate materials, calculating relative positions and directions between the two material transmission channels through a robot control algorithm, including posture parameters and track parameters for clamping and placing the materials, and acquiring corresponding mapping relations between the postures of arms and the transmission tracks of the robot in different material motion processes through simulation and emulation between every two material transmission channels so as to complete establishment of a second mapping relation between each two material transmission channels and the grabbing posture of the second candidate material.
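The relative position and direction between two channels can be expressed compactly if each channel model carries a pose as a 4x4 homogeneous matrix; the sketch below computes the relative transform T_rel = inv(T_a) @ T_b, a standard construction. The channel frames and spacing are assumptions for illustration.

```python
import numpy as np

def relative_pose(T_a, T_b):
    """Pose of channel b expressed in channel a's frame. From T_rel we
    read the relative position (translation) and direction (yaw) used
    when creating the second mapping relation."""
    T_rel = np.linalg.inv(T_a) @ T_b
    position = T_rel[:3, 3]
    heading = np.arctan2(T_rel[1, 0], T_rel[0, 0])  # yaw in radians
    return position, heading

T_a, T_b = np.eye(4), np.eye(4)
T_b[:3, 3] = [0.25, 0.0, 0.0]       # assumed lateral channel spacing (m)
print(relative_pose(T_a, T_b))
```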
In a specific embodiment, as shown in fig. 3, the process of executing step S103 may specifically include the following steps:
s301, performing three-dimensional scanning on at least two adjacent target materials to obtain three-dimensional point cloud data of the at least two target materials;
s302, performing visual positioning according to the three-dimensional point cloud data to obtain visual positioning information of each target material;
S303, extracting the shape and the characteristics of each target material according to the visual positioning information of each target material, and calculating the relative positions of at least two target materials according to the shape and the characteristics;
s304, determining a material grabbing sequence of at least two target materials according to the relative positions;
s305, matching the mapping relation between the target material and the material transmission channels according to the visual positioning information.
Specifically, the server places at least two adjacent target materials in a scanning area of the scanning equipment, adjusts the scanning equipment to keep a proper distance and angle with the target materials, then starts the three-dimensional scanning equipment, and controls the position, the posture and the rotation of the target materials according to the indication of the scanning equipment so that the scanning equipment can capture the complete appearance and each detail of the target materials, and finally obtains three-dimensional point cloud data of the at least two target materials;
and carrying out standardized processing such as denoising, interpolation, registration, compression and the like on the acquired three-dimensional point cloud data to construct a complete three-dimensional model, and extracting and matching characteristic points of the three-dimensional model to realize positioning and identification of a target material, wherein visual positioning information of the target material, including parameters such as position, attitude and size of the material, is calculated according to the relative positions and directions of the matching points and the characteristic points. In the embodiment of the invention, if visual positioning is required to be performed on a plurality of target materials, a multi-sensor fusion technology and a multi-camera calibration technology can be adopted to process and integrate three-dimensional point cloud data of different visual angles, so that positioning accuracy and robustness are improved.
Further, the shape and characteristics of each target material, including size, shape, curvature, angle, surface characteristics, etc., are extracted according to the visual positioning information of the target material. Furthermore, the comparison and matching are performed on different target materials to obtain the relative positions and directions among the target materials, and it should be noted that if the relative positions and directions among a plurality of target materials need to be calculated, a multi-view three-dimensional reconstruction method can be adopted, and meanwhile, the target materials are scanned and matched by utilizing a plurality of view angles and a plurality of sensors, so that the accuracy and precision of calculation are improved.
Finally, according to the calculated relative positions and directions of at least two target materials, determining the material grabbing sequence between the two target materials, for example, grabbing materials with lower positions or materials with longer distances firstly so as to enable the following materials to run smoothly, and according to the visual positioning information of each target material, matching the mapping relation between the target material and a plurality of material transmission channels so as to confirm the correct path of material transmission and the gesture and track of the robot arm.
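A hedged sketch of the ordering heuristic just described: grab lower materials first, breaking ties by taking farther materials first so that later grabs are not obstructed. The material ids and positions are illustrative.

```python
materials = [
    {"id": "mat_A", "pos": (0.10, 0.60, 0.12)},
    {"id": "mat_B", "pos": (0.35, 0.55, 0.05)},
    {"id": "mat_C", "pos": (0.60, 0.70, 0.05)},
]

def grab_order(mats, base=(0.0, 0.0, 0.0)):
    """Sort materials into a grabbing sequence: lower z first,
    then farther from the robot base first."""
    def key(m):
        x, y, z = m["pos"]
        dist = ((x - base[0])**2 + (y - base[1])**2) ** 0.5
        return (z, -dist)
    return [m["id"] for m in sorted(mats, key=key)]

print(grab_order(materials))
```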
In a specific embodiment, as shown in fig. 4, the process of executing step S104 may specifically include the following steps:
S401, respectively acquiring weight information of each target material based on a weighing sensor in a feeding transmission platform, and scanning material attribute information of each target material through an industrial robot;
s402, setting vibration compensation coefficients of each target material according to material attribute information;
s403, inputting the vibration compensation coefficient and the weight information into a preset vibration compensation model to perform vibration compensation calculation, and obtaining the vibration compensation parameters of each target material.
Specifically, the server obtains the weight information of each target material through a weighing sensor in the feeding transmission platform, and scans the material attribute information of each target material through the industrial robot; the material attribute information includes attributes such as the shape, size and form of the material. A three-dimensional solid model of each material is established according to the shape, size, weight and other attribute information, the inertia matrix of each target material is calculated, and the vibration frequency of each target material, namely its natural frequency, is computed from its mass and inertia matrix. The vibration frequencies of different materials are then compared, the vibration compensation coefficient of each target material is calculated according to attribute information such as weight, shape and form, and finally the server calculates the vibration compensation parameters from the vibration compensation coefficient of each target material.
In a specific embodiment, the process of executing step S105 may specifically include the following steps:
(1) Combining the vibration compensation parameters and the grabbing postures of the first target materials to obtain a posture combined data model of each target material;
(2) Performing gesture fitting according to the gesture merging data model to obtain gesture information of each target material after fitting;
(3) And calculating the second target material grabbing gesture of each target material according to the gesture information of each target material after fitting.
Specifically, the server firstly collects grabbing gesture data and vibration compensation parameter data of a first target material according to a control algorithm and sensor equipment of a robot arm, performs preprocessing and analysis on the collected data, including denoising, filtering, alignment and calibration processing, so as to improve accuracy and precision of the data, combines and fuses the grabbing gesture of the first target material and the vibration compensation parameter, and generates a gesture combined data model including the position, gesture, motion track, grabbing parameter and the like of the material.
And loading a posture merging data model, acquiring information such as the position, the posture and the grabbing parameters of each target material, fitting and matching the actual posture data acquired by the sensor according to the posture merging data model so as to realize accurate and stable control of the material posture, and generating posture information of each target material after fitting.
Finally, according to fitting posture data of the target materials, information such as positions, postures and grabbing parameters of the target materials are calculated, and then according to a control algorithm of a robot arm, grabbing postures of the second target materials such as grabbing positions, angles and strength are calculated through relative positions and distances of the target materials.
In a specific embodiment, the process of executing step S106 may specifically include the following steps:
(1) Generating a grabbing parameter set of the industrial robot according to the grabbing gesture of the second target material of each target material and the grabbing sequence of the materials;
(2) According to a preset execution instruction generation algorithm, performing instruction conversion on the grabbing parameter set to generate a target action execution instruction of the industrial robot;
(3) And sequentially placing at least two target materials in the target feeding area according to the target action execution instruction.
Specifically, firstly, according to fitting gesture data of each target material and a second target material grabbing gesture, a grabbing parameter set is generated, wherein the grabbing parameter set comprises parameters such as a robot arm length, grabbing angles and grabbing forces, then, according to a material grabbing sequence and a path planning, the grabbing parameter set of each target material is ordered and combined to generate an grabbing parameter set of an industrial robot, further, according to a motion track of each material and a joint track of a robot arm, a target action executing instruction is generated, according to the target action executing instruction, the robot arm is moved to a target grabbing gesture, grabbing actions are executed, the target material is successfully grabbed, according to the target action executing instruction, the robot arm is moved to a target placing gesture, placing actions are executed, and the target material is placed in a target feeding area.
The method for positioning the industrial robot according to the embodiment of the present invention is described above, and the following describes the device for positioning the industrial robot according to the embodiment of the present invention, referring to fig. 5, and one embodiment of the device for positioning the industrial robot according to the embodiment of the present invention includes:
the dividing module 501 is configured to divide material transmission channels of a feeding transmission platform in a preset industrial robot to obtain a plurality of material transmission channels;
the creating module 502 is configured to create a first mapping relationship between a single material transfer channel and a first candidate material grabbing gesture, and create a second mapping relationship between each two material transfer channels and a second candidate material grabbing gesture;
the visual positioning module 503 is configured to perform visual positioning on at least two adjacent target materials, generate visual positioning information of each target material, and match mapping relationships between the target materials and the plurality of material transmission channels according to the visual positioning information, so as to generate a first target material grabbing gesture and a material grabbing sequence of each target material;
the calculating module 504 is configured to obtain weight information and material attribute information of each target material, and calculate a vibration compensation parameter of each target material according to the weight information and the material attribute information;
The fitting module 505 is configured to perform a grabbing gesture fitting on the vibration compensation parameter and the first target material grabbing gesture, so as to obtain a second target material grabbing gesture of each target material;
the execution module 506 is configured to generate a target action execution instruction of the industrial robot according to the second target material grabbing gesture of each target material and the material grabbing sequence, and sequentially place the at least two target materials in a target feeding area according to the target action execution instruction.
Through the cooperation of the above modules, mapping relations between the target materials and the plurality of material transmission channels are matched according to the visual positioning information, and a first target material grabbing gesture and a material grabbing sequence of each target material are generated; weight information and material attribute information of each target material are acquired, and vibration compensation parameters of each target material are calculated according to the weight information and the material attribute information; the vibration compensation parameters and the first target material grabbing gestures are fitted to obtain the second target material grabbing gesture of each target material; the target action execution instruction of the industrial robot is generated according to the second target material grabbing gesture and the material grabbing sequence of each target material, and the at least two target materials are sequentially placed in the target feeding area according to the target action execution instruction, thereby improving the grabbing gesture accuracy and the feeding efficiency of the industrial robot.
The above fig. 5 describes the visual positioning device of the industrial robot in the embodiment of the present invention in detail from the point of view of the modularized functional entity, and the following describes the visual positioning device of the industrial robot in the embodiment of the present invention in detail from the point of view of hardware processing.
Fig. 6 is a schematic structural diagram of a visual positioning apparatus for an industrial robot according to an embodiment of the present invention, where the visual positioning apparatus 600 for an industrial robot may have a relatively large difference due to different configurations or performances, and may include one or more processors (central processing units, CPU) 610 (e.g., one or more processors) and a memory 620, and one or more storage media 630 (e.g., one or more mass storage devices) storing application programs 633 or data 632. Wherein the memory 620 and the storage medium 630 may be transitory or persistent storage. The program stored on the storage medium 630 may include one or more modules (not shown), each of which may include a series of instruction operations in the visual positioning apparatus 600 of the industrial robot. Still further, the processor 610 may be configured to communicate with the storage medium 630 to execute a series of instruction operations in the storage medium 630 on the visual positioning device 600 of the industrial robot.
The industrial robot vision positioning device 600 may also include one or more power supplies 640, one or more wired or wireless network interfaces 650, one or more input/output interfaces 660, and/or one or more operating systems 631, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. It will be appreciated by those skilled in the art that the visual positioning device configuration shown in fig. 6 does not constitute a limitation of the visual positioning device of the industrial robot, which may include more or fewer components than shown, may combine certain components, or may adopt a different arrangement of components.
The present invention also provides a visual positioning apparatus for an industrial robot, which includes a memory and a processor, wherein the memory stores computer readable instructions that, when executed by the processor, cause the processor to execute the steps of the visual positioning method for an industrial robot in the above embodiments.
The present invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium, and may also be a volatile computer readable storage medium, in which instructions are stored which, when executed on a computer, cause the computer to perform the steps of the visual positioning method of an industrial robot.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied, in essence or in the part contributing to the prior art, in whole or in part, in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A visual positioning method of an industrial robot, characterized in that the visual positioning method of the industrial robot comprises:
dividing material transmission channels of a material feeding transmission platform in a preset industrial robot to obtain a plurality of material transmission channels;
creating a first mapping relation between a single material conveying channel and a first candidate material grabbing gesture, and creating a second mapping relation between every two material conveying channels and a second candidate material grabbing gesture;
visual positioning is carried out on at least two adjacent target materials, visual positioning information of each target material is generated, mapping relations of the target materials and the material transmission channels are matched according to the visual positioning information, and a first target material grabbing gesture and a material grabbing sequence of each target material are generated;
Acquiring weight information and material attribute information of each target material, and calculating vibration compensation parameters of each target material according to the weight information and the material attribute information;
fitting the vibration compensation parameters and the grabbing postures of the first target materials to obtain grabbing postures of the second target materials of each target material;
generating target action execution instructions of the industrial robot according to the second target material grabbing postures of each target material and the material grabbing sequence, and sequentially placing the at least two target materials in a target feeding area according to the target action execution instructions.
2. The visual positioning method of an industrial robot according to claim 1, wherein the dividing the material transmission channels of the feeding transmission platform in the preset industrial robot to obtain a plurality of material transmission channels comprises:
the method comprises the steps that identification extraction is carried out on a preset feeding transmission platform through a preset industrial robot, so that a plurality of outlet identifications and a plurality of inlet identifications are obtained;
generating a digital twin model of the feed transmission platform according to the plurality of outlet identifiers and the plurality of inlet identifiers;
and dividing the material transmission channels of the digital twin model based on preset channel parameters to obtain the plurality of material transmission channels.
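
A minimal sketch of the division step in claim 2, with the digital twin reduced to simple inlet/outlet marker pairing; the marker format and the nearest-in-x pairing rule are assumptions made for illustration:

```python
def extract_markers(platform_scan):
    # Stand-in for the robot's identifier extraction: each detected marker is assumed
    # to arrive as a dict with a kind ("inlet"/"outlet") and planar coordinates.
    inlets = [m["xy"] for m in platform_scan if m["kind"] == "inlet"]
    outlets = [m["xy"] for m in platform_scan if m["kind"] == "outlet"]
    return inlets, outlets

def divide_channels(inlets, outlets):
    # Digital-twin stand-in: pair each inlet with the outlet nearest in x; each pair
    # bounds one material transmission channel.
    lanes = []
    for i, (ix, iy) in enumerate(sorted(inlets)):
        ox, oy = min(outlets, key=lambda o: abs(o[0] - ix))
        lanes.append({"id": i, "inlet": (ix, iy), "outlet": (ox, oy)})
    return lanes

scan = [{"kind": "inlet", "xy": (0.0, 0.0)}, {"kind": "outlet", "xy": (0.0, 1.0)},
        {"kind": "inlet", "xy": (0.2, 0.0)}, {"kind": "outlet", "xy": (0.2, 1.0)}]
inlets, outlets = extract_markers(scan)
print(divide_channels(inlets, outlets))   # two lanes, one per inlet/outlet pair
```
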
3. The visual positioning method of an industrial robot according to claim 1, wherein creating the first mapping relation between a single material transmission channel and the first candidate material grabbing gesture, and creating the second mapping relation between every two material transmission channels and the second candidate material grabbing gesture, comprises:
obtaining the maximum material size and the maximum material weight of the material transmission channels, and respectively modeling each material transmission channel according to the maximum material size and the maximum material weight to obtain a channel model corresponding to each material transmission channel;
creating a first mapping relation between a single material transmission channel and a first candidate material grabbing gesture according to a channel model corresponding to each material transmission channel;
and calculating the relative position and direction between every two material transmission channels according to the channel model corresponding to each material transmission channel, and creating a second mapping relation between every two material transmission channels and a second candidate material grabbing gesture according to the relative position and direction.
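
The two mapping relations of claim 3 could look as follows; the channel model built from maximum size and weight is elided here, and the straight-down poses and the midpoint/yaw rule for channel pairs are illustrative assumptions:

```python
import math
from itertools import combinations

def first_mapping(lane):
    # First mapping relation: one candidate grabbing gesture per channel,
    # approaching straight down over the channel centre.
    cx = (lane["inlet"][0] + lane["outlet"][0]) / 2
    cy = (lane["inlet"][1] + lane["outlet"][1]) / 2
    return {"x": cx, "y": cy, "z": 0.25, "yaw": 0.0}

def second_mapping(lane_a, lane_b):
    # Second mapping relation: gesture for a channel pair at the midpoint, yawed
    # along the relative direction between the two channel centres.
    pa, pb = first_mapping(lane_a), first_mapping(lane_b)
    yaw = math.atan2(pb["y"] - pa["y"], pb["x"] - pa["x"])
    return {"x": (pa["x"] + pb["x"]) / 2, "y": (pa["y"] + pb["y"]) / 2,
            "z": 0.25, "yaw": yaw}

lanes = [{"id": 0, "inlet": (0.0, 0.0), "outlet": (0.0, 1.0)},
         {"id": 1, "inlet": (0.2, 0.0), "outlet": (0.2, 1.0)}]
single_map = {lane["id"]: first_mapping(lane) for lane in lanes}
pair_map = {(a["id"], b["id"]): second_mapping(a, b) for a, b in combinations(lanes, 2)}
print(single_map[0], pair_map[(0, 1)])
```
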
4. The visual positioning method of an industrial robot according to claim 1, wherein performing visual positioning on the at least two adjacent target materials to generate the visual positioning information of each target material, matching the target materials with the mapping relations of the plurality of material transmission channels according to the visual positioning information, and generating the first target material grabbing gesture and the material grabbing sequence of each target material, comprises:
performing three-dimensional scanning on the at least two adjacent target materials to obtain three-dimensional point cloud data of the at least two target materials;
performing visual positioning according to the three-dimensional point cloud data to obtain the visual positioning information of each target material;
extracting the shape and the characteristics of each target material according to the visual positioning information of each target material, and calculating the relative positions of the at least two target materials according to the shape and the characteristics;
determining the material grabbing sequence of the at least two target materials according to the relative positions;
and matching the mapping relation between the target material and the material transmission channels according to the visual positioning information.
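
A sketch of the positioning and ordering steps of claim 4, assuming the raw scan has already been segmented into one point cloud per material; shape and feature extraction is reduced to a centroid and an axis-aligned extent, purely for illustration:

```python
import numpy as np

def locate(cloud):
    # Visual positioning information for one material: centroid plus axis-aligned
    # extent, standing in for the claim's shape and feature extraction.
    return {"centroid": cloud.mean(axis=0), "extent": np.ptp(cloud, axis=0)}

def grab_order(infos):
    # Material grabbing sequence from relative positions: nearest along the feed
    # axis (y here) is grabbed first.
    return sorted(range(len(infos)), key=lambda i: infos[i]["centroid"][1])

def match_channel(info, single_map):
    # Match the material to the channel whose candidate gesture is closest in x.
    return min(single_map, key=lambda cid: abs(single_map[cid]["x"] - info["centroid"][0]))

rng = np.random.default_rng(0)
clouds = [rng.random((100, 3)) * 0.05 + np.array([0.0, 0.5, 0.0]),   # material A
          rng.random((100, 3)) * 0.05 + np.array([0.2, 0.3, 0.0])]   # material B
infos = [locate(c) for c in clouds]
single_map = {0: {"x": 0.0}, 1: {"x": 0.2}}
print(grab_order(infos))                              # [1, 0]: B before A
print([match_channel(i, single_map) for i in infos])  # [0, 1]
```
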
5. The visual positioning method of an industrial robot according to claim 1, wherein acquiring the weight information and the material attribute information of each target material and calculating the vibration compensation parameters of each target material according to the weight information and the material attribute information comprises:
respectively acquiring weight information of each target material based on a weighing sensor in the feeding transmission platform, and scanning material attribute information of each target material through the industrial robot;
setting a vibration compensation coefficient of each target material according to the material attribute information;
and inputting the vibration compensation coefficient and the weight information into a preset vibration compensation model to perform vibration compensation calculation, so as to obtain vibration compensation parameters of each target material.
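
Claim 5 does not specify the vibration compensation model itself; the placeholder below assumes a lookup table of attribute coefficients and simple weight scaling, purely for illustration:

```python
import math

ATTRIBUTE_COEFFS = {"rigid": 0.2, "elastic": 0.6, "fragile": 0.9}  # assumed table

def vibration_parameters(weight_kg, attribute):
    # Placeholder compensation model: sway amplitude grows with the attribute
    # coefficient and the weight; settling time with the square root of the weight.
    c = ATTRIBUTE_COEFFS.get(attribute, 0.5)
    return {"amplitude_m": c * weight_kg * 0.002, "settle_s": c * math.sqrt(weight_kg)}

print(vibration_parameters(1.2, "elastic"))  # {'amplitude_m': 0.00144, 'settle_s': ~0.657}
```
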
6. The visual positioning method of an industrial robot according to claim 1, wherein fitting the vibration compensation parameters and the first target material grabbing gesture to obtain the second target material grabbing gesture of each target material comprises:
combining the vibration compensation parameters and the first target material grabbing gesture to obtain a gesture-merged data model of each target material;
performing gesture fitting according to the gesture-merged data model to obtain gesture information of each target material after fitting;
and calculating the second target material grabbing gesture of each target material according to the gesture information of each target material after fitting.
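
A sketch of the merge-then-fit step of claim 6; here "fitting" is reduced to lifting the approach height by the expected sway amplitude, which is an assumption for illustration, not the patent's fitting procedure:

```python
def fit_second_gesture(first_gesture, compensation):
    # Merge the gesture and the compensation parameters into one record (the
    # gesture-merged data model), then derive the corrected gesture: raise the
    # approach height by the expected sway amplitude.
    merged = {**first_gesture, **compensation}
    return {"x": merged["x"], "y": merged["y"],
            "z": merged["z"] + merged["amplitude_m"], "yaw": merged["yaw"]}

first = {"x": 0.2, "y": 0.3, "z": 0.25, "yaw": 0.0}
comp = {"amplitude_m": 0.00144, "settle_s": 0.66}
print(fit_second_gesture(first, comp))  # z lifted from 0.25 to 0.25144
```
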
7. The visual positioning method of an industrial robot according to claim 1, wherein generating the target action execution instruction of the industrial robot according to the second target material grabbing gesture of each target material and the material grabbing sequence, and sequentially placing the at least two target materials in a target feeding area according to the target action execution instruction, comprises:
generating a grabbing parameter set of the industrial robot according to the second target material grabbing gesture of each target material and the material grabbing sequence;
according to a preset execution instruction generation algorithm, performing instruction conversion on the grabbing parameter set to generate a target action execution instruction of the industrial robot;
and sequentially placing the at least two target materials in a target feeding area according to the target action execution instruction.
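
The instruction conversion of claim 7 might be sketched as below; the MOVE/GRIP vocabulary is an invented controller dialect standing in for whatever execution instruction format the real robot uses:

```python
def to_instructions(gestures_in_order, drop_zone):
    # Convert the grabbing parameter set into a flat controller program: for each
    # material, move to its compensated gesture, grip, carry to the feeding area,
    # and release.
    program = []
    for g in gestures_in_order:
        program += [
            f"MOVE {g['x']:.3f} {g['y']:.3f} {g['z']:.3f} YAW {g['yaw']:.2f}",
            "GRIP CLOSE",
            f"MOVE {drop_zone[0]:.3f} {drop_zone[1]:.3f} {drop_zone[2]:.3f} YAW 0.00",
            "GRIP OPEN",
        ]
    return program

gestures = [{"x": 0.2, "y": 0.3, "z": 0.251, "yaw": 0.0},
            {"x": 0.0, "y": 0.5, "z": 0.251, "yaw": 0.0}]
for line in to_instructions(gestures, (0.5, 0.0, 0.10)):
    print(line)
```
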
8. A visual positioning device of an industrial robot, characterized in that the visual positioning device of an industrial robot comprises:
the dividing module is used for dividing a feeding transmission platform of a preset industrial robot into material transmission channels to obtain a plurality of material transmission channels;
the creation module is used for creating a first mapping relation between a single material transmission channel and a first candidate material grabbing gesture and creating a second mapping relation between every two material transmission channels and a second candidate material grabbing gesture;
the visual positioning module is used for performing visual positioning on at least two adjacent target materials, generating visual positioning information of each target material, matching mapping relations between the target materials and the material transmission channels according to the visual positioning information, and generating a first target material grabbing gesture and a material grabbing sequence of each target material;
the calculating module is used for acquiring weight information and material attribute information of each target material and calculating vibration compensation parameters of each target material according to the weight information and the material attribute information;
the fitting module is used for fitting the vibration compensation parameters and the first target material grabbing gesture to obtain a second target material grabbing gesture of each target material;
and the execution module is used for generating a target action execution instruction of the industrial robot according to the second target material grabbing gesture of each target material and the material grabbing sequence, and sequentially placing the at least two target materials in a target feeding area according to the target action execution instruction.
9. A visual positioning apparatus of an industrial robot, characterized in that the visual positioning apparatus of an industrial robot comprises: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invokes the instructions in the memory to cause the visual positioning device of the industrial robot to perform the visual positioning method of the industrial robot of any of claims 1-7.
10. A computer readable storage medium having instructions stored thereon, which when executed by a processor implement the visual positioning method of an industrial robot according to any one of claims 1-7.
CN202310754410.4A 2023-06-26 2023-06-26 Visual positioning method of industrial robot Active CN116494248B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310754410.4A CN116494248B (en) 2023-06-26 2023-06-26 Visual positioning method of industrial robot

Publications (2)

Publication Number Publication Date
CN116494248A 2023-07-28
CN116494248B 2023-08-29

Family

ID=87316907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310754410.4A Active CN116494248B (en) 2023-06-26 2023-06-26 Visual positioning method of industrial robot

Country Status (1)

Country Link
CN (1) CN116494248B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109623821A * 2018-12-26 2019-04-16 Shenzhen Yuejiang Technology Co., Ltd. Visual guiding method for grasping articles with a mechanical hand
CN111843981A * 2019-04-25 2020-10-30 Guangzhou Institute of Advanced Technology, Chinese Academy of Sciences Multi-robot cooperative assembly system and method
US20210086364A1 * 2019-09-20 2021-03-25 Nvidia Corporation Vision-based teleoperation of dexterous robotic system
US20210188554A1 * 2019-12-19 2021-06-24 Nimble Robotics, Inc. Robotic System Having Shuttle
CN113911728A * 2021-09-30 2022-01-11 Jiangsu Baiwang Oral Care Products Co., Ltd. Electric toothbrush brush head dynamic feeding system and feeding method based on vision
CN114378825A * 2022-01-21 2022-04-22 Sichuan Changhong Intelligent Manufacturing Technology Co., Ltd. Multi-camera visual positioning method and system and electronic equipment

Also Published As

Publication number Publication date
CN116494248B (en) 2023-08-29

Similar Documents

Publication Publication Date Title
CN112476434B (en) Visual 3D pick-and-place method and system based on cooperative robot
CN107741234B (en) Off-line map construction and positioning method based on vision
US11276194B2 (en) Learning dataset creation method and device
CN107063228B (en) Target attitude calculation method based on binocular vision
JP5458885B2 (en) Object detection method, object detection apparatus, and robot system
JP5787642B2 (en) Object holding device, method for controlling object holding device, and program
JP5671281B2 (en) Position / orientation measuring apparatus, control method and program for position / orientation measuring apparatus
JP6004809B2 (en) Position / orientation estimation apparatus, information processing apparatus, and information processing method
KR102056664B1 (en) Method for work using the sensor and system for performing thereof
JP5480667B2 (en) Position / orientation measuring apparatus, position / orientation measuring method, program
CN110378325B (en) Target pose identification method in robot grabbing process
CN111476841B (en) Point cloud and image-based identification and positioning method and system
Dawod et al. BIM-assisted object recognition for the on-site autonomous robotic assembly of discrete structures
Horváth et al. Point cloud based robot cell calibration
Yoon et al. 3D position estimation of drone and object based on QR code segmentation model for inventory management automation
CN116909208B (en) Shell processing path optimization method and system based on artificial intelligence
CN116494248B (en) Visual positioning method of industrial robot
CN113778096A (en) Positioning and model building method and system for indoor robot
JP5462662B2 (en) Position / orientation measurement apparatus, object identification apparatus, position / orientation measurement method, and program
CN116977434A (en) Target behavior tracking method and system based on tracking camera
Kim et al. Structured light camera base 3D visual perception and tracking application system with robot grasping task
CN116690988A (en) 3D printing system and method for large building model
Fan et al. An automatic robot unstacking system based on binocular stereo vision
KR102452315B1 (en) Apparatus and method of robot control through vision recognition using deep learning and marker
Bürkle et al. Computer vision based control system of a piezoelectric microrobot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant