CN115026822B - Industrial robot control system and method based on feature point docking - Google Patents

Industrial robot control system and method based on feature point docking

Info

Publication number
CN115026822B
Authority
CN
China
Prior art keywords
identification
template
module
image information
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210669671.1A
Other languages
Chinese (zh)
Other versions
CN115026822A (en)
Inventor
陈统书
何志雄
詹友军
Current Assignee
Guangdong Tiantai Robot Co Ltd
Original Assignee
Guangdong Tiantai Robot Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Tiantai Robot Co Ltd filed Critical Guangdong Tiantai Robot Co Ltd
Priority to CN202210669671.1A
Publication of CN115026822A
Application granted
Publication of CN115026822B
Active legal status
Anticipated expiration

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1679: Programme controls characterised by the tasks executed
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B23: MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23K: SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
    • B23K37/00: Auxiliary devices or processes, not specially adapted to a procedure covered by only one of the preceding main groups
    • B23K37/02: Carriages for supporting the welding or cutting element
    • B23K37/0252: Steering means
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science
  • Mechanical Engineering
  • Physics & Mathematics
  • Optics & Photonics
  • Robotics
  • Image Analysis

Abstract

A system and a method for controlling an industrial robot based on feature point docking are disclosed. The system comprises an industrial robot and a position acquisition module; the industrial robot is provided with a welding mechanical arm, and the welding mechanical arm is electrically connected with the position acquisition module. The position acquisition module is arranged at the front end of the transmission crawler belt, acquires the Y-axis coordinate of each part as it enters the transmission crawler belt, and sends the Y-axis coordinates one by one to the welding mechanical arm. The welding mechanical arm comprises an image acquisition module, an identification module and a control module. The image acquisition module receives the Y-axis coordinate sent by the position acquisition module and limits its acquisition range to that Y-axis coordinate. Because the Y-axis coordinate of each part on the transmission crawler belt is collected in advance by the position acquisition module, and the image acquisition range of the welding mechanical arm is limited to that coordinate, the number of parts the welding mechanical arm must identify is reduced and its identification speed is increased.

Description

Industrial robot control system and method based on feature point docking
Technical Field
The invention relates to the technical field of robot control, in particular to an industrial robot control system and method based on characteristic point docking.
Background
Continuous progress in modern science and technology, in particular the improved processing capacity of sensors and actuators, the development of computer technology, and advances in mechanical design and numerically controlled machine tools, has promoted the rapid development of robots. Intelligent robots are widely used in engineering, manufacturing, daily life and other fields, and by virtue of their miniaturization, intelligence and high flexibility they can replace humans in welding tasks.
However, when welding fine parts, several parts may fall into the identification area of the mechanical arm at once because the parts are small. The identification module of the mechanical arm then captures multiple parts, which increases the number of parts to be identified and prolongs the identification time; meanwhile the parts continue to be driven forward by the transmission crawler belt, so by the time identification finishes a part may already have been carried out of the welding range of the mechanical arm.
Another inconvenience is that parts are placed on the transmission crawler belt at random orientations. Even when a part is identified as one that needs welding, the positions of its welding points change with its orientation, so the mechanical arm cannot weld the part's welding points accurately; manual intervention in part placement is needed to alleviate the problem.
Therefore, a robot system that can improve identification speed and accurately acquire welding points is urgently needed.
Disclosure of Invention
In view of the above drawbacks, the present invention provides an industrial robot control system and method based on feature point docking, which can precisely weld a target part on a moving transmission crawler belt.
In order to achieve the purpose, the invention adopts the following technical scheme: an industrial robot control system based on feature point docking comprises an industrial robot and a position acquisition module, wherein the industrial robot is provided with a welding mechanical arm, and the welding mechanical arm is electrically connected with the position acquisition module;
the position acquisition module is arranged at the front end of the transmission crawler belt and used for acquiring Y-axis coordinates of each part on the transmission crawler belt when the parts enter the transmission crawler belt and sending the Y-axis coordinates of each part to the welding mechanical arm one by one;
the welding mechanical arm comprises an image acquisition module, an identification module and a control module;
the image acquisition module is used for receiving the Y-axis coordinate sent by the position acquisition module and limiting the acquisition range of the image acquisition module at the Y-axis coordinate;
the image acquisition module is also used for shooting the part to acquire the image information of the part and sending the image information to the identification module when the part enters the acquisition range of the image acquisition module;
the identification module is used for receiving the image information, identifying the part by adopting a plurality of identification template pictures at different angles, judging whether the part is a target part needing to be welded or not, if so, acquiring a template picture with the highest matching score with the target part in the identification template pictures as a coordinate selection template picture, rotating and displacing the coordinate selection template picture until the coincidence degree of the coordinate selection template picture and the target part meets a threshold value, selecting welding coordinates of a welding point from the coordinate selection template picture, and sending the welding coordinates to the control module;
the control module receives the welding coordinates, obtains the movement speed of the transmission crawler, controls the welding mechanical arm to move to the welding coordinates, updates the X-axis coordinates according to the movement speed of the transmission crawler, and performs tracking welding on the target part.
Preferably, the identification module comprises a preparation module;
the preparation module comprises a template making sub-module, an identification feature extraction sub-module and a storage sub-module;
the template making submodule is used for making a plurality of template drawings, wherein each template drawing corresponds to a different integer angle;
the identification feature extraction submodule comprises a gradient quantization unit and a lifting unit;
the gradient quantization unit is used for performing first-layer pyramid direction gradient quantization and second-layer pyramid direction gradient quantization on the template pictures to respectively obtain identification features corresponding to the template pictures;
the lifting unit is used for collating the acquired identification features into lists keyed by the current angle;
the storage submodule is used for storing all the identification features in the lists for the different angles.
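The preparation stage above, quantizing gradient directions for each template map and keeping a second, coarser pyramid level, can be sketched as follows. This is an illustrative pure-Python sketch, not the patent's implementation: 8 direction bins over 180 degrees, one-hot bitmask encoding, and a 2 x 2 mean pooling for the second pyramid level are all assumptions.

```python
import math

def quantize_orientations(img, bins=8):
    """Quantize the gradient orientation of a grayscale image (list of rows)
    into `bins` directions over 180 degrees (gradient sign ignored), encoding
    each pixel as a one-hot bitmask. Illustrative helper, not the patent's code."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]    # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            if gx == 0 and gy == 0:
                continue                           # flat region: no feature
            ang = math.atan2(gy, gx) % math.pi     # fold sign into 0..pi
            out[y][x] = 1 << (int(ang / math.pi * bins) % bins)
    return out

def downsample(img):
    """Second pyramid level: 2 x 2 mean pooling of the raw image, which is
    then quantized again with quantize_orientations()."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1]
              + img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) // 4
             for x in range(w)] for y in range(h)]
```

Each template map at each integer angle would be passed through both levels, and the surviving one-hot masks form that angle's identification features.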
Preferably, the identification module further comprises a matching module;
the matching module comprises a processing submodule and a score calculating submodule;
the processing submodule comprises: the system comprises a pyramid linear memory data container processing unit, a translation unit and a similarity response matrix diagram acquisition unit;
the pyramid linear memory data container processing unit is used for performing gradient extraction and quantization on the image information, creating two layers of pyramid linear memory data containers, and traversing the image information through the data of the two pyramid layers;
the translation unit is used for shifting the quantized gradient map of the image information bit by bit within a 4 x 4 range and performing a bitwise OR, pixel by pixel, over the resulting 16 maps to obtain the diffusion gradient matrix map of the image information after gradient diffusion;
the similarity response matrix map acquisition unit is used for splitting the diffusion gradient matrix map of the image information, by AND operations, into gradient matrix maps for the first four directions and the last four directions, and finding, through a preset lookup table, the maximum similarity between each gradient matrix map angle and each angle in the table, wherein the lookup table is pre-computed over the possible combinations of the 8 directions;
the 8 similarity response matrix maps so acquired are converted into a format of order 16 or 64 and stored, linearized, in the continuous linear memory data containers;
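The gradient-diffusion step, shifting the quantized map bit by bit within a 4 x 4 range and OR-ing the 16 resulting maps pixel by pixel, can be sketched as below. The one-hot bitmask encoding of the 8 quantized directions is an assumption carried over from the quantization step, and the function name is illustrative.

```python
def spread_gradients(quantized, t=4):
    """Diffuse one-hot orientation bitmasks by OR-ing every shift inside a
    t x t window (t*t = 16 shifted copies for t = 4), so that matching
    tolerates small translations. `quantized` is a 2-D list of 8-bit masks."""
    h, w = len(quantized), len(quantized[0])
    out = [[0] * w for _ in range(h)]
    for dy in range(t):
        for dx in range(t):
            # OR the copy shifted by (dy, dx) into the output, pixel by pixel
            for y in range(h - dy):
                for x in range(w - dx):
                    out[y][x] |= quantized[y + dy][x + dx]
    return out
```

After diffusion, a pixel's mask records every orientation seen in its t x t neighbourhood, which is what the AND-based similarity response maps then consume.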
the score calculation submodule comprises a calling unit and a similarity calculation unit;
the calling unit is used for calling the two layers of pyramids in the storage submodule, finding access entries of linear memories of the two layers of pyramids according to 8 similarity response matrix diagrams, and acquiring identification features corresponding to each angle;
the similarity calculation unit is used for matching the image information and the identification features, obtaining the matching scores of the image information and the identification features of each angle, judging whether the highest matching score is larger than a threshold value, if so, judging that the content of the image information is a target part, and obtaining a template corresponding to the highest matching score as a coordinate selection template.
Preferably, the identification module further comprises a correction module;
the correction module comprises a target part extraction sub-module, an identification feature association sub-module and a rotation translation sub-module;
the target part extraction submodule is used for extracting the frame of the target part from the image information in a sub-pixel point mode to obtain a target frame and sending the target frame to the identification feature association submodule;
the identification feature association submodule is used for combining the identification features on the target frame and other identification features into a first identification point according to a proportion and finding out a second identification point corresponding to the first identification point on the template picture according to the first identification point;
acquiring the distance between each first identification point and its corresponding second identification point, and judging whether each such distance satisfies the distance threshold; acquiring the number of first identification points that satisfy the distance threshold, judging whether this number reaches the first number threshold, and if it is greater than the first number threshold, sending a correction instruction to the rotation and translation sub-module;
and the rotation and translation sub-module receives the correction instruction, substitutes the first identification point and the second identification point into a change matrix, and corrects the pose of the template graph to obtain a corrected template graph.
Preferably, the correction module further comprises a verification sub-module;
the verification submodule is used for acquiring the number of times of modifying the pose of the current template drawing and the distances between all the first identification points and the second identification points, and judging the number of the second identification points meeting the distance threshold value on the modified template drawing and the number of times of modifying the template drawing;
when the number of second identification points satisfying the distance threshold is less than the second number threshold and the number of template map corrections is less than the times threshold, the current change matrix is updated with the previous change matrix, a correction instruction is sent again, and the template map continues to be corrected until the number of second identification points satisfying the distance threshold is greater than the second number threshold or the number of template map corrections equals the times threshold.
An industrial robot control method based on feature point docking is applied to an industrial robot control system based on feature point docking, wherein the industrial robot is provided with a welding mechanical arm, and the method comprises the following steps:
step S1: when the parts enter the transmission crawler belt, acquiring Y-axis coordinates of each part on the transmission crawler belt, and sending the Y-axis coordinates of each part to the welding mechanical arm one by one;
step S2: receiving the Y-axis coordinate by the welding mechanical arm, and limiting the image acquisition range of the welding mechanical arm to the Y-axis coordinate;
when a part enters an image acquisition range of the welding mechanical arm, shooting the part by the welding mechanical arm, acquiring image information of the part, and identifying the image information;
the image information identification step comprises the following steps:
receiving the image information, identifying the part by adopting a plurality of identification template pictures at different angles, judging whether the part is a target part needing to be welded, if so, obtaining a template picture which is matched with the target part and has the highest score in the identification template pictures as a coordinate selection template picture, rotating and displacing the coordinate selection template picture until the coincidence degree of the coordinate selection template picture and the target part meets a threshold value, and selecting the welding coordinate of a welding point from the coordinate selection template picture;
and step S3: and receiving the welding coordinate and acquiring the movement speed of the transmission crawler, moving the welding mechanical arm to the welding coordinate, updating the X-axis coordinate according to the movement speed of the transmission crawler, and tracking and welding the target part.
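The tracking weld of step S3 reduces to a small kinematic update: the part keeps moving with the belt while the arm travels, so the X-axis coordinate recorded at identification time must be advanced by the belt's movement speed times the elapsed time. The sign convention (+X along the transport direction) and millimetre/second units below are assumptions for illustration.

```python
def update_weld_x(x0, belt_speed, elapsed):
    """Advance the X-axis welding coordinate recorded at identification time
    by the distance the transmission crawler belt has carried the part
    (belt_speed * elapsed). +X along the transport direction is assumed."""
    return x0 + belt_speed * elapsed
```

For example, a part whose welding point was identified at x = 120 mm on a belt moving at 50 mm/s would be targeted at `update_weld_x(120, 50, 0.4)` after 0.4 s of arm motion; the control module re-evaluates this as it moves.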
Preferably, the following steps are also required to be executed before step S2:
step A1: making a plurality of template drawings, wherein each template drawing corresponds to a different integer angle;
step A2: performing first-layer pyramid direction gradient quantization and second-layer pyramid direction gradient quantization on the plurality of template maps to obtain the identification features corresponding to each template map, collating the identification features into lists keyed by the current angle, and storing them;
step A3: storing all the identification features in the lists for the different angles.
Preferably, the specific steps of selecting the coordinate selection template map in step S2 are as follows:
step B1: performing gradient extraction and quantization on the image information, creating two layers of pyramid linear memory data containers, and traversing the image information through the data of the two pyramid layers;
and step B2: shifting the quantized gradient map of the image information bit by bit within a 4 x 4 range, and performing a bitwise OR, pixel by pixel, over the resulting 16 maps to obtain the diffusion gradient matrix map of the image information after gradient diffusion;
converting the diffusion gradient matrix map of the image information, by AND operations, into gradient matrix maps for the first four directions and the last four directions, and finding, through a preset lookup table, the maximum similarity between each gradient matrix map angle and each angle in the table, wherein the lookup table is pre-computed over the possible combinations of the 8 directions;
acquiring the 8 similarity response matrix maps, converting them into a format of order 16 or 64, and storing them, linearized, in the continuous linear memory data containers;
and step B3: finding an access entry of a linear memory of the two layers of pyramids by using the two layers of pyramids in the storage submodule according to the 8 similarity response matrix diagrams, and acquiring identification characteristics corresponding to each angle;
and step B4: matching the image information and the identification features, acquiring the matching score of the image information and the identification features of each angle, judging whether the highest matching score is greater than a threshold value, if so, judging that the content of the image information is a target part, and acquiring a template corresponding to the highest matching score as a coordinate selection template;
the matching score is calculated in the following manner:
$$\varepsilon(Q,T,c)=\sum_{p\in P}\big|\cos\big(o(T,p)-o(Q,c+p)\big)\big|$$
where Q is the input image information, T represents the template map, c is the position of the template map in the input image information, P represents the neighbourhood centred on c, p is the offset position within P, o is the recognition feature (the quantized gradient orientation at that offset), and ε(Q, T, c) is the matching score.
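A minimal sketch of this score follows, assuming the 8 orientation bins span 180 degrees and that gradient diffusion has already folded the local maximum into the image's one-hot bitmasks; the template format (a list of offset/bin pairs) and the function name are illustrative.

```python
import math

def matching_score(spread_image, template, c):
    """For each template feature (offset p, orientation bin o), take the best
    |cos| agreement with the diffused orientations of the input image around
    position c, and sum over the template. Illustrative sketch only."""
    cy, cx = c
    score = 0.0
    for (py, px), o in template:              # template: [((dy, dx), bin), ...]
        mask = spread_image[cy + py][cx + px]  # diffused one-hot bitmask
        best = 0.0
        for b in range(8):
            if mask & (1 << b):
                # angular difference between bins, folded to 0..90 degrees
                diff = abs(o - b) % 8
                diff = min(diff, 8 - diff) * (math.pi / 8)
                best = max(best, abs(math.cos(diff)))
        score += best
    return score
```

In practice the per-bin cosine values are what the preset lookup table caches, so the inner loop becomes a single table access per pixel.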
Preferably, the specific steps of rotating and shifting the coordinate selection template map in step S2 are as follows:
step C1: extracting the frame of the target part from the image information in a sub-pixel point mode to obtain a target frame;
and step C2: combining the recognition features on the target frame and the rest recognition features into a first recognition point according to a proportion, and finding out a second recognition point corresponding to the first recognition point on the template picture according to the first recognition point;
acquiring the distance between each first identification point and its corresponding second identification point, and judging whether each such distance satisfies the distance threshold; acquiring the number of first identification points that satisfy the distance threshold, judging whether this number reaches the first number threshold, and if it is greater than the first number threshold, correcting the template map;
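The gate on correction, which counts how many first/second point pairs lie within the distance threshold and compares the count against the first number threshold, can be sketched as follows. Index-aligned pairing of first and second points, and the function name, are assumptions.

```python
import math

def correction_needed(first_pts, second_pts, dist_threshold, first_number_threshold):
    """Count the first/second identification point pairs whose distance is
    within `dist_threshold`; correction of the template map proceeds only
    when the count exceeds `first_number_threshold`."""
    n_close = sum(
        1 for (x1, y1), (x2, y2) in zip(first_pts, second_pts)
        if math.hypot(x1 - x2, y1 - y2) <= dist_threshold
    )
    return n_close > first_number_threshold
```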
and C3: and substituting the first identification point and the second identification point into a change matrix, and correcting the pose of the template graph to obtain a corrected template graph, wherein the calculation process of the change matrix is as follows:
substituting the coordinates of the first recognition point and the coordinates of the second recognition point into formula (1):
$$\varepsilon=\sum_i\big((R\,p_i+T-q_i)\cdot n_i\big)^2 \qquad (1)$$
wherein R is the rotation matrix,
$$T=\begin{pmatrix}x\\ y\end{pmatrix}$$
is the translation matrix, q_i and p_i are respectively the coordinates of the associated first and second identification feature points, n_i is the feature (normal) vector, i is a natural integer greater than 1, and ε is the change-matrix error;
then the minimum deflection angle r between the first identification points and the second identification points is obtained and substituted into formula (2), from which the minimum value of the rotation matrix R is calculated, wherein formula (2) is as follows:
$$R=\begin{pmatrix}\cos r&-\sin r\\ \sin r&\cos r\end{pmatrix}\approx\begin{pmatrix}1&-r\\ r&1\end{pmatrix} \qquad (2)$$
substituting the minimum value of the rotation matrix R back into formula (1) gives formula (3):
$$\varepsilon\approx\sum_i\big(c_i\,r+n_{i,x}\,x+n_{i,y}\,y+(p_i-q_i)\cdot n_i\big)^2 \qquad (3)$$
wherein c_i = p_i × n_i.
Taking the partial derivatives of formula (3), converting them into linear equations, and solving for the minimum deflection angle r, the minimum horizontal offset x and the minimum vertical offset y proceeds as follows:
the partial derivative formulas (4), with d_i = (p_i − q_i) · n_i, are:
$$\frac{\partial\varepsilon}{\partial r}=2\sum_i\big(c_i r+n_{i,x}x+n_{i,y}y+d_i\big)\,c_i=0$$
$$\frac{\partial\varepsilon}{\partial x}=2\sum_i\big(c_i r+n_{i,x}x+n_{i,y}y+d_i\big)\,n_{i,x}=0$$
$$\frac{\partial\varepsilon}{\partial y}=2\sum_i\big(c_i r+n_{i,x}x+n_{i,y}y+d_i\big)\,n_{i,y}=0$$
converting into linear equations, the minimum deflection angle r, the minimum horizontal offset x and the minimum vertical offset y are obtained from:
$$\begin{pmatrix}\sum_i c_i^2&\sum_i c_i n_{i,x}&\sum_i c_i n_{i,y}\\ \sum_i c_i n_{i,x}&\sum_i n_{i,x}^2&\sum_i n_{i,x}n_{i,y}\\ \sum_i c_i n_{i,y}&\sum_i n_{i,x}n_{i,y}&\sum_i n_{i,y}^2\end{pmatrix}\begin{pmatrix}r\\ x\\ y\end{pmatrix}=-\begin{pmatrix}\sum_i c_i d_i\\ \sum_i n_{i,x} d_i\\ \sum_i n_{i,y} d_i\end{pmatrix}$$
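Under a small-angle linearisation of the rotation, solving for the deflection angle and offsets (r, x, y) is a 3 x 3 least-squares problem. A self-contained pure-Python sketch follows; the function name, the index-aligned correspondences, and the use of Gaussian elimination are illustrative assumptions, with c_i = p_i × n_i taken as the 2-D scalar cross product.

```python
def solve_pose_correction(p_pts, q_pts, normals):
    """Least-squares point-to-plane alignment: build the 3 x 3 normal
    equations for the deflection angle r and offsets (x, y) that move the
    template points p_i toward the image points q_i along the normals n_i,
    then solve by Gaussian elimination. Illustrative sketch."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for (px, py), (qx, qy), (nx, ny) in zip(p_pts, q_pts, normals):
        ci = px * ny - py * nx                   # c_i = p_i x n_i (2-D cross)
        di = (px - qx) * nx + (py - qy) * ny     # d_i = (p_i - q_i) . n_i
        row = (ci, nx, ny)
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]       # accumulate normal equations
            b[i] -= row[i] * di
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda k: abs(A[k][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for k in range(col + 1, 3):
            f = A[k][col] / A[col][col]
            for j in range(col, 3):
                A[k][j] -= f * A[col][j]
            b[k] -= f * b[col]
    sol = [0.0] * 3
    for i in (2, 1, 0):                          # back substitution
        sol[i] = (b[i] - sum(A[i][j] * sol[j] for j in range(i + 1, 3))) / A[i][i]
    return tuple(sol)                            # (r, x, y)
```

With correspondences that differ by a pure translation, the solver returns a zero deflection angle and the exact offset, which is then applied to the template map's pose.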
preferably, the specific steps in the step S2 until the coincidence degree of the coordinate selection template map and the target part satisfies a threshold value are as follows:
and C4: acquiring the number of times of modifying the pose of the current template graph and the distances between all the first identification points and the second identification points, and judging the number of the second identification points meeting the distance threshold value on the modified template graph and the number of times of modifying the template graph;
when the number of second identification points satisfying the distance threshold is less than the second number threshold and the number of template map corrections is less than the times threshold, the change matrix is updated with the previous change matrix and the template map continues to be corrected, until the number of second identification points satisfying the distance threshold is greater than the second number threshold or the number of template map corrections equals the times threshold.
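The stopping rule of step C4 is a bounded fixed-point iteration: keep re-applying the pose correction while too few second identification points satisfy the distance threshold, and stop at either the count threshold or the times threshold. A sketch with caller-supplied callbacks (both callback names are illustrative):

```python
def refine_until_converged(apply_correction, count_close, second_number_threshold, times_threshold):
    """Re-apply the template-map pose correction until enough second
    identification points satisfy the distance threshold, or the number of
    corrections reaches the times threshold. Returns the iterations used."""
    iters = 0
    while count_close() < second_number_threshold and iters < times_threshold:
        apply_correction()   # one rotation/translation update of the template
        iters += 1
    return iters
```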
The above technical solutions have the following advantages or beneficial effects: 1. Because the Y-axis coordinate direction runs along the width of the transmission crawler belt, a part's Y-axis coordinate does not change while the crawler belt moves. The Y-axis coordinate of each part on the transmission crawler belt is collected in advance by the position acquisition module, and the image acquisition range of the welding mechanical arm is limited to that Y-axis coordinate, which reduces the number of parts the welding mechanical arm must identify and speeds up its identification.
2. Before the welding coordinates of the target part's welding points are obtained, the template map is additionally rotated and displaced so that it closes in on the target part, until the coincidence degree of the template map and the target part satisfies the threshold; at that point the template map and the target part can be regarded as almost coincident, so the welding coordinates of the welding points taken from the template map correspond to the real welding points of the target part. The X-axis value of the welding coordinate is then updated in good time according to the movement speed of the transmission crawler belt, realizing accurate tracking welding.
Drawings
FIG. 1 is a schematic block diagram of one embodiment of the system of the present invention.
FIG. 2 is a schematic diagram of an identification module in one embodiment of the system of the present invention.
FIG. 3 is a schematic diagram of a preparation module in one embodiment of the system of the present invention.
FIG. 4 is a schematic diagram of a matching module in one embodiment of the system of the present invention.
FIG. 5 is a schematic diagram of a correction module in one embodiment of the system of the present invention.
FIG. 6 is a flow chart of one embodiment of the method of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "axial", "radial", "circumferential", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used merely for convenience of description and for simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
As shown in fig. 1 to 6, an industrial robot control system based on characteristic point docking comprises an industrial robot and a position acquisition module, wherein the industrial robot is provided with a welding mechanical arm, and the welding mechanical arm is electrically connected with the position acquisition module;
the position acquisition module is arranged at the front end of the transmission crawler belt and used for acquiring Y-axis coordinates of each part on the transmission crawler belt when the parts enter the transmission crawler belt and sending the Y-axis coordinates of each part to the welding mechanical arm one by one;
in the invention, the planar coordinate system may be that the width direction of the transmission crawler belt is taken as the direction of the Y axis of the planar coordinate system, the transportation direction of the transmission crawler belt is taken as the direction of the X axis of the planar coordinate system, taking a transmission crawler belt moving from right to left as an example, the origin of the planar coordinate system is the angular point of the right end of the transmission crawler belt, and the measurement unit of the part is the length unit of the coordinate; the position acquisition module can be an intelligent camera, can identify a part entering the transmission crawler belt, and acquires a Y-axis coordinate of the part in a plane coordinate system by taking the center of the part as a coordinate point of the part; and because the Y-axis coordinate is arranged along the width direction of the transmission crawler, when the crawler moves, the part is not changed at the Y-axis coordinate, and at the moment, the Y-axis coordinate is only required to be sent to the welding mechanical arm in advance, so that the image acquisition range of the welding mechanical arm is limited at the Y-axis coordinate, the identification complexity of the welding mechanical arm can be reduced, and the identification speed of the welding mechanical arm is accelerated.
It should be noted that in the present application the position acquisition module performs only coarse, non-precise identification; it does not need to identify the type, shape or pose of the object. A camera with this complete function may be purchased as the position acquisition module, and the implementation of that function is not within the protection scope of the present application, so it is not explained further here.
The welding mechanical arm comprises an image acquisition module, an identification module and a control module;
the image acquisition module is used for receiving the Y-axis coordinate sent by the position acquisition module and limiting the acquisition range of the image acquisition module at the Y-axis coordinate;
the image acquisition module is a camera arranged on the welding mechanical arm, and its acquisition range is limited to the Y-axis coordinate as follows: after the image acquisition module receives the Y-axis coordinate sent by the position acquisition module, the welding mechanical arm moves the image acquisition module to the Y-axis coordinate and then lowers its shooting height, reducing the vertical distance between the image acquisition module and the transmission crawler belt to 15-20 cm. Lowering the shooting height shrinks the acquisition range, so fewer parts are captured in each image, the identification module processes fewer candidates, and data processing speeds up. Because the image acquisition module is positioned at the Y-axis coordinate, the corresponding part can be shot without being missed.
The image acquisition module is also used for shooting the part to acquire the image information of the part and sending the image information to the identification module when the part enters the acquisition range of the image acquisition module;
the identification module is used for receiving the image information, identifying the part by adopting a plurality of identification template pictures at different angles, judging whether the part is a target part needing to be welded or not, if so, acquiring a template picture with the highest matching score with the target part in the identification template pictures as a coordinate selection template picture, rotating and displacing the coordinate selection template picture until the coincidence degree of the coordinate selection template picture and the target part meets a threshold value, selecting welding coordinates of a welding point from the coordinate selection template picture, and sending the welding coordinates to the control module;
the control module receives the welding coordinates and obtains the movement speed of the transmission crawler, controls the welding mechanical arm to move to the welding coordinates, updates the X-axis coordinates according to the movement speed of the transmission crawler, and performs tracking welding on the target part.
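A minimal sketch of the tracking-weld update described above, under the coordinate frame set out earlier (X along the transport direction, Y across the belt); the function and parameter names are illustrative, not taken from the patent:

```python
def update_x(x_at_detection, belt_speed, elapsed_time):
    """Predict a part's current X coordinate from the belt speed.

    The Y coordinate is unchanged by belt motion; only X advances,
    so the arm can track the target part to the welding coordinates.
    """
    return x_at_detection + belt_speed * elapsed_time
```

The control module would re-evaluate this prediction continuously while moving the arm, so the weld lands on the target part even though the belt keeps moving.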
In addition, in the present application, the identification module has the function of identifying whether a part is a target part to be welded and of acquiring the welding point on the part. The identification module is provided with a plurality of identification template pictures at different angles, because parts are scattered on the transmission track at random. If parts were identified with only one template picture, mis-identification could occur: for example, if the template picture is oriented at positive 180 degrees while the part lies on the transmission track at negative 180 degrees, the identification features on the template picture are opposite to the identified feature positions of the part in the image information, and the score matching algorithm would compute a low matching score, so the target part could not be identified. With a plurality of template pictures at different angles, the part in the image information can be matched against all of them; the matching scores against the template pictures are calculated by the score matching algorithm, a matching score threshold is set, and when the threshold is met the part can be determined to be the target part.
At this point, although the part is identified as the target part, its welding point is not yet determined; the position of the welding point of the target part can be derived from the Y-axis coordinate and the coordinates of the welding point in the template picture. However, because of the pose deviation between the template picture and the target part, the derived welding point also deviates from the real one. For tiny parts in particular, this deviation is amplified relative to the part, and the welding mechanical arm cannot accurately place the weld on the target part.
According to the invention, before the welding coordinates of the welding point of the target part are obtained, the template picture is rotated and displaced to pull it onto the target part until their coincidence degree meets a threshold, at which point the template picture and the target part can be regarded as almost coincident. The welding coordinates extracted from the template picture then correspond to the actual welding point of the target part; the control module is driven to move to the welding coordinates, and the welding device on the welding mechanical arm welds the welding point of the target part accurately, realizing accurate and stable welding of tiny parts.
Preferably, the identification module comprises a preparation module;
the preparation module comprises a template making sub-module, an identification feature extraction sub-module and a storage sub-module;
the template making submodule is used for making a plurality of template drawings, wherein each template drawing corresponds to a different integer angle;
the identification feature extraction submodule comprises a gradient quantization unit and a lifting unit;
the gradient quantization unit is used for performing first-layer pyramid direction gradient quantization and second-layer pyramid direction gradient quantization on the template pictures to respectively obtain identification features corresponding to the template pictures;
the lifting unit is used for listing the acquired identification features under the current angle;
the storage submodule is used for storing all the identification features in the lists for the different angles.
Because a part is placed arbitrarily on the transmission track, its placement angle affects the recognition result: a certain placement angle may fail to be recognized, causing the part to be marked as a different part. For this reason, the identification module of the present application includes the preparation module, and the preparation module is provided with the template making submodule, in which N template pictures are made. N is decided according to the configuration of the identification module: when the configuration is high, the value of N can be raised accordingly, increasing the number of template pictures without affecting the recognition rate. After N is determined, the angle corresponding to each template picture is obtained as 360/N. By increasing the number of template pictures, the templates cover the placement angle of each target part as far as possible, avoiding the problem of the identification module failing to identify a target part because of its placement angle.
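As a concrete illustration of the 360/N rule above (function name illustrative), each of the N templates is assigned one evenly spaced angle:

```python
def template_angles(n):
    """Return the N template angles: template i corresponds to i * (360 / N) degrees."""
    step = 360.0 / n
    return [i * step for i in range(n)]
```

Raising N (when the identification module's configuration allows it) makes the angular step finer, so an arbitrarily placed part is always close to some template's angle.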
The gradient quantization unit can perform gradient quantization on the template graph, so that the identification features in the template graph can be better acquired. In one embodiment, the first-layer pyramid direction gradient quantization and the second-layer pyramid direction gradient quantization are performed as follows:
calculating the gradient image through the Sobel operator, wherein the template image is a three-channel image in one embodiment, and extracting the single-channel maximum-gradient-amplitude image matrix from the X-direction and Y-direction gradients through a gradient square-sum and non-maximum suppression algorithm;
obtaining an angle image matrix from the gradient image matrices in the X and Y directions;
quantizing the 0-to-360-degree range of the angle image matrix into 16 integers (0 to 15), then taking the remainder modulo 8 to fold them into 8 directions; acquiring the pixels in the amplitude image matrix that are larger than a threshold, then taking the quantized image matrix in the 3 x 3 neighborhood of each such pixel to form a histogram; when more than 5 pixels in the neighborhood share the same direction, assigning that direction to the pixel and shift-encoding the direction index as one bit from 00000001 to 10000000;
wherein the maximum-gradient-amplitude image matrix is calculated as:

$$\hat{C}(x) = \operatorname*{arg\,max}_{C \in \{R, G, B\}} \left\| \frac{\partial C}{\partial x} \right\|, \qquad ori(x) = ori\big(\hat{C}(x)\big)$$

where x represents a position in the image, $\partial C/\partial x$ is the gradient value of channel C at position x, {R, G, B} denote the R-channel, G-channel and B-channel, ori denotes the gradient direction, and c is the position of the template map in the input image information.
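A hedged numeric sketch of the quantization pipeline described above: the strongest channel's gradient is kept per pixel, its angle is quantized into 16 bins, folded modulo 8 into 8 directions, and shift-encoded as a one-hot byte. Finite differences stand in for the Sobel operator, and all names are illustrative:

```python
import numpy as np

def max_channel_gradient(img):
    """Keep, per pixel, the gradient of the channel with the largest amplitude."""
    gx = np.gradient(img, axis=1)          # X-direction gradient, per channel
    gy = np.gradient(img, axis=0)          # Y-direction gradient, per channel
    mag = gx ** 2 + gy ** 2                # squared gradient amplitude
    best = np.argmax(mag, axis=2)          # channel (R/G/B) with max amplitude
    rows, cols = np.indices(best.shape)
    return mag[rows, cols, best], np.arctan2(gy, gx)[rows, cols, best]

def quantize_orientation(angle_deg):
    """Quantize 0-360 degrees to 16 bins, then fold modulo 8 into 8 directions."""
    return (int(angle_deg // 22.5) % 16) % 8

def shift_encode(direction):
    """One-hot byte encoding: bit i (00000001 .. 10000000) for direction i."""
    return 1 << int(direction)
```

Folding modulo 8 makes opposite gradient directions (differing by 180 degrees) share a bin, which is what lets a dark-on-light and a light-on-dark edge match the same template feature.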
After gradient quantization, the identification features in the template picture differ markedly from the other pixels in pixel value. The process for identifying features in the application is therefore as follows: traverse the maximum-gradient-amplitude image matrix and, in each neighborhood, find the pixel with the maximum gradient amplitude; once found, set the gradient amplitudes of all other pixels in that neighborhood to zero;
judging whether the gradient amplitude of the maximum-amplitude pixel of each neighborhood is larger than a gradient amplitude threshold, and if so, marking that pixel as an identification feature;
acquiring the number of all identification features and judging whether it is larger than a quantity threshold; if so, adding all the identification features into a feature set and storing the feature set in the storage submodule; if not, judging whether an identification feature has at least one other identification feature within the distance threshold, and if so, rejecting both that identification feature and the identification feature within the distance threshold, otherwise storing the identification feature in the storage submodule.
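An illustrative sketch of the selection rule just described, assuming the candidates have already been reduced to neighborhood maxima; the function and parameter names are hypothetical:

```python
def select_features(candidates, amp_threshold, count_threshold, dist_threshold):
    """Keep amplitude-thresholded features; if too few, enforce spacing.

    candidates: list of ((x, y), amplitude) pairs, already local maxima.
    """
    feats = [p for p, a in candidates if a > amp_threshold]
    if len(feats) > count_threshold:
        return feats                       # enough features: keep them all
    kept = []
    for p in feats:                        # otherwise drop crowded features
        close = any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= dist_threshold ** 2
                    for q in kept)
        if not close:
            kept.append(p)
    return kept
```

The spacing pass only runs when features are scarce, matching the document's two-branch rule: a rich feature set is stored as-is, a sparse one is thinned of near-duplicates.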
The identification features in the storage submodule are stored in groups by angle. During identification, the identification module calls the identification features in the storage submodule and matches each group of identification features against the parts in the image information.
Preferably, the identification module further comprises a matching module;
the matching module comprises a processing submodule and a score calculating submodule;
the processing submodule comprises: the system comprises a pyramid linear memory data container processing unit, a translation unit and a similarity response matrix image acquisition unit;
the pyramid linear memory data container processing unit is used for carrying out gradient extraction and quantization on the image information, creating two layers of pyramid linear memory data containers, and traversing the image information through the data of the two pyramid layers;
the pyramid traversal process specifically comprises the following steps: acquiring the magnitude of a target gradient diffusion translation value, and acquiring a pyramid linear memory data container;
the translation unit is used for performing bitwise shifts of the quantized gradient of the image information within a 4 x 4 range, and executing a pixel-by-pixel OR operation on the 16 resulting images to obtain the diffusion gradient matrix image of the image information after gradient diffusion;
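A hedged sketch of that diffusion step: OR the one-hot quantized gradient image with shifted copies of itself over a 4 x 4 window, so each pixel accumulates the direction bits present in its neighborhood (the shift direction and function name are illustrative):

```python
import numpy as np

def diffuse_gradients(quantized, spread=4):
    """OR each pixel's direction byte into a spread x spread neighborhood."""
    h, w = quantized.shape
    out = np.zeros_like(quantized)
    for dy in range(spread):
        for dx in range(spread):
            # out[y + dy, x + dx] |= quantized[y, x]
            out[dy:, dx:] |= quantized[:h - dy, :w - dx]
    return out
```

After diffusion, a template feature still matches even when the target part's edge is a few pixels away from the template position, which is what makes the subsequent response-map lookup tolerant to small misalignments.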
the similarity response matrix image acquisition unit is used for converting the diffusion gradient matrix image of the image information into gradient matrix images in the first four directions and the last four directions through an AND operation, and finding, through a preset lookup table, the maximum similarity between the angle of each gradient matrix image and each angle in the table, wherein the lookup table is a pre-calculated table of the possible combinations of the 8 directions;
a similarity response matrix diagram is thus obtained for each direction; since the diffusion gradient matrix diagram has 8 directions, 8 similarity response matrix diagrams are obtained, all of which are converted into a 16-order or 64-order format and stored in linearized form in the continuous linear memory data container;
the lookup manner of the lookup table with 8 similarity directions is as follows:
$$T_i[L] = \max_{l \in L} \left| \cos\big(ori(i) - ori(l)\big) \right|$$

where i is the index of the quantization direction, L is the set of directions at a location in the diffusion gradient matrix map, l is a direction in that set, ori(·) is the angle of a direction index, and $T_i[L]$ is the similarity response matrix map;
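A hedged sketch of that lookup: for quantized direction i and a byte L whose set bits are the directions present after diffusion, return the best |cos| similarity between i and any direction in L. The 8 directions are taken to span 180 degrees, an assumption consistent with the modulo-8 folding described earlier:

```python
import math

def response(i, L, n_dirs=8):
    """Best |cos| similarity between direction index i and the set bits of L."""
    step = math.pi / n_dirs            # 8 folded directions span 180 degrees
    best = 0.0
    for l in range(n_dirs):
        if L & (1 << l):               # direction l is present at this pixel
            best = max(best, abs(math.cos((i - l) * step)))
    return best
```

In practice all 8 x 256 values would be precomputed into the lookup table the document describes, so the per-pixel cost at match time is a single table read.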
the score calculation submodule comprises a calling unit and a similarity calculation unit;
the calling unit is used for calling the two layers of pyramids in the storage submodule, finding access entries of linear memories of the two layers of pyramids according to the 8 similarity response matrix graphs, and acquiring identification features corresponding to all angles;
Data of the identification features for the two pyramid layers of the first template picture are acquired from the storage submodule, and the similarity response matrix maps in the 8 directions are acquired in the pyramid linear memory data container of the second layer of the image information. The access entry of the linear memory data container in the corresponding direction is found according to the information of the second-layer pyramid identification features; the similarity at the corresponding positions is calculated through iterative loops over the computed template-picture position-range information with MIPP accumulation, giving the matching similarity matrix of a single identification feature of the second pyramid layer of the first template picture. All identification features of the second pyramid layer of the first template picture are traversed to obtain the second-layer matching similarity matrix data set; the data in the set are converted to a base-100 (percentage) scale to obtain the score of each matching similarity matrix, and the matching similarity matrices with scores smaller than the threshold are removed.
The similarity calculation unit is used for matching the image information and the identification features, obtaining the matching scores of the image information and the identification features of each angle, judging whether the highest matching score is larger than a threshold value, if so, judging that the content of the image information is a target part, and obtaining a template corresponding to the highest matching score as a coordinate selection template.
From the identification feature positions corresponding to the first pyramid layer, the positions of the second-layer identification features retained by the calling unit are selected. A linear similarity matrix map is selected for a given one of the 8 directions of the first-layer pyramid target detection map; the similarity response matrix maps in the 8 directions are found in the linear memory data container of the first pyramid layer, and the access entry of the linear memory data container in the corresponding direction is found according to the information of the first-layer pyramid identification features. The similarity at the corresponding positions is calculated through iterative loops over the computed template-picture position-range information with MIPP accumulation, giving the matching similarity matrix of a single identification feature of the first pyramid layer of the first template picture. All identification features of the first pyramid layer of the first template picture are traversed to obtain the first-layer matching similarity matrix data set; the data are converted to a base-100 (percentage) scale to obtain the score of each matching similarity matrix, and the scores of the first-layer and second-layer identification features of the template picture are recorded as the matching score;
the matching score is calculated in the following manner:
$$\varepsilon(Q, T, c) = \sum_{o \in T} \max_{p \in P} \left| \cos\big(ori(T, o) - ori(Q, c + o + p)\big) \right|$$

where Q is the input image information, T represents the template map, c is the position of the template map in the input image information, P represents the neighborhood centered on c + o, p is an offset position within that neighborhood, o is an identification feature, and ε(Q, T, c) is the matching score.
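A hedged sketch of that score: for each template identification feature o, take the best |cos| orientation match found in a small neighborhood around position c + o in the image, and sum over features. The sparse-dict image representation and all names are illustrative:

```python
import math

def matching_score(image_ori, template_feats, c, radius=1):
    """Sum, over template features, the best local |cos| orientation match.

    image_ori: dict (x, y) -> gradient orientation in radians.
    template_feats: list of ((dx, dy), orientation) identification features.
    c: candidate template position in the image.
    """
    score = 0.0
    for (dx, dy), t_ori in template_feats:
        best = 0.0
        for px in range(-radius, radius + 1):   # neighborhood P around c + o
            for py in range(-radius, radius + 1):
                q = image_ori.get((c[0] + dx + px, c[1] + dy + py))
                if q is not None:
                    best = max(best, abs(math.cos(t_ori - q)))
        score += best
    return score
```

Dividing by the number of features and comparing against the matching score threshold would give the pass/fail decision the document describes.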
In an embodiment of the present application, a score of similarity between image information and a template map, i.e., a matching score, is calculated by acquiring an identification feature in the template map. And when the matching score is larger than the threshold value, indicating that the content of the image information is the target part.
Preferably, the identification module further comprises a correction module;
the correction module comprises a target part extraction sub-module, an identification feature association sub-module and a rotation translation sub-module;
the target part extraction submodule is used for extracting the frame of the target part from the image information in a sub-pixel point mode to obtain a target frame and sending the target frame to the identification feature association submodule;
in one embodiment, the implementation process of obtaining the target frame is as follows:
The edge point set of the target part in the image information is collected through a Canny operator; a binary quadratic polynomial is fitted to the edge point set, and its coefficients are solved through a facet model to obtain a Hessian matrix. Solving the Hessian matrix yields the eigenvalues and eigenvectors of the edge point set, the eigenvector being the direction vector of the edge point; a Taylor-expansion derivation, combined with the direction vector of the point, yields the corresponding sub-pixel point, and looping in this way produces the sub-pixel point set and the direction vector point set, which are stored at the corresponding positions of a KDTree data structure. By constructing the KDTree, the storage order of the sub-pixel point set and the direction vector point set is associated with the leaf nodes of the KDTree, i.e., the original storage order of the sub-pixels and direction vectors in the data structure is changed. In addition, the application extracts the sub-pixel points of the edge when extracting the target part: sub-pixel edge points sharpen the edge, so the extracted target part is more accurate, as are the edge points and feature points on the target frame.
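The Taylor-expansion step above reduces numerically to solving for the extremum offset of the locally fitted quadratic surface. A minimal sketch, assuming the gradient g and Hessian H of the facet-model fit at a pixel are already available (names illustrative):

```python
import numpy as np

def subpixel_offset(gradient, hessian):
    """Sub-pixel offset from a pixel: the Taylor extremum -H^-1 g of the fit."""
    return -np.linalg.solve(hessian, gradient)
```

For a quadratic like f(x, y) = (x - 0.3)^2 + (y + 0.2)^2 sampled at the origin, the recovered offset is exactly the true sub-pixel extremum (0.3, -0.2); on real edge data the offset is kept only when it stays within the pixel.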
The identification feature association submodule is used for combining the identification features on the target frame and other identification features into a first identification point according to a proportion and finding out a second identification point corresponding to the first identification point on the template picture according to the first identification point;
acquiring the distances between all first identification points and their corresponding second identification points, and judging whether each distance is larger than a distance threshold; if so, acquiring the number of first identification points meeting the distance threshold and judging whether that number meets a first quantity threshold; if the number is larger than the first quantity threshold, sending a correction instruction to the rotation and translation submodule;
In one embodiment of the invention, a ratio of 3:7 is used: the identification features on the target frame and the remaining identification features are combined into the first identification points in this proportion, the remaining identification features being identification features not on the target edge. This proportion reduces the time spent picking out target-frame and remaining identification features, while the large share of remaining identification features preserves the accuracy of the template pose correction.
The manner of acquiring the first identification point and the second identification point is as follows: and obtaining a tangent line of the first identification point, making a perpendicular line for the tangent line of the first identification point, connecting the perpendicular line with the second identification point, and calculating the length of the perpendicular line, wherein the length of the perpendicular line is the distance between the first identification point and the second identification point.
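A minimal sketch of the distance computation just described, in 2D with the tangent at the first identification point given as a unit vector (names illustrative): the perpendicular dropped from the tangent line to the second identification point has length equal to the projection of the offset onto the tangent's normal.

```python
def perpendicular_distance(p1, tangent, p2):
    """Length of the perpendicular from the tangent line at p1 to point p2."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    nx, ny = -tangent[1], tangent[0]     # unit normal to the tangent
    return abs(dx * nx + dy * ny)
```

This point-to-line distance (rather than plain point-to-point distance) is what makes the later pose correction a point-to-plane style alignment.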
Then the distances between the associated one-to-one first and second identification points are acquired and compared with the distance threshold. Only when a distance is greater than the distance threshold does it show that the pose of the target part differs greatly from that of the template picture and that the template pose needs correction. After all first and second identification points meeting the distance threshold are obtained, their number is counted, and the template is corrected only when the number meets the quantity threshold. Although the first and second identification points are associated by pose, the first identification point may be a rotated edge point on the frame of the object whose associated second identification point is merely close in pose and can never coincide with it completely. Therefore, even when the corrected pose of the template picture approaches the target part, such pairs of identification points still satisfy the distance threshold. If the distance threshold alone were used to decide whether the template pose needs modification, the template picture pose would be corrected indefinitely, wasting the system's running resources.
And the rotation and translation sub-module receives the correction instruction, substitutes the first identification point and the second identification point into a change matrix, and corrects the pose of the template graph to obtain a corrected template graph.
The change matrix comprises a rotation matrix and a translation matrix;
firstly, the coordinates of the first identification points and the second identification points are substituted into formula (1):

$$\varepsilon = \sum_i \big[ (R\,p_i + t - q_i) \cdot n_i \big]^2 \tag{1}$$

wherein R is the rotation matrix, t is the translation matrix, $q_i$ and $p_i$ are respectively the coordinates of the associated first and second identification feature points, $n_i$ is the feature vector, i is a natural integer greater than 1, and ε is the alignment error minimized to obtain the change matrix;
then the minimum deflection angle r between the first identification points and the second identification points is obtained by substituting the small-angle form of the rotation matrix R, formula (2):

$$R(r) = \begin{bmatrix} \cos r & -\sin r \\ \sin r & \cos r \end{bmatrix} \approx \begin{bmatrix} 1 & -r \\ r & 1 \end{bmatrix} \tag{2}$$
substituting the linearized rotation matrix R back into formula (1) gives formula (3):

$$\varepsilon \approx \sum_i \big[ r\,c_i + t \cdot n_i + (p_i - q_i) \cdot n_i \big]^2 \tag{3}$$

wherein $c_i = p_i \times n_i$;
Taking the partial derivatives of formula (3), converting them into linear equations, and solving for the minimum deflection angle r, the minimum horizontal offset x and the minimum vertical offset y proceeds as follows. The partial derivatives, formula (4), are set to zero:

$$\frac{\partial \varepsilon}{\partial r} = 2\sum_i \big[ r\,c_i + t \cdot n_i + (p_i - q_i) \cdot n_i \big]\, c_i = 0$$

$$\frac{\partial \varepsilon}{\partial x} = 2\sum_i \big[ r\,c_i + t \cdot n_i + (p_i - q_i) \cdot n_i \big]\, n_{ix} = 0$$

$$\frac{\partial \varepsilon}{\partial y} = 2\sum_i \big[ r\,c_i + t \cdot n_i + (p_i - q_i) \cdot n_i \big]\, n_{iy} = 0$$

Converted into a linear system, with $d_i = (q_i - p_i) \cdot n_i$, the angle of minimum deflection r, the minimum horizontal offset x and the minimum vertical offset y are found from:

$$\begin{bmatrix} \sum_i c_i^2 & \sum_i c_i n_{ix} & \sum_i c_i n_{iy} \\ \sum_i c_i n_{ix} & \sum_i n_{ix}^2 & \sum_i n_{ix} n_{iy} \\ \sum_i c_i n_{iy} & \sum_i n_{ix} n_{iy} & \sum_i n_{iy}^2 \end{bmatrix} \begin{bmatrix} r \\ x \\ y \end{bmatrix} = \begin{bmatrix} \sum_i c_i d_i \\ \sum_i n_{ix} d_i \\ \sum_i n_{iy} d_i \end{bmatrix}$$
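A numeric sketch of that linear solve, under the stated conventions ($c_i = p_i \times n_i$ as a 2D cross product, residual $d_i = (q_i - p_i) \cdot n_i$); the function name and least-squares formulation are illustrative:

```python
import numpy as np

def solve_rigid_2d(p, q, n):
    """Small rotation r and translation (x, y) moving template points p toward
    target points q along unit normals n (point-to-line metric)."""
    c = p[:, 0] * n[:, 1] - p[:, 1] * n[:, 0]   # c_i = p_i x n_i (scalar cross)
    d = np.sum((q - p) * n, axis=1)             # signed residuals along normals
    J = np.column_stack([c, n[:, 0], n[:, 1]])  # rows: [c_i, n_ix, n_iy]
    w, *_ = np.linalg.lstsq(J, d, rcond=None)   # normal equations of the 3x3 system
    return w[0], w[1], w[2]                     # (r, x, y)
```

For a pure translation between matched point sets, the solve recovers the translation exactly and leaves the rotation at zero.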
preferably, the correction module further comprises a verification sub-module;
the verification submodule is used for acquiring the number of times of modifying the pose of the current template drawing and the distances between all the first identification points and the second identification points, and judging the number of the second identification points meeting the distance threshold value on the modified template drawing and the number of times of modifying the template drawing;
and when the number of the second identification points meeting the distance threshold is less than the second number threshold and the template graph correction frequency is less than the frequency threshold, updating the current change matrix by using the last change matrix, continuously sending a correction instruction, and continuously correcting the template graph until the number of the second identification points meeting the distance threshold is greater than the second number threshold or the template graph correction frequency is equal to the frequency threshold.
The invention also sets a frequency threshold to end template picture correction, because in this algorithm the pose of the template picture can only approach the pose of the target part asymptotically: after a finite number of corrections the template pose is already very close to the target pose and can be regarded as coincident with the target part, and the points extracted on the template picture are correspondingly close to the points of the target image, so continuing to correct the template pose would only waste memory resources. Since the template image and the target image are linearly related when the method is applied to a 2D image, the underlying form is AX = B, where X is the linear relation, namely the change matrix. After each pose correction, the change matrix is updated as ε = ε_current · ε_previous, multiplying the current change matrix by that of the previous pass, which improves the accuracy of the change matrix for the template pose correction.
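A sketch of that update, representing each pass's change as a 3 x 3 homogeneous rotation-plus-translation matrix (names illustrative): composing the current matrix with the previous one accumulates the total correction.

```python
import numpy as np

def change_matrix(r, x, y):
    """Homogeneous 2D change matrix: rotation by r, then translation (x, y)."""
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def update_change(current, previous):
    """epsilon = epsilon_current . epsilon_previous: accumulate corrections."""
    return current @ previous
```

Composing matrices rather than restarting each pass is why the correction converges toward the target pose instead of oscillating around it.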
An industrial robot control method based on characteristic point docking is applied to an industrial robot control system based on characteristic point docking, the industrial robot is provided with a welding mechanical arm, and the method comprises the following steps:
step S1: when the parts enter the transmission crawler belt, acquiring Y-axis coordinates of each part on the transmission crawler belt, and sending the Y-axis coordinates of each part to the welding mechanical arm one by one;
step S2: receiving the Y-axis coordinate by the welding mechanical arm, and limiting the welding mechanical arm at the Y-axis coordinate;
when a part enters an image acquisition range of the welding mechanical arm, the welding mechanical arm shoots the part, obtains image information of the part and identifies the image information;
the image information identification step comprises the following steps:
receiving the image information, identifying the part by adopting a plurality of identification template pictures at different angles, judging whether the part is a target part needing to be welded, if so, obtaining a template picture which is matched with the target part and has the highest score in the identification template pictures as a coordinate selection template picture, rotating and displacing the coordinate selection template picture until the coincidence degree of the coordinate selection template picture and the target part meets a threshold value, and selecting the welding coordinate of a welding point from the coordinate selection template picture;
and step S3: and receiving the welding coordinate and acquiring the movement speed of the transmission crawler, moving the welding mechanical arm to the welding coordinate, updating the X-axis coordinate according to the movement speed of the transmission crawler, and tracking and welding the target part.
Preferably, the following steps are also required to be executed before step S2:
step A1: making a plurality of template drawings, wherein each template drawing corresponds to a different integer angle;
step A2: carrying out first-layer pyramid direction gradient quantization and second-layer pyramid direction gradient quantization on the plurality of template pictures, respectively obtaining identification features corresponding to the plurality of template pictures, obtaining the identification features by taking the current angle as a list, and storing the identification features;
step A3: all identifying features in the different angle table columns are stored.
Preferably, the specific step of selecting the coordinate selection template map in step S2 is as follows:
step B1: gradient extraction and quantization are carried out on the image information, two layers of pyramid linear memory data containers are created, and the image information is traversed through the data of the two pyramid layers;
and step B2: performing bit-by-bit translation on the image information quantization gradient within the range of 4 x 4, and executing or operating the obtained 16 images pixel by pixel to obtain a diffusion gradient matrix image of the image information after gradient diffusion;
converting the diffusion gradient matrix image of the image information into gradient matrix images in the first four directions and the last four directions through an AND operation, and finding, through a preset lookup table, the maximum similarity between the angle of each gradient matrix image and each angle in the table, wherein the lookup table is a pre-calculated table of the possible combinations of the 8 directions;
acquiring 8 similarity response matrix diagrams, converting all the similarity response matrix diagrams into a format of 16 orders or 64 orders, and storing the formats in the continuous linear memory data containers in a linearized manner;
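The 4 x 4 shift-and-OR diffusion of step B2 can be sketched as follows, assuming each pixel already holds a bitmask with one bit per quantized direction (a sketch of the technique, not the patent's exact implementation):

```python
import numpy as np

def spread_gradients(quantized: np.ndarray, t: int = 4) -> np.ndarray:
    """Diffuse one-hot orientation bitmasks over a t x t window.

    `quantized` holds per-pixel bitmasks (bit k set = direction k present);
    the t*t = 16 shifted copies for t = 4 are merged pixel by pixel with OR,
    which makes the subsequent template match tolerant to small shifts."""
    h, w = quantized.shape
    spread = np.zeros_like(quantized)
    for dy in range(t):
        for dx in range(t):
            shifted = np.zeros_like(quantized)
            shifted[dy:, dx:] = quantized[:h - dy, :w - dx]
            spread |= shifted
    return spread
```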
and step B3: finding an access entry of a linear memory of the two layers of pyramids by using the two layers of pyramids in the storage submodule according to the 8 similarity response matrix diagrams, and acquiring identification characteristics corresponding to each angle;
and step B4: matching the image information and the identification features, acquiring the matching scores of the image information and the identification features of each angle, judging whether the highest matching score is greater than a threshold value, if so, judging that the content of the image information is a target part, and acquiring a template corresponding to the highest matching score as a coordinate selection template;
the matching score is calculated in the following manner:

ε(Q, T, c) = Σ_{p ∈ P} max_{t ∈ N(c+p)} |cos(o(T, p) − o(Q, t))|

where Q is the input image information, T represents the template map, c is the position of the template map in the input image information, P represents the domain centered on c, p is the offset position, o is the recognition feature (the quantized gradient direction), N(c+p) is the gradient diffusion neighbourhood around position c + p, and ε(Q, T, c) is the matching score.
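A direct (unoptimized) reading of the matching score: each template feature contributes the best |cos| orientation agreement found in a small window of the input image, and the lookup tables and linearized memories described above are a fast path to the same quantity. A sketch under those assumptions (the window radius, the normalization by feature count, and all names are illustrative):

```python
import numpy as np

def matching_score(ori_q: np.ndarray, feats, c, n_bins: int = 8, r: int = 2) -> float:
    """Score a template placed at c = (cy, cx) in the input orientation map.

    ori_q -- per-pixel quantized direction index, -1 where no gradient
    feats -- template recognition features as (py, px, direction) tuples
    For each feature, take the best |cos| agreement within a (2r+1)^2
    window around c + p, then average over the features."""
    h, w = ori_q.shape
    cy, cx = c
    bin_w = np.pi / n_bins          # angular width of one quantized direction
    total = 0.0
    for (py, px, o) in feats:
        best = 0.0
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                y, x = cy + py + dy, cx + px + dx
                if 0 <= y < h and 0 <= x < w and ori_q[y, x] >= 0:
                    d = (ori_q[y, x] - o) * bin_w
                    best = max(best, abs(np.cos(d)))
        total += best
    return total / max(len(feats), 1)
```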
Preferably, the specific steps of performing rotation and displacement on the coordinate selection template map in step S2 are as follows:
step C1: extracting the frame of the target part from the image information in a sub-pixel point mode to obtain a target frame;
and step C2: combining the recognition features on the target frame and the rest recognition features into a first recognition point according to a proportion, and finding out a second recognition point corresponding to the first recognition point on the template picture according to the first recognition point;
acquiring the distances between all the first identification points and their corresponding second identification points, judging whether each distance is greater than a distance threshold, acquiring the number of first identification points whose distances exceed the distance threshold, judging whether that number is greater than a first number threshold, and if so, correcting the template map;
and step C3: substituting the first identification points and the second identification points into a change matrix, and correcting the pose of the template map to obtain a corrected template map, wherein the change matrix is calculated as follows:
substituting the coordinates of the first identification points and the coordinates of the second identification points into formula (1):

ε = Σ_i ((R · p_i + T − q_i) · n_i)²   (1)

where R is the rotation matrix, T = (x, y)^T is the translation matrix, q_i and p_i are respectively the coordinates of the associated first and second identification feature points, n_i is the feature (normal) vector, i is a natural integer greater than 1, and ε is the change matrix error to be minimized;
then, the minimum deflection angle r between the first identification points and the second identification points is obtained, r is substituted into formula (2), and the minimum value of the rotation matrix R is obtained through calculation, wherein formula (2) is as follows:

R = [cos r  −sin r; sin r  cos r] ≈ [1  −r; r  1] for small r   (2)
substituting the minimum value of the rotation matrix R back into formula (1) yields formula (3):

ε = Σ_i (r·c_i + x·n_ix + y·n_iy + (p_i − q_i) · n_i)²   (3)

where c_i = p_i × n_i (the two-dimensional cross product);
the partial derivatives of formula (3) are then taken and converted into a linear equation to solve for the minimum deflection angle r, the minimum horizontal offset x and the minimum vertical offset y, as follows:

the partial derivative formulas (4) are:

∂ε/∂r = 2 Σ_i (r·c_i + x·n_ix + y·n_iy + d_i) · c_i = 0
∂ε/∂x = 2 Σ_i (r·c_i + x·n_ix + y·n_iy + d_i) · n_ix = 0
∂ε/∂y = 2 Σ_i (r·c_i + x·n_ix + y·n_iy + d_i) · n_iy = 0

where d_i = (p_i − q_i) · n_i;

converted into a linear equation, the minimum deflection angle r, the minimum horizontal offset x and the minimum vertical offset y are solved from:

[Σ c_i²       Σ c_i·n_ix    Σ c_i·n_iy ] [r]     [Σ c_i·d_i ]
[Σ c_i·n_ix   Σ n_ix²       Σ n_ix·n_iy] [x] = − [Σ n_ix·d_i]
[Σ c_i·n_iy   Σ n_ix·n_iy   Σ n_iy²    ] [y]     [Σ n_iy·d_i]
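The linear solve for (r, x, y) described above can be sketched with NumPy in a few lines. The residual convention, transforming the second identification points p_i toward the first points q_i, is an assumption consistent with c_i = p_i × n_i; variable names follow the text, and the derivation is the standard linearized point-to-plane step rather than code taken from the patent:

```python
import numpy as np

def solve_rigid_2d(p, q, n):
    """Small-angle least-squares solve for v = (r, x, y) minimizing
    sum_i ((rot(r) @ p_i + [x, y] - q_i) . n_i)^2.

    p -- (N, 2) second identification points (template side)
    q -- (N, 2) first identification points (image side)
    n -- (N, 2) feature normal vectors"""
    p, q, n = np.asarray(p, float), np.asarray(q, float), np.asarray(n, float)
    c = p[:, 0] * n[:, 1] - p[:, 1] * n[:, 0]   # c_i = p_i x n_i (2-D cross)
    d = np.einsum('ij,ij->i', p - q, n)         # d_i: residual before correction
    # Jacobian rows are (c_i, n_ix, n_iy); normal equations (J^T J) v = -J^T d.
    J = np.column_stack([c, n[:, 0], n[:, 1]])
    return np.linalg.solve(J.T @ J, -J.T @ d)
```

For a pure translation of (2, 3) with no rotation, the solve recovers r = 0, x = 2, y = 3 exactly when the normals span the plane.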
preferably, the specific steps in the step S2 until the coincidence degree of the coordinate selection template map and the target part satisfies a threshold value are as follows:
and step C4: acquiring the number of times the pose of the current template map has been corrected and the distances between all the first identification points and the second identification points, and judging the number of second identification points on the corrected template map that meet the distance threshold and the number of template map corrections;
and when the number of second identification points meeting the distance threshold is less than the second number threshold and the number of template map corrections is less than the count threshold, updating the change matrix with the last change matrix and continuing to correct the template map, until the number of second identification points meeting the distance threshold is greater than the second number threshold or the number of template map corrections equals the count threshold.
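The termination logic of step C4, stop when enough second identification points fall within the distance threshold or when the correction count reaches its limit, can be sketched as follows (the callback names are illustrative):

```python
def refine_until_converged(update_pose, count_inliers, max_iters: int,
                           min_inliers: int):
    """Iteration control of step C4: keep applying the latest change
    matrix until enough second identification points satisfy the distance
    threshold, or the correction count reaches its limit.

    update_pose   -- callback applying one change-matrix correction
    count_inliers -- callback returning the current inlier count
    Returns (converged, iterations_used)."""
    for iteration in range(1, max_iters + 1):
        update_pose()                  # apply the latest change matrix
        if count_inliers() >= min_inliers:
            return True, iteration     # coincidence threshold met
    return False, max_iters            # stopped by the count threshold
```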
In the description of the present specification, reference to the description of "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (8)

1. An industrial robot control system based on characteristic point docking is characterized by comprising an industrial robot and a position acquisition module, wherein the industrial robot is provided with a welding mechanical arm which is electrically connected with the position acquisition module;
the position acquisition module is arranged at the front end of the transmission crawler belt and used for acquiring Y-axis coordinates of each part on the transmission crawler belt when the parts enter the transmission crawler belt and sending the Y-axis coordinates of each part to the welding mechanical arm one by one;
the welding mechanical arm comprises an image acquisition module, an identification module and a control module;
the image acquisition module is used for receiving the Y-axis coordinate sent by the position acquisition module and limiting the acquisition range of the image acquisition module at the Y-axis coordinate;
the image acquisition module is also used for shooting the part to acquire the image information of the part and sending the image information to the identification module when the part enters the acquisition range of the image acquisition module;
the identification module is used for receiving the image information, identifying the part by adopting a plurality of identification template pictures at different angles, judging whether the part is a target part needing to be welded or not, if so, acquiring a template picture with the highest matching score with the target part in the identification template pictures as a coordinate selection template picture, rotating and displacing the coordinate selection template picture until the coincidence degree of the coordinate selection template picture and the target part meets a threshold value, selecting welding coordinates of a welding point from the coordinate selection template picture, and sending the welding coordinates to the control module;
the control module receives the welding coordinates and obtains the movement speed of the transmission crawler, controls the welding mechanical arm to move to the welding coordinates, updates the X-axis coordinates according to the movement speed of the transmission crawler, and performs tracking welding on the target part;
the identification module comprises a preparation module;
the preparation module comprises a template making sub-module, an identification feature extraction sub-module and a storage sub-module;
the template making submodule is used for making a plurality of template drawings, wherein each template drawing corresponds to a different integer angle;
the identification feature extraction submodule comprises a gradient quantization unit and a lifting unit;
the gradient quantization unit is used for performing first-layer pyramid direction gradient quantization and second-layer pyramid direction gradient quantization on the template pictures to respectively obtain identification features corresponding to the template pictures;
the lifting unit is used for indexing the acquired identification features by the current angle;
the storage submodule is used for storing all the identification features in the lists for the different angles;
the identification module further comprises a matching module;
the matching module comprises a processing submodule and a score calculating submodule;
the processing submodule comprises: the system comprises a pyramid linear memory data container processing unit, a translation unit and a similarity response matrix image acquisition unit;
the pyramid linear memory data container processing unit is used for carrying out gradient extraction and quantization on the image information, creating two layers of pyramid linear memory data containers, and traversing the data of the two pyramid layers line by line;
the translation unit is used for shifting the quantized gradients of the image information bit by bit within a 4 x 4 range, and performing a pixel-by-pixel OR operation on the 16 resulting maps to obtain a diffusion gradient matrix map of the image information after gradient diffusion;
the similarity response matrix image acquisition unit is used for converting the diffusion gradient matrix map of the image information into gradient matrix maps for the first four directions and the last four directions through an AND operation, and finding, through a preset lookup table, the maximum similarity between each gradient matrix map angle and each angle in the lookup table, wherein the lookup table is a pre-calculated table of the possible combinations of the 8 directions;
acquiring 8 similarity response matrix maps, converting all the similarity response matrix maps into an order-16 or order-64 format, and storing them in the continuous linear memory data containers in a linearized manner;
the score calculation submodule comprises a calling unit and a similarity calculation unit;
the calling unit is used for calling the two layers of pyramids in the storage submodule, finding access entries of linear memories of the two layers of pyramids according to 8 similarity response matrix diagrams, and acquiring identification features corresponding to each angle;
the similarity calculation unit is used for matching the image information and the identification features, obtaining the matching scores of the image information and the identification features of each angle, judging whether the highest matching score is larger than a threshold value, if so, judging that the content of the image information is a target part, and obtaining a template corresponding to the highest matching score as a coordinate selection template.
2. An industrial robot control system based on feature point docking according to claim 1, characterized in that the identification module further comprises a correction module;
the correction module comprises a target part extraction sub-module, an identification feature association sub-module and a rotation translation sub-module;
the target part extraction submodule is used for extracting the frame of the target part from the image information in a sub-pixel point mode to obtain a target frame and sending the target frame to the identification feature association submodule;
the identification feature association submodule is used for combining the identification features on the target frame and other identification features into a first identification point according to a proportion and finding out a second identification point corresponding to the first identification point on the template picture according to the first identification point;
acquiring the distances between all the first identification points and their corresponding second identification points, judging whether each distance is greater than a distance threshold, acquiring the number of first identification points whose distances exceed the distance threshold, judging whether that number is greater than a first number threshold, and if so, sending a correction instruction to the rotation and translation sub-module;
and the rotation and translation sub-module receives the correction instruction, substitutes the first identification point and the second identification point into a change matrix, and corrects the pose of the template graph to obtain a corrected template graph.
3. The feature point docking based industrial robot control system of claim 2, wherein the revision module further comprises a verification sub-module;
the verification submodule is used for acquiring the number of times of modifying the pose of the current template drawing and the distances between all the first identification points and the second identification points, and judging the number of the second identification points meeting the distance threshold value on the modified template drawing and the number of times of modifying the template drawing;
and when the number of the second identification points meeting the distance threshold is less than the second number threshold and the template graph correction frequency is less than the frequency threshold, updating the current change matrix by using the last change matrix, continuously sending a correction instruction, and continuously correcting the template graph until the number of the second identification points meeting the distance threshold is greater than the second number threshold or the template graph correction frequency is equal to the frequency threshold.
4. An industrial robot control method based on characteristic point docking is applied to an industrial robot control system based on characteristic point docking as claimed in any one of claims 1 to 3, the industrial robot is provided with a welding mechanical arm, and the method is characterized by comprising the following steps:
step S1: when the parts enter the transmission crawler belt, acquiring Y-axis coordinates of each part on the transmission crawler belt, and sending the Y-axis coordinates of each part to the welding mechanical arm one by one;
step S2: receiving the Y-axis coordinate by the welding mechanical arm, and limiting the welding mechanical arm at the Y-axis coordinate;
when a part enters an image acquisition range of the welding mechanical arm, shooting the part by the welding mechanical arm, acquiring image information of the part, and identifying the image information;
the image information identification step comprises the following steps:
receiving the image information, identifying the part by adopting a plurality of identification template pictures at different angles, judging whether the part is a target part needing to be welded, if so, obtaining a template picture which is matched with the target part and has the highest score in the identification template pictures as a coordinate selection template picture, rotating and displacing the coordinate selection template picture until the coincidence degree of the coordinate selection template picture and the target part meets a threshold value, and selecting the welding coordinate of a welding point from the coordinate selection template picture;
and step S3: and receiving the welding coordinate and acquiring the movement speed of the transmission crawler, moving the welding mechanical arm to the welding coordinate, updating the X-axis coordinate according to the movement speed of the transmission crawler, and tracking and welding the target part.
5. The method for controlling an industrial robot based on feature point docking according to claim 4, wherein the following steps are executed before the step S2:
step A1: making a plurality of template drawings, wherein each template drawing corresponds to a different integer angle;
step A2: performing first-layer pyramid direction gradient quantization and second-layer pyramid direction gradient quantization on the plurality of template pictures, respectively obtaining identification features corresponding to the plurality of template pictures, taking the current angle as a list to obtain the identification features, and storing the identification features;
step A3: all identifying features in the different angle table columns are stored.
6. The method for controlling an industrial robot based on feature point docking as claimed in claim 5, wherein the specific step of selecting the coordinate selection template map in step S2 is as follows:
step B1: gradient extraction and quantification are carried out on the image information, a two-layer pyramid linear memory data container is created, and the image information line traverses the data of the two layers of pyramids;
and step B2: performing bit-by-bit translation on the image information quantization gradient within the range of 4 x 4, and executing or operating the obtained 16 images pixel by pixel to obtain a diffusion gradient matrix image of the image information after gradient diffusion;
the similarity response matrix image acquisition unit is used for converting the diffusion gradient matrix image of the image information into gradient matrix images in the first four directions and the last four directions through AND operation, and finding out the maximum similarity of each gradient matrix image angle and each angle in a lookup table through a preset lookup table, wherein the lookup table is a pre-calculated table of various combinations in 8 directions;
acquiring 8 similarity response matrix diagrams, converting all the similarity response matrix diagrams into a format of 16 orders or 64 orders, and storing the formats in the continuous linear memory data containers in a linearized manner;
and step B3: finding an access entry of a linear memory of the two layers of pyramids by using the two layers of pyramids in the storage submodule according to the 8 similarity response matrix diagrams, and acquiring identification characteristics corresponding to each angle;
and step B4: matching the image information and the identification features, acquiring the matching scores of the image information and the identification features of each angle, judging whether the highest matching score is greater than a threshold value, if so, judging that the content of the image information is a target part, and acquiring a template corresponding to the highest matching score as a coordinate selection template;
the matching score is calculated in the following manner:

ε(Q, T, c) = Σ_{p ∈ P} max_{t ∈ N(c+p)} |cos(o(T, p) − o(Q, t))|

where Q is the input image information, T represents the template map, c is the position of the template map in the input image information, P represents the domain centered on c, p is the offset position, o is the recognition feature (the quantized gradient direction), N(c+p) is the gradient diffusion neighbourhood around position c + p, and ε(Q, T, c) is the matching score.
7. The method for controlling an industrial robot based on feature point docking as claimed in claim 6, wherein the specific steps of performing rotation and displacement on the coordinate selection template map in the step S2 are as follows:
step C1: extracting the frame of the target part from the image information in a sub-pixel point mode to obtain a target frame;
and step C2: combining the recognition features on the target frame and the rest recognition features into a first recognition point according to a proportion, and finding out a second recognition point corresponding to the first recognition point on the template picture according to the first recognition point;
acquiring the distances between all the first identification points and their corresponding second identification points, judging whether each distance is greater than a distance threshold, acquiring the number of first identification points whose distances exceed the distance threshold, judging whether that number is greater than a first number threshold, and if so, correcting the template map;
and step C3: substituting the first identification points and the second identification points into a change matrix, and correcting the pose of the template map to obtain a corrected template map, wherein the change matrix is calculated as follows:
substituting the coordinates of the first identification points and the coordinates of the second identification points into formula (1):

ε = Σ_i ((R · p_i + T − q_i) · n_i)²   (1)

where R is the rotation matrix, T = (x, y)^T is the translation matrix, q_i and p_i are respectively the coordinates of the associated first and second identification feature points, n_i is the feature (normal) vector, i is a natural integer greater than 1, and ε is the change matrix error to be minimized;
then, the minimum deflection angle r between the first identification points and the second identification points is obtained, r is substituted into formula (2), and the minimum value of the rotation matrix R is obtained through calculation, wherein formula (2) is as follows:

R = [cos r  −sin r; sin r  cos r] ≈ [1  −r; r  1] for small r   (2)
substituting the minimum value of the rotation matrix R back into formula (1) yields formula (3):

ε = Σ_i (r·c_i + x·n_ix + y·n_iy + (p_i − q_i) · n_i)²   (3)

where c_i = p_i × n_i (the two-dimensional cross product);
the partial derivatives of formula (3) are then taken and converted into a linear equation to solve for the minimum deflection angle r, the minimum horizontal offset x and the minimum vertical offset y, as follows:

the partial derivative formulas (4) are:

∂ε/∂r = 2 Σ_i (r·c_i + x·n_ix + y·n_iy + d_i) · c_i = 0
∂ε/∂x = 2 Σ_i (r·c_i + x·n_ix + y·n_iy + d_i) · n_ix = 0
∂ε/∂y = 2 Σ_i (r·c_i + x·n_ix + y·n_iy + d_i) · n_iy = 0

where d_i = (p_i − q_i) · n_i;

converted into a linear equation, the minimum deflection angle r, the minimum horizontal offset x and the minimum vertical offset y are solved from:

[Σ c_i²       Σ c_i·n_ix    Σ c_i·n_iy ] [r]     [Σ c_i·d_i ]
[Σ c_i·n_ix   Σ n_ix²       Σ n_ix·n_iy] [x] = − [Σ n_ix·d_i]
[Σ c_i·n_iy   Σ n_ix·n_iy   Σ n_iy²    ] [y]     [Σ n_iy·d_i]
8. the feature point docking-based industrial robot control method according to claim 7, wherein the specific steps in the step S2 until the degree of coincidence of the coordinate selection template map and the target part satisfies a threshold value are as follows:
and step C4: acquiring the number of times the pose of the current template map has been corrected and the distances between all the first identification points and the second identification points, and judging the number of second identification points on the corrected template map that meet the distance threshold and the number of template map corrections;
and when the number of second identification points meeting the distance threshold is less than the second number threshold and the number of template map corrections is less than the count threshold, updating the change matrix with the last change matrix and continuing to correct the template map, until the number of second identification points meeting the distance threshold is greater than the second number threshold or the number of template map corrections equals the count threshold.
CN202210669671.1A 2022-06-14 2022-06-14 Industrial robot control system and method based on feature point docking Active CN115026822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210669671.1A CN115026822B (en) 2022-06-14 2022-06-14 Industrial robot control system and method based on feature point docking

Publications (2)

Publication Number Publication Date
CN115026822A CN115026822A (en) 2022-09-09
CN115026822B true CN115026822B (en) 2023-03-24

Family

ID=83124271

Country Status (1)

Country Link
CN (1) CN115026822B (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103895042A (en) * 2014-02-28 2014-07-02 华南理工大学 Industrial robot workpiece positioning grabbing method and system based on visual guidance
CN105729468B (en) * 2016-01-27 2018-01-09 浙江大学 A kind of robotic workstation based on the enhancing of more depth cameras
CN106737664B (en) * 2016-11-25 2020-02-14 中国科学院自动化研究所 Delta robot control method and system for sorting multiple types of workpieces
JP6478234B2 (en) * 2017-06-26 2019-03-06 ファナック株式会社 Robot system
CN108674922B (en) * 2018-05-16 2020-06-12 广州视源电子科技股份有限公司 Conveyor belt synchronous tracking method, device and system for robot
CN109396053A (en) * 2018-10-30 2019-03-01 福建省亿顺机械设备有限公司 Intelligent sorting method
CN110936355B (en) * 2019-11-25 2021-06-22 广州微林软件有限公司 Mechanical arm guiding system and method based on visual speed measurement positioning

Also Published As

Publication number Publication date
CN115026822A (en) 2022-09-09

Similar Documents

Publication Publication Date Title
US9227323B1 (en) Methods and systems for recognizing machine-readable information on three-dimensional objects
CN115008093B (en) Multi-welding-point welding robot control system and method based on template identification
CN110580725A (en) Box sorting method and system based on RGB-D camera
CN112233181A (en) 6D pose recognition method and device and computer storage medium
CN110560373A (en) multi-robot cooperation sorting and transporting method and system
CN111360821A (en) Picking control method, device and equipment and computer-readable storage medium
CN114751153B (en) Full-angle multi-template stacking system
CN111611989A (en) Multi-target accurate positioning identification method based on autonomous robot
CN115049861A (en) Automatic correction polishing method and system for industrial robot
CN112109072B (en) Accurate 6D pose measurement and grabbing method for large sparse feature tray
CN114742883A (en) Automatic assembly method and system based on plane type workpiece positioning algorithm
CN112344923A (en) Robot positioning method and positioning device thereof
CN113050636A (en) Control method, system and device for autonomous tray picking of forklift
CN115026823B (en) Industrial robot control method and system based on coordinate welding
CN115026822B (en) Industrial robot control system and method based on feature point docking
CN114986051B (en) Industrial robot welding control system and method based on template recognition
CN114102593A (en) Method for grabbing regular materials by robot based on two-dimensional low-definition image
CN115049860B (en) System based on feature point identification and capturing method
CN110533717B (en) Target grabbing method and device based on binocular vision
WO2023036212A1 (en) Shelf locating method, shelf docking method and apparatus, device, and medium
CN117340879A (en) Industrial machine parameter identification method and system based on graph optimization model
CN114800508B (en) Grabbing control system and method of industrial robot
CN113807116A (en) Robot six-dimensional pose positioning method based on two-dimensional code
Liao et al. Pole detection for autonomous gripping of biped climbing robots
Tan et al. Unmanned Sorting Site Combined with Path Planning and Barcode Identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 528000, No. 43 Longliang Road, Longyan Industrial Zone, Longyan Village, Leliu Street, Shunde District, Foshan City, Guangdong Province

Patentee after: GUANGDONG TIANTAI ROBOT Co.,Ltd.

Address before: 528322 third floor, No. 6 complex building, Dadun section of Daliang 105 National Highway (plot 5-1), Shunde District, Foshan City, Guangdong Province (No. 9, No. 23, Honggang section of Guangzhou Zhuhai Highway)

Patentee before: GUANGDONG TIANTAI ROBOT Co.,Ltd.