CN116460851A - Mechanical arm assembly control method for visual migration - Google Patents
- Publication number
- CN116460851A CN116460851A CN202310526714.5A CN202310526714A CN116460851A CN 116460851 A CN116460851 A CN 116460851A CN 202310526714 A CN202310526714 A CN 202310526714A CN 116460851 A CN116460851 A CN 116460851A
- Authority
- CN
- China
- Prior art keywords
- image
- assembled
- equipment
- registration
- assembly
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/1697—Vision controlled systems
- B25J9/1605—Simulation of manipulator lay-out, design, modelling of manipulator
- B25J9/1687—Assembly, peg and hole, palletising, straight line, weaving pattern movement
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The invention relates to the technical field of mechanical arm control and discloses a mechanical arm assembly control method for visual migration, which comprises the following steps: acquiring an image of the equipment to be assembled with the vision system of the mechanical arm and automatically preprocessing the acquired image; inputting the preprocessed image into an offline product registration model to obtain the registration coincidence degree, assembling if the registration coincidence degree is larger than a specified threshold, and otherwise inputting the image into a real-time product registration robust regulation model for correction; and controlling the mechanical arm to assemble the equipment. According to the invention, the assembly rotation angle of the equipment is determined at the initial position from the registration coincidence degree of the images, the mechanical arm is controlled by an obstacle avoidance algorithm to move along the assembly path to the assembly position, the rotation angle obtained visually at the initial position is migrated to the final assembly position, and the mechanical arm is controlled to rotate the equipment there, realizing vision-based assembly control of the mechanical arm.
Description
Technical Field
The invention relates to the technical field of mechanical arm control, in particular to a mechanical arm assembly control method for visual migration.
Background
Product assembly by mechanical arms is widely applied in industrial manufacturing. Existing automatic assembly technology falls mainly into two types, PLC control and visual control. PLC control logic is simple and suits simple assembly scenes, but cannot handle product assembly in complex scenes. Visual control can process a large amount of scene information and handles assembly in complex environments, but suffers from poor robustness, that is, poor scene adaptability: it cannot achieve adaptive assembly control across many different scene conditions. Aiming at this problem, the invention provides a mechanical arm assembly control method for visual migration, which mainly solves robust mechanical arm assembly control under complex scene changes.
Disclosure of Invention
In view of the above, the present invention provides a mechanical arm assembly control method for visual migration, which aims to: 1) divide the image of the equipment to be assembled into a plurality of sub-images along the edges of the preprocessed image; compute the convolution feature maps of each sub-image and of the template registration image to obtain their semantic similarity; select the sub-image with the largest semantic similarity and extract a plurality of local feature maps from it; determine the position area of the equipment to be assembled from the cosine similarity between each local feature map and the convolution feature map of the equipment region in the template registration image; convert the identified equipment into a three-dimensional representation; randomly rotate and combine the represented equipment with the assembly position to obtain different combination results; calculate the registration coincidence degree of each combination result with the template registration image and output the largest combination result together with its coincidence degree; if the coincidence degree exceeds a specified threshold, the mechanical arm records the rotation angle of that combination result and performs rotation assembly accordingly; at the same time, provide a real-time correction scheme in which correction codes correct the image of the equipment to be assembled so that its middle area, where large-scale equipment to be assembled usually lies, is emphasized and the equipment is more easily identified; 2) take the connecting rod between adjacent joints of the mechanical arm as the reference unit of an obstacle avoidance algorithm; determine, for each path node of an assembly path, whether a connecting rod collides with an obstacle by calculating the virtual force between the rod and the obstacle; correct any colliding path node to obtain a collision-free assembly path; control the mechanical arm to move along this path to the assembly position; and control it to rotate the equipment by the recorded rotation angle, migrating the rotation angle obtained visually at the initial position to the final assembly position and realizing vision-based assembly control of the mechanical arm.
The invention provides a method for controlling the assembly of a visual migration mechanical arm, which comprises the following steps:
S1: acquiring an image of the equipment to be assembled with the vision system of the mechanical arm, and automatically preprocessing the acquired image;
S2: constructing an offline product registration model, which takes the template registration image and the image of the equipment to be assembled as inputs and the registration coincidence degree as output;
S3: constructing a real-time product registration robust regulation model, which takes the image of the equipment to be assembled as input and the corrected image of the equipment as output;
S4: inputting the preprocessed image into the offline product registration model to obtain the registration coincidence degree; if it is larger than a specified threshold, assembly may proceed; otherwise the preprocessed image is input into the real-time product registration robust regulation model to obtain a corrected image, which is input into the offline product registration model again;
S5: controlling the mechanical arm to assemble the equipment according to the image judged assemblable.
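As a sketch of the decision flow of step S4, where `offline_model` and `robust_model` are hypothetical callables standing in for the models of steps S2 and S3 (the patent does not prescribe an interface):

```python
def assembly_decision(image, offline_model, robust_model, threshold, max_corrections=3):
    """Step S4 sketch: check the registration coincidence degree against the
    threshold; if too low, correct the image and try again."""
    for _ in range(max_corrections):
        fit, rotation = offline_model(image)   # best coincidence degree and its rotation angle
        if fit > threshold:
            return True, rotation              # assembly may proceed (step S5)
        image = robust_model(image)            # real-time correction, then retry
    return False, None
```

The `max_corrections` bound is an illustrative safeguard; the text only says correction is repeated until the threshold is reached.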
As a further improvement of the present invention:
Optionally, acquiring an image of the equipment to be assembled in step S1 includes:
acquiring the image with the vision system of the mechanical arm, i.e. a camera mounted on the arm; the image of the equipment to be assembled is a scene image of the area where the equipment is located; the vision system automatically preprocesses the acquired image by graying, stretching and edge detection, as follows:
S11: obtaining the maximum value of the RGB color components of each pixel of the acquired image and taking it as the gray value of that pixel, yielding the gray map of the equipment to be assembled;
S12: stretching the gray value of each pixel in the gray map:

g′(i, j) = 255 × (g(i, j) − MIN_g) / (MAX_g − MIN_g)

wherein:
g(i, j) represents the gray value of pixel (i, j) in the gray map, pixel (i, j) being the pixel in the i-th row and j-th column; the gray map has M pixel rows and N pixel columns;
MAX_g and MIN_g represent the maximum and minimum gray values in the gray map;
g′(i, j) represents the gray value of pixel (i, j) after gray stretching;
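A minimal sketch of steps S11 and S12, assuming the stretch maps the range [MIN_g, MAX_g] onto [0, 255] (the target range is not stated explicitly in the text):

```python
import numpy as np

def to_gray_max(rgb):
    """Step S11: gray value of each pixel = max of its RGB components."""
    return rgb.max(axis=2).astype(np.float64)

def stretch(gray):
    """Step S12: linear gray-level stretch onto [0, 255]."""
    lo, hi = gray.min(), gray.max()
    if hi == lo:                       # flat image: nothing to stretch
        return np.zeros_like(gray)
    return 255.0 * (gray - lo) / (hi - lo)
```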
S13: constructing a Gaussian filter template of size 3×3 pixels and filtering the gray-stretched gray map with it; the constructed template is G_σ, where σ denotes the scale of the Gaussian filter template and is set to 2, and each pixel of the gray-stretched gray map is convolved with the template to obtain the Gaussian-filtered gray map:

g″(i, j) = g′(i, j) * G_σ

wherein:
g″(i, j) represents the Gaussian filtering result of pixel (i, j); the Gaussian filtering results of all pixels form the Gaussian-filtered gray map;
In the embodiment of the invention, the Gaussian filter template is constructed as follows:
setting an initial Gaussian filter template:

G_σ(i′, j′) = (1 / (2πσ²)) exp(−((i′ − 2)² + (j′ − 2)²) / (2σ²))

wherein:
G_σ(i′, j′) represents the element value of the element in the i′-th row and j′-th column of the initial Gaussian filter template of scale σ, the template center lying at row 2, column 2;
normalizing the element values of the initial Gaussian filter template so that all elements sum to 1 yields the Gaussian filter template G_σ of scale σ;
S14: calculating the gradient value of each pixel in the Gaussian-filtered gray map:

grad(i, j) = √[(g″(i+1, j) − g″(i, j))² + (g″(i, j+1) − g″(i, j))²]

wherein:
grad(i, j) represents the gradient value of pixel (i, j) in the Gaussian-filtered gray map;
pixels whose gradient value exceeds a preset threshold are marked as edge pixels;
in the embodiment of the present invention, if pixel (i+1, j) does not exist in the Gaussian-filtered gray map, its Gaussian filtering result is taken as 0.
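Steps S13 and S14 can be sketched as follows; the zero-padded convolution and the forward-difference gradient mirror the embodiment's convention of treating missing neighbor pixels as 0:

```python
import numpy as np

def gaussian_template(sigma=2.0, size=3):
    """3x3 Gaussian filter template G_sigma, normalized so its elements sum to 1."""
    c = size // 2
    i, j = np.mgrid[0:size, 0:size]
    g = np.exp(-((i - c) ** 2 + (j - c) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def convolve3(img, kern):
    """Same-size 3x3 convolution with zero padding (no SciPy dependency)."""
    pad = np.pad(img, 1)
    out = np.zeros_like(img, dtype=np.float64)
    for di in range(3):
        for dj in range(3):
            out += kern[di, dj] * pad[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def edge_pixels(img, thresh):
    """Step S14: forward-difference gradient magnitude, thresholded to an edge mask."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:-1, :] = img[1:, :] - img[:-1, :]   # missing pixel (i+1, j) treated as 0 step
    gy[:, :-1] = img[:, 1:] - img[:, :-1]
    grad = np.sqrt(gx ** 2 + gy ** 2)
    return grad > thresh
```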
Optionally, the step S2 of constructing an offline product registration model includes:
An offline product registration model is constructed, which takes the template registration image and the image of the equipment to be assembled as inputs and the registration coincidence degree as output; the template registration image is the preprocessed image of the equipment after a successful assembly;
the offline product registration model comprises an equipment identification module, a combined registration module and a registration coincidence degree calculation module, wherein the equipment identification module identifies the equipment to be assembled in the image, the combined registration module randomly combines the equipment with the assembly position to obtain different combination results, and the registration coincidence degree calculation module calculates the registration coincidence degree of each combination result with the template registration image and outputs the largest combination result together with its coincidence degree;
The registration calculation of the image of the equipment to be assembled based on the offline registration model proceeds as follows:
S21: dividing the image of the equipment to be assembled into a plurality of sub-images along its edges;
S22: inputting each sub-image and the template registration image into the equipment identification module, which convolves each of them with two convolution layers (kernel size 5×5 pixels, stride 2) to obtain the initial convolution feature maps of the sub-image and of the template registration image;
S23: calculating the inner product of the initial convolution feature map of each sub-image with that of the template registration image, and taking the result as the semantic similarity of the sub-image and the template registration image;
S24: selecting the initial convolution feature map of the sub-image with the highest semantic similarity and sliding a window over it to extract local feature maps, the window size being the size of the equipment to be assembled in the template registration image; calculating the cosine similarity between each local feature map and the initial convolution feature map of the equipment region in the template registration image, and selecting the local feature map with the highest cosine similarity as the equipment feature map; the image region corresponding to this local feature map is the identified equipment to be assembled;
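The sliding-window search of step S24 can be sketched as follows, with plain 2-D arrays standing in for the convolution feature maps (the window stride and feature depth are illustrative assumptions):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two flattened feature maps."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_local_window(feature_map, template_feat):
    """Step S24 sketch: slide a window the size of the template's device region
    over the sub-image feature map; keep the window with the highest cosine
    similarity to the template feature."""
    h, w = template_feat.shape
    H, W = feature_map.shape
    best, best_pos = -1.0, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            s = cosine_sim(feature_map[i:i + h, j:j + w], template_feat)
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos, best
```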
S25: the combined registration module converts the pixel coordinates of the equipment identified in the image into the world coordinate system, obtaining the equipment represented as a three-dimensional image:

Z · [x, y, 1]^T = [[f/d1, 0, x0], [0, f/d2, y0], [0, 0, 1]] · [K1 K2] · [X*, Y*, Z*, 1]^T

wherein:
d1, d2 represent the lengths of a unit pixel in the horizontal and vertical directions of the image captured by the vision system;
x0, y0 represent the numbers of pixels in the horizontal and vertical directions by which the center pixel of the captured image differs from the image origin;
f represents the focal length of the mechanical arm vision system;
K1, K2 represent the external parameters of the vision system, namely its rotation and position parameters;
Z represents the Z-axis coordinate of the equipment pixel (x, y) in the camera coordinate system;
(X*, Y*, Z*) represents the mapping of the equipment pixel coordinates (x, y) into the world coordinate system;
in the embodiment of the invention, the method for acquiring the external parameters of the vision system comprises the following steps:
shooting a reference image with known three-dimensional world coordinates, substituting pixel coordinates of the reference image into a coordinate conversion formula, and selecting an external parameter which minimizes the error between the converted world coordinates and the known three-dimensional world coordinates as an external parameter of a vision system;
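Step S25's back-projection can be sketched with the conventional pinhole model, assuming p_cam = K1 · p_world + K2 with K1 a 3×3 rotation matrix and K2 a translation vector (the text names K1, K2 only as the extrinsic parameters):

```python
import numpy as np

def pixel_to_world(x, y, Z, f, d1, d2, x0, y0, K1, K2):
    """Back-project pixel (x, y) with known camera-frame depth Z into the
    world frame: invert the extrinsic transform p_cam = K1 @ p_world + K2."""
    Xc = (x - x0) * d1 * Z / f          # camera-frame coordinates from depth Z
    Yc = (y - y0) * d2 * Z / f
    p_cam = np.array([Xc, Yc, Z])
    return np.linalg.inv(K1) @ (p_cam - K2)   # (X*, Y*, Z*)
```

With an identity rotation and zero translation, the principal-point pixel maps to (0, 0, Z), which is a quick sanity check on the sign conventions.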
S26: randomly rotating the equipment represented by the three-dimensional image to obtain a plurality of rotation results, and randomly combining the edges of the rotation results with the assembly position to obtain a plurality of combination results;
S27: the registration coincidence degree calculation module calculates the curvature and position coordinates of the combined edge curves in each combination result, and the curvature and position coordinates of the edge curves of the successfully assembled result in the template registration image, so as to obtain the registration coincidence degree of each combination result:

wherein:
β_s(m) represents the curvature of the s-th edge curve in the m-th combination; α_s represents the curvature of the edge curve of the successful assembly result in the template registration image whose position coordinates are closest to those of the s-th edge curve of the m-th combination;
R_m represents the registration coincidence degree of the m-th combination.
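The closed form of R_m is given only through its symbols above. One plausible realization, pairing each edge curve of the combination with the template curve nearest in position and averaging an exponential curvature difference, might look like this (the exponential averaging is an assumption, not the patent's exact formula):

```python
import numpy as np

def registration_fit(combo_curv, combo_pos, template_curv, template_pos):
    """Hypothetical R_m: for each edge curve s of combination m, take the
    template edge curve alpha_s nearest in position and score the curvature
    agreement; average over all curves, so R_m lies in (0, 1]."""
    total = 0.0
    for beta, p in zip(combo_curv, combo_pos):
        dists = [np.hypot(p[0] - q[0], p[1] - q[1]) for q in template_pos]
        alpha = template_curv[int(np.argmin(dists))]   # nearest template curve
        total += np.exp(-abs(beta - alpha))
    return total / len(combo_curv)
```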
Optionally, constructing a real-time product registration robust regulation model in the step S3 includes:
A real-time product registration robust regulation model is constructed, which takes the image of the equipment to be assembled as input and the corrected image of the equipment to be assembled as output;
the regulation and control flow based on the real-time product registration robust regulation and control model comprises the following steps:
S31: selecting the sub-image with the highest semantic similarity in the image of the equipment to be assembled as the image to be corrected;
S32: constructing an image correction code in the form of a sliding window, the window size being the number of pixels in each row of the image to be corrected; the constructed image correction code h_n satisfies:

wherein:
h_n(q) represents the correction code of the q-th pixel of the n-th row, and Q_n represents the total number of pixels in the n-th row;
S33: correcting each row of pixels in the image to be corrected with the image correction code to obtain the corrected image; the correction result of the q-th pixel in the n-th row is:

g‴_{n,q} = ω · g″_{n,q} + (1 − ω) · h_n(q)

wherein:
g″_{n,q} represents the Gaussian filtering result of the q-th pixel in the n-th row of the image to be corrected; ω represents the correction factor, set to 0.82; g‴_{n,q} represents the correction result of the q-th pixel in the n-th row;
S34: adding the corrected image to the image of the equipment to be assembled as the corrected image of the equipment to be assembled.
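Since the closed form of h_n is not reproduced here, a half-sine code that peaks mid-row (matching the stated goal of emphasizing the image center, where large equipment tends to sit) serves as a hypothetical example of step S33:

```python
import numpy as np

def correction_code(Q):
    """Hypothetical correction code h_n(q): a half-sine over the row, largest
    at mid-row.  The patent defines h_n only by its role, not by this form."""
    q = np.arange(1, Q + 1)
    return 255.0 * np.sin(np.pi * q / (Q + 1))

def correct_row(row, omega=0.82):
    """Step S33: blend each Gaussian-filtered pixel with its correction code."""
    return omega * row + (1.0 - omega) * correction_code(len(row))
```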
Optionally, in step S4, inputting the image of the equipment to be assembled into the offline product registration model to obtain the registration coincidence degree, and assembling if the coincidence degree is larger than the specified threshold, includes:
inputting the image into the offline product registration model to obtain the registration coincidence degrees of the different registration combination results; if the maximum coincidence degree is larger than the specified threshold, assembly may proceed: the image is judged assemblable and the rotation angle corresponding to the maximum coincidence degree is recorded.
Optionally, in step S4, inputting an image whose registration coincidence degree is lower than the specified threshold into the real-time product registration robust regulation model to obtain a corrected image includes:
inputting the image into the real-time product registration robust regulation model for correction, inputting the corrected image into the offline product registration model to obtain the registration coincidence degrees of the different registration combination results, and, if the maximum coincidence degree is larger than the specified threshold, assembling and recording the rotation angle corresponding to the maximum coincidence degree; otherwise the correction is repeated.
Optionally, in step S5, controlling the mechanical arm to assemble the equipment based on the assemblable image includes:
obtaining, from the assemblable image and the rotation angle corresponding to the maximum registration coincidence degree, the rotation angle at which the equipment is to be assembled; the mechanical arm records this angle and grips the equipment;
the mechanical arm acquires a map of the equipment and of the assembly position with its vision system, generates an assembly path with an obstacle avoidance algorithm, is controlled to move along the path to the assembly position, and rotates the equipment by the recorded angle, completing the vision-based assembly control of the mechanical arm;
the assembly path generation flow based on the obstacle avoidance algorithm is as follows:
S51: initializing a plurality of assembly paths, the k-th assembly path having the form:

L_k = [L_k(1), L_k(2), ..., L_k(u), ..., L_k(U)]
L_k(u) = [θ_{k,u}(1), θ_{k,u}(2), ..., θ_{k,u}(v)]^T

wherein:
L_k(u) represents the position of the u-th path node in the k-th assembly path and U represents the total number of path nodes; in the embodiment of the invention, the first path node is the initial position of the mechanical arm and the last path node is the assembly position;
θ_{k,u}(a) represents the angle of the a-th joint node of the mechanical arm when the arm is at the u-th path node of the k-th assembly path, and v represents the total number of joint nodes of the mechanical arm;
S52: initializing the obstacle avoidance detection vector of the k-th assembly path with dimension U: [0, 0, ..., 0];
S53: constructing the virtual force function between the connecting rod linking adjacent joint nodes of the mechanical arm and an obstacle:

F_{k,u}(a, a+1) = 1 / (d(r_{a,a+1}, next) − r_next + δ)

wherein:
F_{k,u}(a, a+1) represents the virtual force between the connecting rod linking the a-th and (a+1)-th joint nodes and the obstacle when the arm is at the u-th path node of the k-th assembly path; d(r_{a,a+1}, next) represents the distance from the connecting rod r_{a,a+1} to the nearest obstacle next; δ represents a very small positive number; r_next represents the radius of the nearest obstacle next;
if F_{k,u}(a, a+1) is larger than the force threshold, a collision exists at the u-th path node of the k-th assembly path and the u-th value of the obstacle avoidance detection vector is set to 1; in the embodiment of the invention, a, a+1 ∈ [1, v];
S54: traversing the obstacle avoidance detection vector of the kth assembly path, randomly selecting path nodes with the value of 1 from the rest assembly paths to replace the path nodes with the corresponding sequence until the obstacle avoidance detection vector in each assembly path is 0;
S55: calculating the moving time of each assembly path, selecting the assembly path with the least moving time as output, and controlling the mechanical arm to move along the assembly path to reach the assembly position.
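Steps S52 and S53 can be sketched as an inverse-clearance check per path node; the segment-to-sphere distance helper, the joint positions as 3-D points, and `force_threshold` are illustrative assumptions:

```python
import numpy as np

def link_obstacle_force(link_a, link_b, obs_center, obs_radius, delta=1e-6):
    """Virtual force of step S53 as inverse clearance: large when the link
    segment link_a -> link_b passes close to the obstacle surface."""
    a, b, c = map(np.asarray, (link_a, link_b, obs_center))
    ab = b - a
    # Closest point of the segment to the obstacle center.
    t = np.clip(np.dot(c - a, ab) / (np.dot(ab, ab) + delta), 0.0, 1.0)
    d = np.linalg.norm(c - (a + t * ab))
    return 1.0 / (max(d - obs_radius, 0.0) + delta)

def obstacle_vector(path_joints, obstacles, force_threshold):
    """Step S52: one flag per path node; 1 if any connecting rod's virtual
    force against any obstacle exceeds the threshold (a collision)."""
    flags = []
    for joints in path_joints:               # joints: list of 3-D joint positions
        hit = any(link_obstacle_force(joints[a], joints[a + 1], oc, orad)
                  > force_threshold
                  for a in range(len(joints) - 1)
                  for oc, orad in obstacles)
        flags.append(1 if hit else 0)
    return flags
```

Step S54 would then swap any flagged node for the same-order node of another candidate path and re-check, and step S55 picks the surviving path with the least moving time.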
In order to solve the above-described problems, the present invention provides an electronic apparatus including:
a memory storing at least one instruction;
the communication interface is used for realizing the communication of the electronic equipment; and
the processor executes the instructions stored in the memory to realize the above mechanical arm assembly control method for visual migration.
In order to solve the above-mentioned problems, the present invention also provides a computer-readable storage medium having stored therein at least one instruction that is executed by a processor in an electronic device to implement the above-mentioned method for controlling the assembly of a robot arm for visual migration.
Compared with the prior art, the invention provides a mechanical arm assembly control method for visual migration, which has the following advantages:
Firstly, the scheme provides an assembly scheme determination method: an offline product registration model is constructed which takes the template registration image and the image of the equipment to be assembled as inputs and the registration coincidence degree as output; the template registration image is the preprocessed image of the equipment after a successful assembly; the offline product registration model comprises an equipment identification module, a combined registration module and a registration coincidence degree calculation module, wherein the equipment identification module identifies the equipment to be assembled in the image, the combined registration module randomly combines the equipment with the assembly position to obtain different combination results, and the registration coincidence degree calculation module calculates the registration coincidence degree of each combination result with the template registration image and outputs the largest combination result together with its coincidence degree.
According to the scheme, the image of the equipment to be assembled is divided into a plurality of sub-images along the edges of the preprocessed image; the convolution feature maps of each sub-image and of the template registration image are computed to obtain their semantic similarity; the sub-image with the largest semantic similarity is selected and a plurality of local feature maps are extracted from it; the position area of the equipment to be assembled is determined from the cosine similarity between each local feature map and the convolution feature map of the equipment region in the template registration image; the identified equipment is converted into a three-dimensional representation; the represented equipment is randomly rotated and combined with the assembly position to obtain different combination results; the registration coincidence degree of each combination result with the template registration image is calculated, and the largest combination result with its coincidence degree is output; if the coincidence degree exceeds the specified threshold, the mechanical arm records the rotation angle of that combination result and performs rotation assembly accordingly; meanwhile, a real-time correction scheme for the image of the equipment to be assembled is provided, in which correction codes correct the image so that its middle area, where large-scale equipment usually lies, is emphasized and the equipment is more easily identified.
Meanwhile, the scheme provides an assembly path determining algorithm: the mechanical arm acquires a map of the equipment to be assembled and of the assembly position by using its vision system, an obstacle avoidance algorithm generates the assembly path of the mechanical arm, the mechanical arm is controlled to move along this path to the assembly position and to rotate according to the recorded rotation angle, completing the vision-based assembly control of the mechanical arm. The assembly path generation flow based on the obstacle avoidance algorithm is as follows: initialize a plurality of assembly paths, wherein the k-th assembly path is of the form:
L_k = [L_k(1), L_k(2), ..., L_k(u), ..., L_k(U)]
L_k(u) = [θ_{k,u}(1), θ_{k,u}(2), ..., θ_{k,u}(v)]^T
wherein: l (L) k (U) represents a position of a U-th path node in the kth assembly path, U representing a total number of path nodes; θ k,u (v) The method comprises the steps that when the mechanical arm is positioned at a position of a node of a ith path in a kth assembly path, the angle of a node of a jth joint in the mechanical arm is represented, wherein v represents the total number of the nodes of the joint of the mechanical arm; initializing obstacle avoidance detection vectors of a kth assembly path with dimension U: [0,0,...,0]The method comprises the steps of carrying out a first treatment on the surface of the Constructing a virtual stress condition function of a connecting rod and an obstacle between adjacent joint nodes in the mechanical arm:
wherein: f (F) k,u (a, a+1) represents the virtual stress condition of the link and the obstacle between the a-th joint node and the a+1-th joint node at the position of the u-th path node in the kth assembly path, d (r) a,a+1 ,r next ) Representing the connecting rod r a,a+1 Distance from nearest obstacle next, delta represents a very small positive number, r next Representing the radius of the nearest obstacle next; if F is present k,u (a, a+1) is greater than the stress threshold, which indicates that a collision exists at a node of a u-th path in the kth assembly path, and the u-th value in the obstacle avoidance detection vector is 1; traversing the obstacle avoidance detection vector of the kth assembly path, randomly selecting path nodes with the value of 1 from the rest assembly paths to replace the path nodes with the corresponding sequence until the obstacle avoidance detection vector in each assembly path is 0; calculating the moving time of each assembly path, selecting the assembly path with the least moving time as output, and controlling the mechanical arm to move along the assembly path to reach the assembly position. 
According to the scheme, the links between adjacent joints of the mechanical arm serve as the reference units of the obstacle avoidance algorithm. By calculating the virtual stress condition of the link between adjacent joint nodes and the obstacle, it is determined whether any path node in an assembly path involves a collision between a link and an obstacle; the colliding path nodes are corrected, yielding an assembly path along which the mechanical arm avoids collision. The mechanical arm is controlled to move along this assembly path to the assembly position and to rotate the equipment to be assembled according to the recorded rotation angle, transferring the rotation angle obtained from the vision at the initial position to the final assembly position and realizing the vision-based assembly control of the mechanical arm.
Drawings
Fig. 1 is a schematic flow chart of a method for controlling assembly of a visual-migration mechanical arm according to an embodiment of the invention;
fig. 2 is a schematic structural diagram of an electronic device for implementing a method for controlling assembly of a mechanical arm for visual migration according to an embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment of the application provides a method for controlling the assembly of a visual migration mechanical arm. The execution subject of the method for controlling the assembly of the mechanical arm for visual migration includes, but is not limited to, at least one of a server, a terminal and the like capable of being configured to execute the method provided by the embodiment of the application. In other words, the method for controlling the assembly of the mechanical arm for visual migration may be performed by software or hardware installed in the terminal device or the server device, and the software may be a blockchain platform. The service end includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Example 1:
s1: and acquiring an image of the equipment to be assembled by using a vision system of the mechanical arm, and automatically preprocessing the acquired image of the equipment to be assembled.
The step S1 of acquiring the image of the equipment to be assembled comprises the following steps:
An image of the equipment to be assembled is acquired by using the vision system of the mechanical arm, wherein the vision system is a camera mounted on the mechanical arm and the image represents a scene image of the area where the equipment to be assembled is located. The vision system automatically performs graying, stretching and edge detection preprocessing on the acquired image, with the following flow:
s11: obtaining the maximum value of RGB color components of each pixel in the acquired image of the equipment to be assembled, and taking the maximum value as the gray value of the pixel point to obtain a gray level image of the equipment to be assembled;
s12: stretching the gray value of each pixel in the gray map:
wherein:
g(i, j) represents the gray value of pixel (i, j) in the gray map, pixel (i, j) represents the pixel in the i-th row and j-th column of the gray map, and the gray map has M pixel rows and N pixel columns;
MAX_g represents the maximum gray value in the gray map, and MIN_g represents the minimum gray value in the gray map;
g' (i, j) represents the gray value of the pixel (i, j) after gray stretching;
s13: construct a Gaussian filter template of size 3×3 pixels and filter the gray-stretched gray map, wherein the constructed Gaussian filter template is G_σ, σ represents the scale of the Gaussian filter template and is set to 2; convolve each pixel of the gray-stretched gray map with the Gaussian filter template to obtain the Gaussian-filtered gray map:
g″(i, j) = g′(i, j) * G_σ
wherein:
g″(i, j) represents the Gaussian filtering result of pixel (i, j); the Gaussian filtering results of all pixels form the Gaussian-filtered gray map;
in the embodiment of the invention, the construction flow of the Gaussian filter template is as follows:
setting an initial Gaussian filtering template:
wherein:
G_σ(i′, j′) represents the element value of the element in the i′-th row and j′-th column of the initial Gaussian filter template of scale σ;
Normalize the element values of the initial Gaussian filter template so that all elements sum to 1, obtaining the Gaussian filter template G_σ of scale σ;
S14: calculating the gradient value of each pixel in the gray scale map after Gaussian filtering:
wherein:
grad(i, j) represents the gradient value of pixel (i, j) in the Gaussian-filtered gray map;
Marking pixels with gradient values larger than a preset threshold value as edge pixels;
in the embodiment of the present invention, if the pixel (i+1, j) does not exist in the gray scale map after gaussian filtering, the gaussian filtering result of the pixel is marked as 0.
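The S11–S14 preprocessing flow can be sketched as follows. This is a minimal illustration, not the patent's exact implementation: the stretch-to-255 scaling and the simple forward-difference gradient are assumptions where the formula images are not reproduced in the text.

```python
# Sketch of S11-S14: max-RGB graying, linear gray stretching,
# 3x3 Gaussian filtering with scale sigma = 2, gradient-threshold edges.
import numpy as np

def preprocess(rgb, grad_thresh=20.0):
    # S11: gray value = maximum of the RGB color components of each pixel
    gray = rgb.max(axis=2).astype(np.float64)
    # S12: linear gray stretch between the minimum and maximum gray values
    # (scaling to the 0-255 range is an assumption)
    lo, hi = gray.min(), gray.max()
    stretched = (gray - lo) / max(hi - lo, 1e-9) * 255.0
    # S13: 3x3 Gaussian template with scale sigma = 2, normalized to sum 1
    sigma = 2.0
    ax = np.arange(-1, 2)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    g /= g.sum()
    padded = np.pad(stretched, 1, mode="edge")
    filtered = np.zeros_like(stretched)
    h, w = stretched.shape
    for i in range(h):
        for j in range(w):
            filtered[i, j] = (padded[i:i + 3, j:j + 3] * g).sum()
    # S14: forward-difference gradient magnitude; missing neighbors give 0,
    # pixels above the preset threshold are marked as edge pixels
    gx = np.zeros_like(filtered)
    gy = np.zeros_like(filtered)
    gx[:-1, :] = filtered[1:, :] - filtered[:-1, :]
    gy[:, :-1] = filtered[:, 1:] - filtered[:, :-1]
    grad = np.hypot(gx, gy)
    edges = grad > grad_thresh
    return stretched, filtered, edges
```

A horizontal intensity step in a synthetic image, for instance, survives the smoothing and is flagged as an edge by the gradient threshold.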
S2: and constructing an offline product registration model, wherein the model takes a template registration image and an image of equipment to be assembled as input and registration coincidence degree as output.
In the step S2, an offline product registration model is constructed, which comprises the following steps:
An offline product registration model is constructed, wherein the model takes a template registration image and an image of the equipment to be assembled as input and the registration coincidence degree as output; the template registration image is a preprocessed display image of the equipment to be assembled after successful assembly;
The offline product registration model comprises an equipment-to-be-assembled identification module, a combined registration module and a registration coincidence degree calculation module: the identification module identifies the equipment to be assembled from the equipment image, the combined registration module randomly combines the equipment to be assembled with the assembly position to obtain different combination results, and the calculation module calculates the registration coincidence degree of each combination result with the template registration image and outputs the combination result with the largest coincidence degree together with that coincidence degree;
The process for carrying out registration calculation on the equipment images to be assembled based on the offline registration model comprises the following steps:
s21: dividing an image of equipment to be assembled into a plurality of sub-images according to the edges of the image;
s22: input the sub-images of the equipment to be assembled and the template registration image into the equipment-to-be-assembled identification module, which applies two convolution layers to each sub-image and to the template registration image to obtain their initial convolution feature images, wherein the convolution kernel of each layer is 5×5 pixels with stride 2;
s23: calculating an inner product result of the initial convolution feature image corresponding to the sub-image and the initial convolution feature image corresponding to the template registration image, and taking the inner product result as semantic similarity of the sub-image and the template registration image;
s24: selecting an initial convolution feature image corresponding to the sub-image with highest semantic similarity, and sliding the initial convolution feature image of the sub-image by utilizing a sliding window to extract a local feature image, wherein the size of the sliding window is the size of equipment to be assembled in the template registration image; calculating cosine similarity of an initial convolution feature image corresponding to a to-be-assembled equipment region in the local feature image and the template registration image, and selecting a local feature image with highest cosine similarity as the to-be-assembled equipment feature image, wherein the to-be-assembled equipment image region corresponding to the local feature image is the identified to-be-assembled equipment;
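Steps S22–S24 can be sketched as below. The two-layer 5×5/stride-2 convolution stack follows the text; the randomly initialized kernels, the length alignment for the inner product, and the feature-map-level sliding window are illustrative assumptions.

```python
# Sketch of S22-S24: convolution features, inner-product semantic
# similarity, and sliding-window cosine similarity over the feature map.
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel, stride=2):
    # plain valid-mode strided convolution (correlation) with one kernel
    kh, kw = kernel.shape
    oh = (img.shape[0] - kh) // stride + 1
    ow = (img.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = img[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = (patch * kernel).sum()
    return out

# two 5x5 kernels standing in for the patent's (unspecified) layer weights
K1, K2 = rng.standard_normal((5, 5)), rng.standard_normal((5, 5))

def features(img):
    # S22: two convolution layers, 5x5 kernels, stride 2
    return conv2d(conv2d(img, K1), K2)

def semantic_similarity(sub, template):
    # S23: inner product of the two initial convolution feature images
    a, b = features(sub).ravel(), features(template).ravel()
    n = min(a.size, b.size)  # align lengths (an assumption)
    return float(a[:n] @ b[:n])

def best_local_patch(feat, tmpl_feat):
    # S24: slide a window of the template-device size, keep the patch
    # with the highest cosine similarity
    wh, ww = tmpl_feat.shape
    t = tmpl_feat.ravel()
    best, best_cos = None, -np.inf
    for i in range(feat.shape[0] - wh + 1):
        for j in range(feat.shape[1] - ww + 1):
            p = feat[i:i + wh, j:j + ww].ravel()
            c = p @ t / (np.linalg.norm(p) * np.linalg.norm(t) + 1e-12)
            if c > best_cos:
                best_cos, best = c, (i, j)
    return best, best_cos
```

Feeding a patch cut from a feature map back in as the template recovers its own location with cosine similarity close to 1.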
S25: the combined registration module converts pixel coordinates of the equipment to be assembled, which are identified in the equipment to be assembled image, into a world coordinate system to obtain equipment to be assembled represented by a three-dimensional image:
wherein:
d_1, d_2 respectively represent the lengths of a unit pixel in the horizontal and vertical directions of the image captured by the vision system;
x_0, y_0 respectively represent the numbers of pixels in the horizontal and vertical directions between the center pixel coordinate of the captured image and the pixel coordinate of the image origin;
f represents the focal length of the vision system of the mechanical arm;
K_1, K_2 represent the external parameters of the vision system, namely its position and rotation direction parameters;
Z represents the Z-axis coordinate value of the pixel coordinate (x, y) of the equipment to be assembled in the camera coordinate system;
(X*, Y*, Z*) represents the mapping result of the pixel coordinate (x, y) of the equipment to be assembled in the world coordinate system;
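The S25 conversion can be sketched with a standard pinhole back-projection. The matrix form below is an assumption where the patent's formula image is not reproduced; a rotation R and translation t stand in for the external parameters K_1, K_2.

```python
# Sketch of S25: pixel coordinate (x, y) at known depth Z mapped to
# camera coordinates via the pinhole model, then to world coordinates
# via an assumed rigid transform (R, t).
import numpy as np

def pixel_to_world(x, y, Z, d1, d2, f, x0, y0, R, t):
    # camera coordinates: pixel offset from the principal point, scaled
    # by the unit-pixel lengths d1, d2 and the depth/focal-length ratio
    Xc = (x - x0) * d1 * Z / f
    Yc = (y - y0) * d2 * Z / f
    cam = np.array([Xc, Yc, Z])
    # world coordinates: inverse of the extrinsic rigid transform
    return R.T @ (cam - t)
```

With an identity rotation and zero translation, the principal point at depth Z maps to (0, 0, Z), as expected.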
s26: randomly rotating the equipment to be assembled represented by the three-dimensional image to obtain a plurality of rotation results, and randomly combining the edges of the rotation results with the assembly positions to obtain a plurality of combination results;
s27: the registration matching degree calculation module calculates the curvature and position coordinates of the combined edge curve in each combined result, and calculates the curvature and position coordinates of the edge curve of the successfully assembled result in the template registration image, so as to obtain the registration matching degree of each combined result:
Wherein:
β_s(m) represents the curvature of the s-th edge curve in the m-th combination result, and α_s represents the curvature of the edge curve of the successful assembly result in the template registration image whose position coordinates are closest to those of the s-th edge curve in the m-th combination result;
R_m represents the registration coincidence degree of the m-th combination result.
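Step S27 can be sketched as below. The text does not reproduce the aggregation formula, so an inverse mean absolute curvature difference is assumed purely for illustration as the coincidence score R_m.

```python
# Sketch of S27: compare the curvatures beta_s(m) of the combined edge
# curves with the curvatures alpha_s of the nearest template edge curves.
import numpy as np

def coincidence_degree(beta, alpha):
    beta = np.asarray(beta, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    # assumed score: 1 when all curvatures match, decaying with the
    # mean absolute curvature difference
    return float(1.0 / (1.0 + np.abs(beta - alpha).mean()))
```

A combination whose edge curvatures exactly match the template scores 1; mismatched curvatures score strictly lower, so the best combination is the argmax over m.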
S3: and constructing a real-time product registration robust regulation model, wherein the model takes an image of equipment to be assembled as input and takes a corrected image of the equipment to be registered as output.
And in the step S3, a real-time product registration robust regulation model is constructed, which comprises the following steps:
constructing a real-time product registration robust regulation model, wherein the real-time product registration robust regulation model takes an image of equipment to be assembled as input and takes a corrected image of the equipment to be registered as output;
the regulation and control flow based on the real-time product registration robust regulation and control model comprises the following steps:
s31: selecting a sub-image with highest semantic similarity in the image of the equipment to be assembled as an image to be corrected;
s32: constructing an image correction code, wherein the image correction code is in the form of a sliding window, the size of the sliding window is the pixel number of each row of pixels in an image to be corrected, and the constructed image correction code is as follows:
wherein:
h_n(q) represents the correction code of the q-th pixel of the n-th row, and Q_n represents the total number of pixels in the n-th row;
s33: correcting each row of pixels in the image to be corrected based on the image correction code to obtain a corrected image, wherein the correction result of the nth row and the qth pixel in the image to be corrected is as follows:
g‴_{n,q} = ω·g″_{n,q} + (1 − ω)·h_n(q)
wherein:
g″_{n,q} represents the Gaussian filtering result of the q-th pixel of the n-th row in the image to be corrected; ω represents the correction factor, which is set to 0.82; g‴_{n,q} represents the correction result of the q-th pixel of the n-th row in the image to be corrected;
s34: and adding the corrected image into the image of the equipment to be assembled to serve as the corrected image of the equipment to be assembled.
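The S32–S33 correction can be sketched as follows. The blend with ω = 0.82 follows the text; the correction code h_n(q) itself is not reproduced there, so a per-row mean is assumed here purely for illustration.

```python
# Sketch of S32-S33: blend each row of the Gaussian-filtered image with
# a per-row correction code using the fixed factor omega = 0.82.
import numpy as np

def correct_rows(gauss_img, omega=0.82):
    # assumed correction code h_n(q): the mean gray value of row n
    codes = gauss_img.mean(axis=1, keepdims=True)
    h = np.broadcast_to(codes, gauss_img.shape)
    # g''' = omega * g'' + (1 - omega) * h_n(q)
    return omega * gauss_img + (1.0 - omega) * h
```

Under this assumed code, rows that are already uniform pass through unchanged, while outlier pixels are pulled toward their row's level.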
S4: inputting the preprocessed equipment image to be assembled into an offline product registration model to obtain registration coincidence degree, if the registration coincidence degree is larger than a specified threshold value, assembling, otherwise, inputting the preprocessed equipment image to be assembled into a real-time product registration robust regulation model to obtain a corrected equipment image to be assembled, and inputting the corrected equipment image to be assembled into the offline product registration model.
In the step S4, the image of the equipment to be assembled is input into an offline product registration model to obtain registration coincidence degree, and if the registration coincidence degree is larger than a specified threshold value, the assembly can be carried out, including:
Input the image of the equipment to be assembled into the offline product registration model to obtain the registration coincidence degrees of the different combination results; if the maximum registration coincidence degree is larger than the specified threshold value, assembly can be carried out: the assemblable image of the equipment to be assembled is obtained and the rotation angle corresponding to the maximum registration coincidence degree is recorded.
In the step S4, inputting the image of the equipment to be assembled with the registration matching degree lower than the specified threshold value into the real-time product registration robust regulation model to obtain a corrected image of the equipment to be assembled, which comprises the following steps:
Input the image of the equipment to be assembled whose registration coincidence degree is lower than the specified threshold value into the real-time product registration robust regulation model for correction, obtaining the corrected image; input the corrected image into the offline product registration model to obtain the registration coincidence degrees of the different combination results. If the maximum registration coincidence degree is larger than the specified threshold value, assembly can be carried out: the assemblable image is obtained and the rotation angle corresponding to the maximum registration coincidence degree is recorded; otherwise, correction is performed again.
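The S4 decision loop can be sketched as follows. The retry limit is an assumption (the text only says correction is performed again); the model and correction callables stand in for the offline registration model and the real-time robust regulation model.

```python
# Sketch of S4: score with the offline model, correct and rescore while
# below the threshold, assemble once the coincidence degree is high enough.
def assemble_decision(image, offline_model, correct, threshold, max_retries=3):
    for _ in range(max_retries + 1):
        fit, rotation = offline_model(image)   # (coincidence degree, angle)
        if fit > threshold:
            return True, rotation              # assemble with recorded angle
        image = correct(image)                 # real-time robust correction
    return False, None                         # gave up after the retry limit
```

A stub model that scores the raw image low and the corrected image high exercises exactly one correction pass before assembly is approved.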
S5: and controlling the mechanical arm to carry out the assembly treatment of the equipment to be assembled according to the equipment image to be assembled which can be assembled.
In the step S5, based on the image of the equipment to be assembled, the mechanical arm is controlled to perform the assembly process of the equipment to be assembled, including:
based on the image of the equipment to be assembled which can be assembled and the rotation angle corresponding to the maximum registration coincidence degree, obtaining the rotation angle of the equipment to be assembled when the equipment to be assembled is assembled, recording the rotation angle by the mechanical arm, and grabbing the equipment to be assembled;
the mechanical arm acquires a map of the equipment to be assembled and an assembly position by using a vision system, an assembly path of the mechanical arm is generated by adopting an obstacle avoidance algorithm, the mechanical arm is controlled to move along the assembly path to reach the assembly position, the mechanical arm is controlled to rotate the equipment to be assembled according to the recorded rotation angle, and the assembly control based on the vision of the mechanical arm is completed;
the assembly path generation flow based on the obstacle avoidance algorithm is as follows:
s51: initializing a plurality of assembly paths, wherein the k-th assembly path is in the form of:
L_k = [L_k(1), L_k(2), ..., L_k(u), ..., L_k(U)]
L_k(u) = [θ_{k,u}(1), θ_{k,u}(2), ..., θ_{k,u}(v)]^T
wherein:
L_k(u) represents the position of the u-th path node in the k-th assembly path, and U represents the total number of path nodes; in the embodiment of the invention, the position of the first path node is the initial position of the mechanical arm and the position of the last path node is the assembly position;
θ_{k,u}(a) represents the angle of the a-th joint node of the mechanical arm when the arm is at the u-th path node of the k-th assembly path, and v represents the total number of joint nodes of the mechanical arm;
s52: initialize the obstacle avoidance detection vector of the k-th assembly path with dimension U: [0, 0, ..., 0];
s53: constructing a virtual stress condition function of a connecting rod and an obstacle between adjacent joint nodes in the mechanical arm:
wherein:
F_{k,u}(a, a+1) represents the virtual stress condition between the obstacle and the link connecting the a-th and (a+1)-th joint nodes when the arm is at the u-th path node of the k-th assembly path; d(r_{a,a+1}, r_next) represents the distance from the link r_{a,a+1} to the nearest obstacle next; δ represents a very small positive number; and r_next represents the radius of the nearest obstacle next;
If F_{k,u}(a, a+1) is greater than the stress threshold, a collision exists at the u-th path node of the k-th assembly path, and the u-th value of the obstacle avoidance detection vector is set to 1;
s54: traverse the obstacle avoidance detection vector of the k-th assembly path and replace each path node whose value is 1 with the node of the same order randomly selected from the remaining assembly paths, until the obstacle avoidance detection vector of every assembly path is all zeros;
s55: calculating the moving time of each assembly path, selecting the assembly path with the least moving time as output, and controlling the mechanical arm to move along the assembly path to reach the assembly position.
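Steps S51–S55 can be sketched as below. The inverse-clearance force form, the per-link collision geometry, and the move-time proxy (total joint-angle travel) are assumptions; the text only specifies the flag-and-replace repair and the minimum-move-time selection.

```python
# Sketch of S51-S55: paths are lists of joint-angle vectors; a virtual
# force flags colliding path nodes, flagged nodes are replaced from other
# candidate paths, and the fastest repaired path is selected.
import random

DELTA = 1e-3  # the very small positive number delta

def virtual_force(clearance, obstacle_radius):
    # assumed form: force grows as the link approaches the obstacle surface
    return 1.0 / max(clearance - obstacle_radius, DELTA)

def detect(path, clearance_of, obstacle_radius, force_threshold):
    # S52-S53: obstacle avoidance detection vector, 1 marks a collision
    return [1 if any(virtual_force(clearance_of(node, a), obstacle_radius)
                     > force_threshold for a in range(len(node) - 1))
            else 0 for node in path]

def repair_and_select(paths, clearance_of, obstacle_radius,
                      force_threshold, rng=random):
    # S54: replace flagged nodes with same-order nodes from other paths
    for k, path in enumerate(paths):
        flags = detect(path, clearance_of, obstacle_radius, force_threshold)
        while any(flags):
            u = flags.index(1)
            donor = rng.choice([p for i, p in enumerate(paths) if i != k])
            path[u] = donor[u]
            flags = detect(path, clearance_of, obstacle_radius, force_threshold)
    # S55: select the path with the least moving time (proxy: joint travel)
    def move_time(path):
        return sum(sum(abs(b - a) for a, b in zip(n1, n2))
                   for n1, n2 in zip(path, path[1:]))
    return min(paths, key=move_time)
```

With a clearance function that penalizes one node of one candidate path, that node is swapped out for the collision-free node of the other path and the repaired path is returned.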
Example 2:
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, which is used to implement a method for controlling assembly of a mechanical arm for visual migration.
The electronic device 1 may comprise a processor 10, a memory 11, a communication interface 13 and a bus, and may further comprise a computer program, such as program 12, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, including flash memory, a mobile hard disk, a multimedia card, a card memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may in other embodiments also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only for storing application software installed in the electronic device 1 and various types of data, such as codes of the program 12, but also for temporarily storing data that has been output or is to be output.
The processor 10 may be comprised of integrated circuits in some embodiments, for example, a single packaged integrated circuit, or may be comprised of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects respective parts of the entire electronic device using various interfaces and lines, executes or executes programs or modules (a program 12 for realizing arm assembly Control, etc.) stored in the memory 11, and invokes data stored in the memory 11 to perform various functions of the electronic device 1 and process data.
The communication interface 13 may comprise a wired interface and/or a wireless interface (e.g. WI-FI interface, bluetooth interface, etc.), typically used to establish a communication connection between the electronic device 1 and other electronic devices and to enable connection communication between internal components of the electronic device.
The bus may be a peripheral component interconnect standard (peripheral component interconnect, PCI) bus or an extended industry standard architecture (extended industry standard architecture, EISA) bus, among others. The bus may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable a connection communication between the memory 11 and at least one processor 10 etc.
Fig. 2 shows only an electronic device with certain components; it will be understood by those skilled in the art that the structure shown in fig. 2 does not limit the electronic device 1, which may comprise fewer or more components than shown, combine certain components, or arrange the components differently.
For example, although not shown, the electronic device 1 may further include a power source (such as a battery) for supplying power to each component, and preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device 1 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described herein.
The electronic device 1 may optionally further comprise a user interface, which may be a Display, an input unit, such as a Keyboard (Keyboard), or a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like. The display may also be referred to as a display screen or display unit, as appropriate, for displaying information processed in the electronic device 1 and for displaying a visual user interface.
It should be understood that the embodiments described are for illustrative purposes only and are not limited to this configuration in the scope of the patent application.
The program 12 stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed in the processor 10, may implement:
acquiring an image of equipment to be assembled by using a vision system of a mechanical arm, and automatically preprocessing the acquired image of the equipment to be assembled;
constructing an offline product registration model;
constructing a real-time product registration robust regulation model;
inputting the preprocessed equipment image to be assembled into an offline product registration model to obtain registration coincidence degree, if the registration coincidence degree is larger than a specified threshold value, assembling, otherwise, inputting the preprocessed equipment image to be assembled into a real-time product registration robust regulation model to obtain a corrected equipment image to be assembled, and inputting the corrected equipment image to be assembled into the offline product registration model;
and controlling the mechanical arm to carry out the assembly treatment of the equipment to be assembled according to the equipment image to be assembled which can be assembled.
Specifically, the specific implementation method of the above instruction by the processor 10 may refer to descriptions of related steps in the corresponding embodiments of fig. 1 to 2, which are not repeated herein.
It should be noted that, the foregoing reference numerals of the embodiments of the present invention are merely for describing the embodiments, and do not represent the advantages and disadvantages of the embodiments. And the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, apparatus, article or method that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.
Claims (7)
1. A method for controlling assembly of a visual-transition mechanical arm, the method comprising:
s1: acquiring an image of equipment to be assembled by using a vision system of a mechanical arm, and automatically preprocessing the acquired image of the equipment to be assembled;
s2: an offline product registration model is constructed, wherein the model takes a template registration image and an image of equipment to be assembled as input and takes registration coincidence degree as output;
s3: constructing a real-time product registration robust regulation model, wherein the model takes an image of equipment to be assembled as input and takes a corrected image of the equipment to be registered as output;
s4: inputting the preprocessed equipment image to be assembled into an offline product registration model to obtain registration coincidence degree, if the registration coincidence degree is larger than a specified threshold value, assembling, otherwise, inputting the preprocessed equipment image to be assembled into a real-time product registration robust regulation model to obtain a corrected equipment image to be assembled, and inputting the corrected equipment image to be assembled into the offline product registration model;
S5: and controlling the mechanical arm to carry out the assembly treatment of the equipment to be assembled according to the equipment image to be assembled which can be assembled.
2. The method for controlling assembly of a visual-migration mechanical arm according to claim 1, wherein the acquiring an image of equipment to be assembled in step S1 comprises:
acquiring the image of the equipment to be assembled using the vision system of the mechanical arm, the vision system being a camera mounted on the arm and the image representing a scene image of the area where the equipment to be assembled is located; the vision system automatically preprocesses the acquired image with graying, stretching and edge detection, the flow of which is as follows:
S11: take the maximum of the RGB color components of each pixel in the acquired image of the equipment to be assembled as the gray value of that pixel, obtaining a gray map of the equipment to be assembled;
S12: stretch the gray value of each pixel in the gray map:

g′(i, j) = 255 × (g(i, j) − MIN_g) / (MAX_g − MIN_g)

wherein:
g(i, j) represents the gray value of pixel (i, j) in the gray map, pixel (i, j) being the pixel in the i-th row and j-th column; the gray map has M pixel rows and N pixel columns;
MAX_g represents the maximum gray value in the gray map and MIN_g the minimum gray value;
g′(i, j) represents the gray value of pixel (i, j) after gray stretching;
S13: construct a Gaussian filter template of size 3 × 3 pixels and filter the stretched gray map with it; the constructed template is G_σ, where σ represents the scale of the Gaussian filter template and is set to 2. Each pixel of the stretched gray map is convolved with the Gaussian filter template, yielding the Gaussian-filtered gray map:
g″(i, j) = g′(i, j) * G_σ
wherein:
g″(i, j) represents the Gaussian filtering result of pixel (i, j); the Gaussian filtering results of all pixels form the Gaussian-filtered gray map;
S14: calculate the gradient value of each pixel in the Gaussian-filtered gray map:

grad(i, j) = √[(g″(i+1, j) − g″(i, j))² + (g″(i, j+1) − g″(i, j))²]

wherein:
grad(i, j) represents the gradient value of pixel (i, j) in the Gaussian-filtered gray map;
pixels with gradient values greater than a preset threshold are marked as edge pixels.
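For illustration only (not part of the claimed method), the preprocessing chain of steps S11 to S14 — max-RGB graying, linear gray stretching, 3 × 3 Gaussian filtering with σ = 2, and gradient-based edge marking — can be sketched in Python with NumPy; the function name and the edge threshold are assumptions:

```python
import numpy as np

def preprocess(rgb):
    """Sketch of the S11-S14 preprocessing chain (names and threshold are illustrative)."""
    # S11: gray value = max of the R, G, B components of each pixel
    gray = rgb.max(axis=2).astype(np.float64)
    # S12: linear gray-level stretch to the full 0..255 range
    lo, hi = gray.min(), gray.max()
    stretched = (gray - lo) / max(hi - lo, 1e-12) * 255.0
    # S13: 3x3 Gaussian filter template with scale sigma = 2
    sigma = 2.0
    ax = np.arange(-1, 2)
    k1d = np.exp(-ax**2 / (2.0 * sigma**2))
    kernel = np.outer(k1d, k1d)
    kernel /= kernel.sum()
    padded = np.pad(stretched, 1, mode="edge")
    smoothed = np.zeros_like(stretched)
    for di in range(3):                      # explicit 3x3 convolution
        for dj in range(3):
            smoothed += kernel[di, dj] * padded[di:di + stretched.shape[0],
                                                dj:dj + stretched.shape[1]]
    # S14: gradient magnitude; pixels above a preset threshold are edge pixels
    gi, gj = np.gradient(smoothed)
    grad = np.hypot(gi, gj)
    edges = grad > 30.0                      # placeholder threshold
    return stretched, smoothed, edges
```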
3. The method for controlling assembly of a visual-migration mechanical arm according to claim 1, wherein the constructing an offline product registration model in step S2 comprises:
constructing an offline product registration model that takes the template registration image and the image of the equipment to be assembled as input and outputs a registration fitness; the template registration image is the preprocessed image of equipment that was successfully assembled;
the offline product registration model comprises a to-be-assembled equipment identification module, a combined registration module and a registration fitness calculation module: the identification module identifies the equipment to be assembled in the equipment image; the combined registration module randomly combines the equipment to be assembled with the assembly position to obtain different combination results; the registration fitness calculation module calculates the registration fitness between each combination result and the template registration image, and the combination result with the largest registration fitness, together with that fitness, is selected as output;
the process of performing registration calculation on the image of the equipment to be assembled based on the offline registration model is as follows:
S21: divide the image of the equipment to be assembled into a number of sub-images along the image edges;
S22: input each sub-image and the template registration image into the to-be-assembled equipment identification module, which convolves the sub-image and the template registration image with two convolution layers each, obtaining initial convolution feature maps of the sub-image and of the template registration image; the convolution kernels are 5 × 5 pixels with a stride of 2;
S23: compute the inner product of the initial convolution feature map of each sub-image with that of the template registration image, and take the result as the semantic similarity of the sub-image to the template registration image;
S24: select the initial convolution feature map of the sub-image with the highest semantic similarity and slide a window over it to extract local feature maps, the window size being the size of the equipment to be assembled in the template registration image; compute the cosine similarity between each local feature map and the initial convolution feature map of the to-be-assembled equipment region in the template registration image, and select the local feature map with the highest cosine similarity as the feature map of the equipment to be assembled, the image region corresponding to this local feature map being the identified equipment to be assembled;
S25: the combined registration module converts the pixel coordinates of the equipment to be assembled identified in the equipment image into the world coordinate system, obtaining the equipment to be assembled represented as a three-dimensional image:

Z·(x, y, 1)^T = [f/d_1, 0, x_0; 0, f/d_2, y_0; 0, 0, 1]·[K_1 K_2]·(X*, Y*, Z*, 1)^T

wherein:
d_1, d_2 represent the lengths of a unit pixel in the horizontal and vertical directions of an image captured by the vision system;
x_0, y_0 represent the numbers of pixels, horizontally and vertically, between the center pixel coordinate of an image captured by the vision system and the pixel coordinate of the image origin;
f represents the focal length of the mechanical arm vision system;
K_1, K_2 represent the external parameters of the vision system, i.e. its position and rotation-direction parameters;
Z represents the Z-axis coordinate, in the camera coordinate system, of the equipment pixel at coordinates (x, y);
(X*, Y*, Z*) represents the mapping result of the equipment pixel coordinates (x, y) in the world coordinate system;
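Assuming the standard pinhole camera model (the claim defines d_1, d_2, x_0, y_0, f and the extrinsics K_1, K_2, but the full equation is not reproduced in this text), the pixel-to-world conversion of step S25 can be sketched as follows; `R` and `t` are illustrative stand-ins for the extrinsic parameters:

```python
import numpy as np

def pixel_to_world(x, y, Z, d1, d2, x0, y0, f, R, t):
    """Back-project a pixel (x, y) with known depth Z into world coordinates
    under the standard pinhole model (sketch of step S25; R, t stand in for
    the extrinsic parameters K_1, K_2)."""
    # camera coordinates: scale the centred pixel offsets by depth / focal length
    Xc = (x - x0) * d1 * Z / f
    Yc = (y - y0) * d2 * Z / f
    cam = np.array([Xc, Yc, Z])
    # world coordinates: invert the rigid transform cam = R @ world + t
    return R.T @ (cam - t)
```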
S26: randomly rotate the equipment to be assembled represented by the three-dimensional image to obtain a number of rotation results, and randomly combine the edges of the rotation results with the assembly position to obtain a number of combination results;
S27: the registration fitness calculation module calculates the curvature and position coordinates of the combined edge curves in each combination result, together with the curvature and position coordinates of the edge curves of the successful assembly result in the template registration image, obtaining the registration fitness of each combination result:

wherein:
β_s(m) represents the curvature of the s-th edge curve in the m-th combination; α_s represents the curvature of the edge curve in the template registration image's successful assembly result whose position coordinates are closest to the s-th edge curve of the m-th combination;
R_m represents the registration fitness of the m-th combination.
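The claim text does not reproduce the fitness formula itself, so the following is only one plausible reading of step S27: a score near 1 when the curvatures β_s(m) of a combination's edge curves match the corresponding template curvatures α_s, decaying as they diverge. The function name and the exponential form are assumptions:

```python
import numpy as np

def registration_fit(beta, alpha):
    """One plausible reading of the fitness R_m in step S27 (the patent's
    exact formula is not reproduced here): combinations whose edge-curve
    curvatures beta match the template curvatures alpha score close to 1."""
    beta = np.asarray(beta, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    return float(np.exp(-np.abs(beta - alpha).mean()))
```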
4. The method for controlling assembly of a visual-migration mechanical arm according to claim 3, wherein the constructing a real-time product registration robust regulation model in step S3 comprises:
constructing a real-time product registration robust regulation model that takes the image of the equipment to be assembled as input and outputs a corrected image of the equipment to be registered;
the regulation flow based on the real-time product registration robust regulation model is as follows:
S31: select the sub-image with the highest semantic similarity in the image of the equipment to be assembled as the image to be corrected;
S32: construct an image correction code in the form of a sliding window, the window size being the number of pixels in each row of the image to be corrected; the constructed image correction code is:
wherein:
h_n(q) represents the correction code of the q-th pixel of the n-th row, and Q_n represents the total number of pixels in the n-th row;
S33: correct each row of pixels in the image to be corrected using the image correction code to obtain the corrected image, the correction result of the q-th pixel in the n-th row being:

g″′_{n,q} = ω·g″_{n,q} + (1 − ω)·h_n(q)

wherein:
g″_{n,q} represents the Gaussian filtering result of the q-th pixel in the n-th row of the image to be corrected; ω represents the correction factor, set to 0.82; g″′_{n,q} represents the correction result of the q-th pixel in the n-th row of the image to be corrected;
S34: add the corrected image to the image of the equipment to be assembled as the corrected image of the equipment to be assembled.
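The per-pixel correction of step S33 is a fixed-weight blend of the Gaussian-filtered value and the correction code. A minimal NumPy sketch (the construction of h_n(q) in step S32 is not reproduced here, so `correction_code` is simply passed in):

```python
import numpy as np

def correct_rows(filtered, correction_code, omega=0.82):
    """Apply the per-pixel blend g''' = omega * g'' + (1 - omega) * h_n(q)
    from step S33; `correction_code` plays the role of h_n(q)."""
    return omega * filtered + (1.0 - omega) * correction_code
```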
5. The method for controlling assembly of a visual-migration mechanical arm according to claim 1, wherein in step S4 the image of the equipment to be assembled is input into the offline product registration model to obtain the registration fitness, and assembly can proceed if the registration fitness is greater than a specified threshold, comprising:
inputting the image of the equipment to be assembled into the offline product registration model to obtain the registration fitness of the different registration combination results; if the maximum registration fitness is greater than the specified threshold, assembly can proceed, the assemblable equipment image is obtained, and the rotation angle corresponding to the maximum registration fitness is recorded.
6. The method for controlling assembly of a visual-migration mechanical arm according to claim 5, wherein in step S4 the image of the equipment to be assembled whose registration fitness is below the specified threshold is input into the real-time product registration robust regulation model to obtain the corrected image of the equipment to be assembled, comprising:
inputting the image of the equipment to be assembled whose registration fitness is below the specified threshold into the real-time product registration robust regulation model for correction; inputting the corrected image into the offline product registration model to obtain the registration fitness of the different registration combination results; if the maximum registration fitness is greater than the specified threshold, assembly can proceed, the assemblable equipment image is obtained and the rotation angle corresponding to the maximum registration fitness is recorded; otherwise, correction is performed again.
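The decision flow of claims 5 and 6 — register, compare the best fitness against the threshold, correct and retry on failure — can be sketched as a simple loop; the model callables and the retry cap are illustrative assumptions, not the patent's implementation:

```python
def assembly_decision(image, offline_model, correction_model,
                      threshold, max_rounds=5):
    """Sketch of the S4 control flow: register, and if the best registration
    fitness is below the threshold, correct the image and retry (all
    callables are illustrative stand-ins for the patent's models)."""
    for _ in range(max_rounds):
        fitness, angle = offline_model(image)
        if fitness > threshold:
            return image, angle          # assembly can proceed
        image = correction_model(image)  # otherwise correct and retry
    return None, None                    # give up after max_rounds
```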
7. The method for controlling assembly of a visual-migration mechanical arm according to claim 6, wherein the controlling the mechanical arm in step S5 to perform assembly of the equipment to be assembled based on the assemblable equipment image comprises:
obtaining, from the assemblable equipment image and the rotation angle corresponding to the maximum registration fitness, the rotation angle of the equipment at assembly time; the mechanical arm records this angle and grasps the equipment to be assembled;
the mechanical arm acquires a map of the equipment to be assembled and the assembly position with its vision system; an assembly path is generated with an obstacle avoidance algorithm; the arm is controlled to move along the assembly path to the assembly position and to rotate the equipment by the recorded angle, completing the vision-based assembly control of the mechanical arm;
the assembly path generation flow based on the obstacle avoidance algorithm is as follows:
S51: initialize a number of assembly paths, the k-th assembly path having the form:

L_k = [L_k(1), L_k(2), ..., L_k(u), ..., L_k(U)]
L_k(u) = [θ_{k,u}(1), θ_{k,u}(2), ..., θ_{k,u}(V)]^T

wherein:
L_k(u) represents the position of the u-th path node in the k-th assembly path, and U represents the total number of path nodes;
θ_{k,u}(v) represents the angle of the v-th joint node of the mechanical arm when the arm is at the position of the u-th path node in the k-th assembly path, and V represents the total number of joint nodes of the mechanical arm;
S52: initialize the obstacle avoidance detection vector of the k-th assembly path, of dimension U: [0, 0, ..., 0];
S53: construct a virtual stress condition function between obstacles and the links connecting adjacent joint nodes of the mechanical arm:
wherein:
F_{k,u}(a, a+1) represents the virtual stress condition between the obstacle and the link connecting the a-th and (a+1)-th joint nodes when the arm is at the position of the u-th path node in the k-th assembly path; d(r_{a,a+1}, next) represents the distance from the link r_{a,a+1} to the nearest obstacle next; δ represents a very small positive number; r_next represents the radius of the nearest obstacle next;
if F_{k,u}(a, a+1) is greater than the stress threshold, a collision exists at the u-th path node of the k-th assembly path, and the u-th value of the obstacle avoidance detection vector is set to 1;
S54: traverse the obstacle avoidance detection vector of the k-th assembly path; for every node whose value is 1, randomly select the path node of the same index from one of the remaining assembly paths as a replacement, until the obstacle avoidance detection vector of every assembly path is all zeros;
S55: compute the moving time of each assembly path, select the path with the least moving time as output, and control the mechanical arm to move along it to the assembly position.
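Steps S52 and S53 amount to building a per-node collision flag vector for a path. The sketch below replaces the virtual-force test F_{k,u} with a simple clearance check against point obstacles, so all names and the distance test are illustrative assumptions:

```python
import numpy as np

def collision_vector(path_points, obstacles, clearance):
    """Sketch of steps S52-S53: flag (with 1) each path node whose distance
    to the surface of the nearest obstacle is below `clearance`; obstacles
    are (center, radius) pairs (a simplified stand-in for F_{k,u})."""
    flags = []
    for p in path_points:
        # signed distance to the nearest obstacle surface
        d = min(np.linalg.norm(np.asarray(p) - np.asarray(c)) - r
                for c, r in obstacles)
        flags.append(1 if d < clearance else 0)
    return flags
```

A planner in the style of step S54 would then swap any flagged node for the same-index node of another candidate path until every flag vector is all zeros, and pick the fastest remaining path (step S55).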
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310526714.5A CN116460851A (en) | 2023-05-11 | 2023-05-11 | Mechanical arm assembly control method for visual migration |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116460851A true CN116460851A (en) | 2023-07-21 |
Family
ID=87180869
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310526714.5A Pending CN116460851A (en) | 2023-05-11 | 2023-05-11 | Mechanical arm assembly control method for visual migration |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116460851A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117798936A (en) * | 2024-02-29 | 2024-04-02 | 卡奥斯工业智能研究院(青岛)有限公司 | Control method and device for mechanical arm cluster, electronic equipment and storage medium |
CN117798936B (en) * | 2024-02-29 | 2024-06-07 | 卡奥斯工业智能研究院(青岛)有限公司 | Control method and device for mechanical arm cluster, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110363816B (en) | Mobile robot environment semantic mapping method based on deep learning | |
JP6946831B2 (en) | Information processing device and estimation method for estimating the line-of-sight direction of a person, and learning device and learning method | |
CN107953329B (en) | Object recognition and attitude estimation method and device and mechanical arm grabbing system | |
JP2885823B2 (en) | Visual recognition device | |
US10366307B2 (en) | Coarse-to-fine search method, image processing device and recording medium | |
CN110363817B (en) | Target pose estimation method, electronic device, and medium | |
CN109886124B (en) | Non-texture metal part grabbing method based on wire harness description subimage matching | |
CN111507908B (en) | Image correction processing method, device, storage medium and computer equipment | |
WO2019171628A1 (en) | Image processing system and image processing method | |
CN108010082B (en) | Geometric matching method | |
CN112947458B (en) | Robot accurate grabbing method based on multi-mode information and computer readable medium | |
CN112597955A (en) | Single-stage multi-person attitude estimation method based on feature pyramid network | |
CN109903323B (en) | Training method and device for transparent object recognition, storage medium and terminal | |
US11562489B2 (en) | Pixel-wise hand segmentation of multi-modal hand activity video dataset | |
CN115063768A (en) | Three-dimensional target detection method, encoder and decoder | |
WO2022134842A1 (en) | Method and apparatus for identifying building features | |
CN116460851A (en) | Mechanical arm assembly control method for visual migration | |
CN115147488B (en) | Workpiece pose estimation method and grabbing system based on dense prediction | |
CN112053441A (en) | Full-automatic layout recovery method for indoor fisheye image | |
CN115972198B (en) | Mechanical arm visual grabbing method and device under incomplete information condition | |
CN111611917A (en) | Model training method, feature point detection device, feature point detection equipment and storage medium | |
CN114083533B (en) | Data processing method and device based on mechanical arm | |
US11551379B2 (en) | Learning template representation libraries | |
Zhao et al. | A Computationally Efficient Visual SLAM for Low-light and Low-texture Environments Based on Neural Networks | |
CN114926753B (en) | Rapid target scene information extraction method under condition of massive images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||