CN109903279B - Automatic teaching method and device for welding seam movement track - Google Patents

Automatic teaching method and device for welding seam movement track

Info

Publication number
CN109903279B
CN109903279B (application CN201910140444.8A)
Authority
CN
China
Prior art keywords
image
point cloud
color image
neural network
cloud image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910140444.8A
Other languages
Chinese (zh)
Other versions
CN109903279A (en)
Inventor
王柯
戚骁亚
齐立哲
刘建都
刘旭
李梦炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Deep Singularity Technology Co ltd
Original Assignee
Beijing Deep Singularity Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Deep Singularity Technology Co ltd
Priority to CN201910140444.8A
Publication of CN109903279A
Application granted
Publication of CN109903279B
Legal status: Active (Current)

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to an automatic teaching method and device for a welding seam movement track. The method comprises the following steps: acquiring a color image and a three-dimensional point cloud image of an object to be welded; processing the color image to locate a welding seam area; extracting a local point cloud image of the welding seam area from the three-dimensional point cloud image according to the correspondence between the color image and the three-dimensional point cloud image; and determining a starting point and an end point of the welding seam from the extracted local point cloud image. For straight fillet-weld objects with different placement postures and different welding seam shapes, the method can automatically extract the welding seam to be welded, avoiding a large amount of repeated manual teaching operations in the welding process. By combining artificial intelligence with traditional methods, the method greatly enhances the accuracy and robustness of welding seam detection and positioning.

Description

Automatic teaching method and device for welding seam movement track
Technical Field
The application relates to the technical field of robot control, in particular to an automatic teaching method and device for a welding seam movement track.
Background
Welding robots are widely used in modern industrial production, and their share of all industrial robots can exceed 40 percent.
When a welding robot performs a job, commands must be given to the robot in advance to specify the operation it should perform and the specific content of the job; this command process is referred to as teaching the robot, or programming the robot. The teaching content is generally stored in the robot's control device, and by reproducing the stored content the robot can carry out the operations and job content a person requests of it.
The teaching content mainly comprises two parts: offline teaching of the robot's welding seam movement track, and offline teaching of the robot's operating conditions. Offline teaching of the welding seam movement track teaches the movement track of the welding wire tip for completing a given job, including the movement type and movement speed. Teaching of the robot's operating conditions mainly serves to obtain good welding quality; the welding conditions taught include the material and thickness of the metal to be welded, the posture of the welding torch corresponding to the shape of the weld, welding parameters, the control method of the welding power source, and the like.
In the related art, offline teaching of the robot's welding seam motion trajectory must be performed manually for every different welding seam. When the structure of the welding workpiece or the shape and position of the welding seam change significantly, offline teaching consumes a great deal of labor and time. Moreover, because of machining errors, workpieces and welding seams of the same nominal structure differ slightly, so using an offline-taught track may affect welding quality.
In recent years, deep learning has advanced rapidly in fields such as image recognition and stereoscopic vision. Traditional welding robots need to adopt these latest AI technologies to raise labor productivity and improve welding quality.
Disclosure of Invention
To overcome at least some of the problems of the related art, the present application provides a method and apparatus for automatically teaching a movement trajectory of a weld.
According to a first aspect of embodiments of the present application, there is provided a method for automatically teaching a welding seam movement track, including:
acquiring a color image and a three-dimensional point cloud image of an object to be welded;
processing the color image to locate a weld joint area;
extracting a local point cloud image of a welding seam region from the three-dimensional point cloud image according to the corresponding relation between the color image and the three-dimensional point cloud image;
and determining a starting point and an end point of a welding seam according to the extracted local point cloud image.
Further, the processing the color image includes:
and processing the color image by using a convolutional neural network, wherein the convolutional neural network is a convolutional neural network model trained in advance.
Further, the processing the color image using a convolutional neural network includes:
processing the color image through a first neural network to obtain a rectangular area position of a welding seam area in the color image;
and processing the image in the rectangular area through a second neural network to obtain a welding seam area.
Further, the first neural network is an object detection neural network, and the second neural network is an object segmentation neural network.
Further, the correspondence between the color image and the three-dimensional point cloud image is a correspondence between pixel coordinates.
Further, the extracting a local point cloud image of a weld region from the three-dimensional point cloud image includes:
converting the coordinates of the pixels of the color image into a homogeneous coordinate form;
converting homogeneous coordinates of pixels of the color image into space coordinates according to the corresponding relation between the color image and the depth image;
converting the space coordinates into homogeneous coordinates of pixels of the three-dimensional point cloud image according to parameters of a camera;
extracting a local point cloud image of a welding seam region according to homogeneous coordinates of pixels of the three-dimensional point cloud image;
the depth image and the three-dimensional point cloud image are acquired through the same camera.
Further, the determining the starting point and the end point of the welding seam according to the extracted local point cloud image comprises:
denoising the local point cloud image;
performing plane segmentation on the denoised image to obtain two plane equations;
calculating an equation of an intersection line of the two plane equations, and converting the equation into a vector corresponding to the intersection line;
locating the maximum and minimum points in the local point cloud image along the vector direction;
calculating two planes which respectively pass through the maximum value point and the minimum value point by taking the vector as a normal vector;
calculating two intersection points of the two planes and the intersection line;
and taking, of the two intersection points, the one closer to the welding robot along the X coordinate direction as the starting point of the welding seam, and the other intersection point as the end point of the welding seam.
Further, the denoising processing of the local point cloud image includes:
filtering the local point cloud image to remove invalid points and outliers in the local point cloud image;
the filtered image is down sampled.
Further, the performing plane segmentation on the denoised image includes:
the point cloud image is segmented into two planes using the ransac algorithm.
According to a second aspect of the embodiments of the present application, there is provided an automatic teaching apparatus for a weld movement trajectory, including:
the acquisition module is used for acquiring a color image and a three-dimensional point cloud image of an object to be welded;
the positioning module is used for processing the color image and positioning a welding seam area;
the extraction module is used for extracting a local point cloud image of a welding seam region from the three-dimensional point cloud image according to the corresponding relation between the color image and the three-dimensional point cloud image;
and the determining module is used for determining the starting point and the end point of the welding seam according to the extracted local point cloud image.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
the method can automatically extract the welding line to be welded aiming at the angle welding straight welding objects with different placing postures and different welding line shapes, and avoids a large amount of manual teaching repeated operations in the welding process. The method greatly enhances the accuracy and robustness of weld detection and positioning by using a technical scheme combining artificial intelligence and a traditional method.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow chart illustrating a method for automatic teaching of weld movement trajectories according to an exemplary embodiment.
FIG. 2 is a diagram illustrating a depth image and its data structure according to an exemplary embodiment.
FIG. 3 is a diagram illustrating a three-dimensional point cloud image and its data structure according to an exemplary embodiment.
FIG. 4 is a schematic diagram of the structure of the object detection neural network (Faster R-CNN).
FIG. 5 is a schematic diagram of the structure of the object segmentation neural network (Mask R-CNN).
FIG. 6 is a graph showing the results of Faster R-CNN detection.
FIG. 7 is a graph showing the results of Mask R-CNN segmentation.
FIG. 8 is a block diagram illustrating an automatic weld movement trajectory teaching apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
FIG. 1 is a flow chart illustrating a method for automatic teaching of weld movement trajectories according to an exemplary embodiment. As shown, the method comprises the following steps:
step S1: acquiring a color image and a three-dimensional point cloud image of an object to be welded;
step S2: processing the color image to locate a weld joint area;
and step S3: extracting a local point cloud image of a weld joint region from the three-dimensional point cloud image according to the corresponding relation between the color image and the three-dimensional point cloud image;
and step S4: and determining a starting point and an end point of a welding seam according to the extracted local point cloud image.
The invention automatically detects and locates the welding seam using stereoscopic vision and artificial intelligence technology, thereby completing automatic teaching of the welding seam.
For straight fillet-weld objects with different placement postures and different welding seam shapes, the method can automatically extract the welding seam to be welded, avoiding a large amount of repeated manual teaching operations in the welding process. By combining artificial intelligence with traditional methods, the method greatly enhances the accuracy and robustness of welding seam detection and positioning.
In step S1, a suitable vision sensor is required to acquire the images. In some embodiments of the invention, the vision sensor used is a stereo camera; the data available from the stereo camera include RGB color images, depth images, and three-dimensional point cloud images. It should be noted that the stereo camera comprises an RGB camera and a 3D camera, where the RGB camera acquires the color image and the 3D camera acquires the depth image and the three-dimensional point cloud image.
FIG. 2 is a diagram illustrating a depth image and its data structure according to an exemplary embodiment. The data structure of the depth image is a matrix indexed by x and y, where x and y represent the planar position of a point in the depth map and the stored value z is the depth itself. Note that the pixel positions of the depth image correspond one-to-one with those of the color image. After the stereo camera acquires the color image and the depth image, the pixels of the two images are automatically put into correspondence inside the stereo camera.
FIG. 3 is a diagram illustrating a three-dimensional point cloud image and its data structure according to an exemplary embodiment. The three-dimensional point cloud image is similar to the depth image, but the stored data are [x, y, z] coordinates.
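As a point of reference, here is a minimal NumPy sketch of the two data structures just described; the image dimensions are illustrative assumptions, not values from the patent:

```python
import numpy as np

# A depth image stores one depth value z per (x, y) pixel position,
# so it is a single H x W matrix; the three-dimensional point cloud
# stores full [x, y, z] coordinates per point, i.e. an N x 3 array.
H, W = 480, 640
depth_image = np.zeros((H, W), dtype=np.float32)      # z = depth_image[y, x]
point_cloud = np.zeros((H * W, 3), dtype=np.float32)  # each row is [x, y, z]
```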
The invention is designed for straight welding seams formed by fillet (angle) joints. The stereo camera acquires sensing data for welding workpieces of various shapes and changeable placement positions; from these data the straight fillet welding seam is automatically detected and located, the starting point and end point of the welding seam are obtained, the position information for the welding robot's torch to move along the welding seam is generated, and automatic teaching of the welding seam is completed. The general idea of the invention is as follows:
(1) First, detect the welding workpiece using a convolutional neural network. For workpieces of different shapes, a scheme combining object detection and object segmentation neural networks is adopted to extract key features of the workpiece and complete the detection and preliminary localization of the welding seam area of the welding workpiece in the image.
(2) Then, further segment the welding seam three-dimensional point cloud image within the ROI, using a technical scheme that combines a deep neural network with traditional machine learning methods.
(3) Finally, complete the accurate localization of the welding seam to be welded according to the topological structure and prior knowledge of the welding seam, obtaining the position information of its starting point and end point.
The process proceeds layer by layer from the global to the local, progressively completing detection and localization of the welding seam in the welding workpiece and achieving automatic teaching of the welding seam for the welding robot.
Further, the processing the color image includes:
and processing the color image by using a convolutional neural network, wherein the convolutional neural network is a pre-trained convolutional neural network model.
Further, the processing the color image using a convolutional neural network includes:
processing the color image through a first neural network to obtain a rectangular area position of a welding seam area in the color image;
and processing the image in the rectangular area through a second neural network to obtain a welding seam area.
Further, the first neural network is an object detection neural network, and the second neural network is an object segmentation neural network.
The extraction process of the welding seam area is as follows:
the technical scheme that an object detection neural network and an object segmentation neural network are combined is adopted for weld area extraction, firstly, an Faster-rcnn network is adopted to detect a weld area in an RGB color image, and a rectangular area position of the weld area in the image is obtained; then, the weld region is correctly segmented in the matrix region image obtained above by using a Mask-rcnn network. The Faster-rcnn and the Mask-rcnn networks are not the subject of the invention, and the extraction of the weld region by applying the two steps is the subject of the invention, and the neural network structures of the Faster-rcnn and the Mask-rcnn are shown in fig. 4 and 5.
The result of Faster R-CNN detection is shown in FIG. 6; the area inside the dotted-line frame is the result area output by the detection.
The result of Mask R-CNN segmentation is shown in FIG. 7; the region inside the dotted-line frame is the result region output by the segmentation.
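For illustration, a minimal sketch of this two-stage extraction using off-the-shelf torchvision detection models follows. The generic pretrained weights (stand-ins for models fine-tuned on weld images), the mask threshold, and the helper name extract_weld_region are assumptions, not the patent's actual training setup:

```python
import torch
import torchvision

# Stage 1 detector and stage 2 segmenter; in practice both would be
# fine-tuned on annotated weld images rather than loaded with the
# generic pretrained weights used here.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
segmenter = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

def extract_weld_region(rgb: torch.Tensor):
    """rgb: float tensor of shape (3, H, W) with values in [0, 1]."""
    with torch.no_grad():
        det = detector([rgb])[0]                         # stage 1: rectangular region
        if len(det["boxes"]) == 0:
            return None
        x0, y0, x1, y1 = det["boxes"][0].int().tolist()  # highest-scoring box
        crop = rgb[:, y0:y1, x0:x1]
        seg = segmenter([crop])[0]                       # stage 2: per-pixel weld mask
        if len(seg["masks"]) == 0:
            return None
        mask = seg["masks"][0, 0] > 0.5                  # boolean mask within the crop
    return (x0, y0, x1, y1), mask
```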
Further, the correspondence between the color image and the three-dimensional point cloud image is a correspondence between pixel coordinates.
Further, the extracting of the local point cloud image of the weld region in the three-dimensional point cloud image includes:
converting the coordinates of the pixels of the color image into a homogeneous coordinate form;
converting homogeneous coordinates of pixels of the color image into space coordinates according to the corresponding relation between the color image and the depth image;
converting the space coordinates into homogeneous coordinates of pixels of the three-dimensional point cloud image according to parameters of a camera;
extracting a local point cloud image of a welding seam region according to the homogeneous coordinates of the pixels of the three-dimensional point cloud image;
the depth image and the three-dimensional point cloud image are acquired through the same camera.
Further, the determining a start point and an end point of a weld according to the extracted local point cloud image includes:
denoising the local point cloud image;
performing plane segmentation on the denoised image to obtain two plane equations;
calculating an equation of an intersection line of the two plane equations, and converting the equation into a vector corresponding to the intersection line;
locating the maximum and minimum points in the local point cloud image along the vector direction;
calculating two planes which respectively pass through the maximum value point and the minimum value point by taking the vector as a normal vector;
calculating two intersection points of the two planes and the intersection line;
and taking, of the two intersection points, the one closer to the welding robot along the X coordinate direction as the starting point of the welding seam, and the other intersection point as the end point of the welding seam.
Further, the denoising processing of the local point cloud image includes:
filtering the local point cloud image to remove invalid points and outliers in the local point cloud image;
the filtered image is down sampled.
Further, the performing plane segmentation on the denoised image includes:
the point cloud image is segmented into two planes using the ransac algorithm.
After the welding seam area is extracted, the line equation of the welding seam is further extracted within the region obtained by segmentation in the RGB color image. The specific process is as follows:
(1) Determine the corresponding weld joint area in the three-dimensional point cloud image according to the weld joint area obtained by segmentation in the RGB color image. Let (x_0, y_0) be a pixel coordinate in the weld area of the RGB color image, written in homogeneous form as P_RGB = [x_0, y_0, 1.0]. Let T_RGB be the intrinsic matrix of the RGB camera, T_RGB^{-1} its inverse, and d_0 the depth value at the corresponding position in the depth image. The pixel coordinate is then converted into a spatial coordinate P_RGB3D relative to the RGB camera as follows:
P_RGB3D = d_0 · T_RGB^{-1} · P_RGB
The resulting P_RGB3D has the form [x_1, y_1, z_1]; convert it to the homogeneous form P_RGB3D = [x_1, y_1, z_1, 1.0]. With T_RGB23D the extrinsic matrix from the RGB camera to the 3D camera, the spatial coordinate P_3D3D of the point relative to the 3D camera is obtained as follows:
P_3D3D = T_RGB23D · P_RGB3D
The resulting P_3D3D has the form [x_2, y_2, z_2, 1.0]; convert it to the non-homogeneous form P_3D3D = [x_2, y_2, z_2]. With T_3D the intrinsic matrix of the 3D camera, the homogeneous coordinate P_3D of the corresponding pixel in the three-dimensional point cloud image is obtained as follows:
P_3D = T_3D · P_3D3D
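A NumPy sketch of this conversion chain follows; the calibration matrices T_rgb (RGB intrinsics), T_rgb2_3d (RGB-to-3D extrinsics), and T_3d (3D-camera intrinsics) are assumed to come from the stereo camera's calibration and are placeholders here:

```python
import numpy as np

def rgb_pixel_to_cloud_pixel(x0, y0, d0, T_rgb, T_rgb2_3d, T_3d):
    p_rgb = np.array([x0, y0, 1.0])              # homogeneous pixel coordinate
    p_rgb3d = d0 * np.linalg.inv(T_rgb) @ p_rgb  # spatial point relative to RGB camera
    p_rgb3d_h = np.append(p_rgb3d, 1.0)          # to homogeneous form
    p_3d3d = (T_rgb2_3d @ p_rgb3d_h)[:3]         # spatial point relative to 3D camera
    p_3d = T_3d @ p_3d3d                         # homogeneous point-cloud pixel coordinate
    return p_3d                                  # p_3d / p_3d[2] yields the pixel indices
```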
(2) Filter the obtained three-dimensional point cloud image area to remove invalid points and outliers, and down-sample the point cloud region so that the subsequent steps complete more quickly and the real-time performance of the algorithm is ensured.
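A hedged sketch of this denoising step using the Open3D point-cloud library; the neighbor count, outlier threshold, and voxel size are illustrative assumptions rather than values from the patent:

```python
import open3d as o3d

def denoise_region(pcd: o3d.geometry.PointCloud) -> o3d.geometry.PointCloud:
    pcd = pcd.remove_non_finite_points()            # drop invalid (NaN/infinite) points
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)  # drop outliers
    return pcd.voxel_down_sample(voxel_size=0.002)  # down-sample for speed
```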
(3) Perform plane segmentation on the down-sampled point cloud image, segmenting the region point cloud into two planes using the RANSAC algorithm to obtain two plane equations (steps (3)-(7) are sketched in code after this list).
(4) Calculate the equation of the intersection line of the two planes and convert it into the corresponding direction vector.
(5) Find the maximum and minimum value points of the point cloud region along the vector direction, and construct the two planes through these points that take the intersection-line vector as their normal vector.
(6) Compute the intersection points of the maximum-point plane and the minimum-point plane with the intersection line; these two intersection points are the two vertices of the straight fillet welding seam.
(7) Of these two vertices, the point closer to the welding robot along the X coordinate direction is taken as the starting point of the weld, and the other point as the end point of the weld.
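Steps (3) through (7) can be sketched with Open3D and NumPy as follows; the RANSAC parameters and the assumption that the robot sits on the smaller-X side are illustrative, not taken from the patent:

```python
import numpy as np
import open3d as o3d

def weld_endpoints(pcd: o3d.geometry.PointCloud):
    # (3) split the region into two planes with RANSAC; each plane is [a, b, c, d]
    plane1, inliers = pcd.segment_plane(distance_threshold=0.002,
                                        ransac_n=3, num_iterations=1000)
    rest = pcd.select_by_index(inliers, invert=True)
    plane2, _ = rest.segment_plane(distance_threshold=0.002,
                                   ransac_n=3, num_iterations=1000)

    # (4) the intersection line's direction is the cross product of the plane normals
    n1, n2 = np.array(plane1[:3]), np.array(plane2[:3])
    v = np.cross(n1, n2)
    v /= np.linalg.norm(v)
    # a point on the line: lies on both planes, with zero component along v
    A = np.stack([n1, n2, v])
    p0 = np.linalg.solve(A, np.array([-plane1[3], -plane2[3], 0.0]))

    # (5)-(6) extreme projections along v define the two bounding planes;
    # their intersections with the line are the weld seam vertices
    proj = np.asarray(pcd.points) @ v
    ends = [p0 + (t - p0 @ v) * v for t in (proj.min(), proj.max())]

    # (7) the vertex nearer the robot along X (assumed to be smaller X) is the start
    start, end = sorted(ends, key=lambda p: p[0])
    return start, end
```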
FIG. 8 is a block diagram illustrating an automatic weld movement trajectory teaching apparatus according to an exemplary embodiment. Referring to FIG. 8, the apparatus includes:
the acquisition module is used for acquiring a color image and a three-dimensional point cloud image of an object to be welded;
the positioning module is used for processing the color image and positioning a welding seam area;
the extraction module is used for extracting a local point cloud image of a welding seam region from the three-dimensional point cloud image according to the corresponding relation between the color image and the three-dimensional point cloud image;
and the determining module is used for determining the starting point and the end point of the welding seam according to the extracted local point cloud image.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The invention automatically detects and locates the welding seam using stereoscopic vision and artificial intelligence technology, thereby completing automatic teaching of the welding seam.
(1) For straight fillet-weld objects with different placement postures and different welding seam shapes, the welding seam to be welded can be automatically extracted, avoiding a large amount of repeated manual teaching operations in the welding process.
(2) The technical scheme combining artificial intelligence with traditional methods greatly enhances the accuracy and robustness of welding seam detection and positioning.
(3) The layer-by-layer, global-to-local method of extracting the welding seam satisfies the basic requirements of accurate positioning and is effective in industrial application.
It is understood that the same or similar parts in the above embodiments may be mutually referred to, and the same or similar parts in other embodiments may be referred to for the content which is not described in detail in some embodiments.
It should be noted that, in the description of the present application, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present application, the meaning of "a plurality" means at least two unless otherwise specified.
Any process or method description in a flow chart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. Alternate implementations are included within the scope of the preferred embodiments of the present application, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following technologies, which are well known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (5)

1. An automatic teaching method for a welding seam movement track is characterized by comprising the following steps:
step S1: acquiring a color image, a depth image, and a three-dimensional point cloud image of an object to be welded having a straight fillet weld, using a stereo camera; the stereo camera comprises an RGB camera and a 3D camera, the RGB camera acquiring the color image and the 3D camera acquiring the depth image and the three-dimensional point cloud image; after the stereo camera acquires the color image and the depth image, the pixels of the two images are automatically put into correspondence inside the stereo camera;
step S2: processing the color image to locate a weld joint area, comprising:
processing the color image by using a convolutional neural network, wherein the convolutional neural network is a convolutional neural network model trained in advance;
the processing the color image using a convolutional neural network includes:
processing the color image through a first neural network to obtain a rectangular area position of a welding seam area in the color image;
processing the image in the rectangular area through a second neural network to obtain a welding seam area;
the first neural network is an object detection neural network, and the second neural network is an object segmentation neural network;
and step S3: extracting a local point cloud image of a welding seam region from the three-dimensional point cloud image according to the correspondence between the color image and the three-dimensional point cloud image, comprising:
when determining a weld joint region in a corresponding three-dimensional point cloud image according to the weld joint region obtained by segmentation in the color image, converting the coordinates of the pixels of the color image of the RGB camera of the stereo camera into homogeneous coordinates of the pixels in the corresponding three-dimensional point cloud image relative to the 3D camera based on the one-to-one correspondence between the pixel positions of the depth image and the color image;
denoising the local point cloud image, comprising: filtering the local point cloud image to remove invalid points and outliers in the local point cloud image; down-sampling the filtered image;
and step S4: determining a starting point and an end point of a welding seam according to the extracted local point cloud image, comprising:
performing plane segmentation on the denoised image to obtain two plane equations;
calculating an equation of an intersection line of the two plane equations, and converting the equation into a vector corresponding to the intersection line;
locating the maximum and minimum points in the local point cloud image along the vector direction;
calculating two planes which respectively pass through the maximum value point and the minimum value point by taking the vector as a normal vector;
calculating two intersection points of the two planes and the intersection line;
and taking, of the two intersection points, the one closer to the welding robot along the X coordinate direction as the starting point of the welding seam, and the other intersection point as the end point of the welding seam.
2. The method of claim 1, wherein the correspondence between the color image and the three-dimensional point cloud image is: correspondence between pixel coordinates.
3. The method of claim 2, wherein the extracting a local point cloud image of the weld region from the three-dimensional point cloud image comprises:
converting the coordinates of the pixels of the color image into a homogeneous coordinate form;
converting homogeneous coordinates of pixels of the color image into space coordinates according to the corresponding relation between the color image and the depth image;
converting the space coordinates into homogeneous coordinates of pixels of the three-dimensional point cloud image according to parameters of a camera;
and extracting a local point cloud image of a welding seam region according to the homogeneous coordinates of the pixels of the three-dimensional point cloud image.
4. The method of claim 1, wherein the performing plane segmentation on the denoised image comprises:
the point cloud image is segmented into two planes using the ransac algorithm.
5. An automatic teaching apparatus for a weld movement trajectory implementing the automatic teaching method according to any one of claims 1 to 4, comprising:
the acquisition module is used for acquiring a color image and a three-dimensional point cloud image of an object to be welded;
the positioning module is used for processing the color image and positioning a welding seam area; the processing the color image comprises: processing the color image by using a convolutional neural network, wherein the convolutional neural network is a convolutional neural network model trained in advance; the processing the color image using a convolutional neural network includes: processing the color image through a first neural network to obtain a rectangular region position of a welding seam region in the color image; processing the image in the rectangular area through a second neural network to obtain a welding seam area; the first neural network is an object detection neural network, and the second neural network is an object segmentation neural network;
the extraction module is used for extracting a local point cloud image of a welding seam region from the three-dimensional point cloud image according to the corresponding relation between the color image and the three-dimensional point cloud image;
and the determining module is used for determining the starting point and the end point of the welding seam according to the extracted local point cloud image.
CN201910140444.8A 2019-02-25 2019-02-25 Automatic teaching method and device for welding seam movement track Active CN109903279B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910140444.8A CN109903279B (en) 2019-02-25 2019-02-25 Automatic teaching method and device for welding seam movement track

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910140444.8A CN109903279B (en) 2019-02-25 2019-02-25 Automatic teaching method and device for welding seam movement track

Publications (2)

Publication Number Publication Date
CN109903279A CN109903279A (en) 2019-06-18
CN109903279B true CN109903279B (en) 2022-11-18

Family

ID=66945335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910140444.8A Active CN109903279B (en) 2019-02-25 2019-02-25 Automatic teaching method and device for welding seam movement track

Country Status (1)

Country Link
CN (1) CN109903279B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110773842B (en) * 2019-10-21 2022-04-15 大族激光科技产业集团股份有限公司 Welding positioning method and device
CN110961778B (en) * 2019-12-05 2021-12-21 珠海屏珠科技有限公司 Method for automatically identifying welding area of welding workpiece, computer device and computer-readable storage medium
CN111098302B (en) * 2019-12-25 2021-06-01 广州机械科学研究院有限公司 Robot path searching method, system, device and storage medium
CN111152230B (en) * 2020-04-08 2020-09-04 季华实验室 Robot teaching method, system, teaching robot and storage medium
CN111390915B (en) * 2020-04-17 2022-07-15 上海智殷自动化科技有限公司 Automatic weld path identification method based on AI
CN112053376B (en) * 2020-09-07 2023-10-20 南京大学 Workpiece weld joint identification method based on depth information
CN112809130B (en) * 2020-12-31 2022-04-19 鹏城实验室 Intelligent welding seam detection and trajectory planning method and system
CN113333998B (en) * 2021-05-25 2023-10-31 绍兴市上虞区武汉理工大学高等研究院 Automatic welding system and method based on cooperative robot
CN113506211B (en) * 2021-09-10 2022-01-07 深圳市信润富联数字科技有限公司 Polishing method and device for hub rib window, terminal device and storage medium
CN114119461B (en) * 2021-10-08 2022-11-29 厦门微亚智能科技有限公司 Deep learning-based lithium battery module side weld appearance detection method and system
CN114237159B (en) * 2022-02-24 2022-07-12 深圳市大族封测科技股份有限公司 Welding arc automatic generation method and device, computer equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255813A (en) * 2018-09-06 2019-01-22 大连理工大学 A kind of hand-held object pose real-time detection method towards man-machine collaboration

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105608730A (en) * 2014-10-28 2016-05-25 富泰华工业(深圳)有限公司 Point-cloud paintbrush selection system and point-cloud paintbrush selection method
US10363632B2 (en) * 2015-06-24 2019-07-30 Illinois Tool Works Inc. Time of flight camera for welding machine vision
CN105665970B (en) * 2016-03-01 2018-06-22 中国科学院自动化研究所 For the path point automatic creation system and method for welding robot
CN107316298B (en) * 2017-07-10 2020-06-02 北京深度奇点科技有限公司 Real-time measurement method and device for welding gap and electronic equipment
CN108555423B (en) * 2018-01-16 2024-03-15 中国计量大学 Automatic three-dimensional weld joint recognition device and method
CN108171748B (en) * 2018-01-23 2021-12-07 哈工大机器人(合肥)国际创新研究院 Visual identification and positioning method for intelligent robot grabbing application
CN108453439A (en) * 2018-03-14 2018-08-28 清华大学天津高端装备研究院洛阳先进制造产业研发基地 The robot welding track self-programming system and method for view-based access control model sensing
CN108846314A (en) * 2018-05-08 2018-11-20 天津大学 A kind of food materials identification system and food materials discrimination method based on deep learning
CN109285139A (en) * 2018-07-23 2019-01-29 同济大学 A kind of x-ray imaging weld inspection method based on deep learning
CN109031954B (en) * 2018-08-03 2021-06-25 北京深度奇点科技有限公司 Welding parameter determination method based on reinforcement learning, welding method and welding equipment
CN109175608B (en) * 2018-09-30 2023-06-20 华南理工大学 Weld joint characteristic point position online measurement method and weld joint track automatic measurement system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255813A (en) * 2018-09-06 2019-01-22 大连理工大学 A kind of hand-held object pose real-time detection method towards man-machine collaboration

Also Published As

Publication number Publication date
CN109903279A (en) 2019-06-18

Similar Documents

Publication Publication Date Title
CN109903279B (en) Automatic teaching method and device for welding seam movement track
Wang et al. A robust weld seam recognition method under heavy noise based on structured-light vision
CN210046133U (en) Welding seam visual tracking system based on laser structured light
CN107876970B (en) Robot multilayer multi-pass welding seam three-dimensional detection and welding seam inflection point identification method
Zhang et al. 3D reconstruction of complex spatial weld seam for autonomous welding by laser structured light scanning
US10060857B1 (en) Robotic feature mapping and motion control
CN103759648B (en) A kind of complicated angle welding method for detecting position based on Binocular stereo vision with laser
CN111805051B (en) Groove cutting method, device, electronic equipment and system
US11179793B2 (en) Automated edge welding based on edge recognition using separate positioning and welding robots
CN112958959A (en) Automatic welding and detection method based on three-dimensional vision
CN112767426B (en) Target matching method and device and robot
CN108907526A (en) A kind of weld image characteristic recognition method with high robust
CN111784655B (en) Underwater robot recycling and positioning method
CN113920060A (en) Autonomous operation method and device for welding robot, electronic device, and storage medium
CN114851209B (en) Industrial robot working path planning optimization method and system based on vision
Shah et al. A review paper on vision based identification, detection and tracking of weld seams path in welding robot environment
Liu et al. Welding seam recognition and tracking for a novel mobile welding robot based on multi-layer sensing strategy
CN109894779A (en) A kind of machine vision tracking system and method
CN114283139A (en) Weld joint detection and segmentation method and device based on area array structured light 3D vision
Eren et al. Recent developments in computer vision and artificial intelligence aided intelligent robotic welding applications
Pachidis et al. Vision-based path generation method for a robot-based arc welding system
Ye et al. Weld seam tracking based on laser imaging binary image preprocessing
CN108067725A (en) A kind of new robotic laser vision weld joint detecting system and method
CN116160458A (en) Multi-sensor fusion rapid positioning method, equipment and system for mobile robot
CN114842144A (en) Binocular vision three-dimensional reconstruction method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant