CN108257180B - Welding gap positioning method and device - Google Patents

Welding gap positioning method and device

Info

Publication number
CN108257180B
CN108257180B CN201810126289.XA
Authority
CN
China
Prior art keywords
layer
convolution
pooling
output
gap
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810126289.XA
Other languages
Chinese (zh)
Other versions
CN108257180A (en)
Inventor
刘旭 (Liu Xu)
戚骁亚 (Qi Xiaoya)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Deep Singularity Technology Co ltd
Original Assignee
Beijing Deep Singularity Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Deep Singularity Technology Co ltd filed Critical Beijing Deep Singularity Technology Co ltd
Priority to CN201810126289.XA priority Critical patent/CN108257180B/en
Publication of CN108257180A publication Critical patent/CN108257180A/en
Application granted granted Critical
Publication of CN108257180B publication Critical patent/CN108257180B/en
Legal status: Active (current)
Anticipated expiration: not listed


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection
    • G06T2207/30152 - Solder
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a welding gap positioning method and device. In the method, after a gap candidate region is screened out of the original image, feature extraction is performed on the candidate region by a specific CNN network, and the positions of the gap endpoints are output. The architecture of the CNN network comprises, in order from input to output: 1 first convolution layer, N second convolution layers, N pooling layers, 1 splicing (concatenation) layer, 1 residual network layer, 1 fully connected layer and 1 output layer. The first convolution layer includes three M1×M1 convolution kernels; the outputs of two of these kernels feed the second convolution layer, while the output of the third feeds the residual network layer. The second convolution layer comprises one M2×M2 convolution kernel and one M3×M3 convolution kernel; the pooling layer comprises two M4×M4 pooling windows; and the output of the splicing layer is the concatenation of the outputs of the two pooling windows of the pooling layer.

Description

Welding gap positioning method and device
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a welding gap positioning method and device.
Background
In the related art, when welding is performed by an intelligently controlled welding machine, the position of the gap to be welded must first be located, and the welding machine is then controlled to weld along the located gap. The accuracy of gap positioning is important for welding parameter adjustment, path tracking, and the like: the more accurate the positioning, the less the tracked path deviates.
At present, gap positioning mostly relies on straight-line detection methods such as the Hough and Radon transforms. Although their accuracy can be high, the detection is unstable and easily disturbed by the external environment; for example, it fails under arc light interference. A minimal sketch of such a related-art baseline follows.
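For illustration only, here is a sketch of a related-art straight-line baseline using OpenCV's probabilistic Hough transform; the file name and all parameter values are illustrative assumptions, not values from this application:

```python
# Related-art baseline (not the claimed method): probabilistic Hough
# line detection on an edge map. All parameter values are illustrative.
import cv2
import numpy as np

img = cv2.imread("weld_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
edges = cv2.Canny(img, 50, 150)            # edge map degrades under arc light
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=5)
# 'lines' may be None or contain spurious segments when arc light
# floods the image, which is the instability described above.
```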
Disclosure of Invention
To overcome at least some of the problems associated with the related art, the present application provides a weld gap positioning method and apparatus.
According to a first aspect of embodiments of the present application, there is provided a welding gap positioning method, including:
inputting an original image into a Faster-RCNN network, and screening out a gap candidate region in the original image;
inputting the gap candidate region into a CNN network, performing feature extraction on the gap candidate region, and outputting the positions of the gap endpoints; wherein:
the architecture of the CNN network comprises, in order from input to output: 1 first convolution layer, N second convolution layers, N pooling layers, 1 splicing layer, 1 residual network layer, 1 fully connected layer and 1 output layer;
the first convolution layer includes three M1×M1 convolution kernels; the outputs of two of these kernels feed a second convolution layer, and the output of the third feeds the residual network layer;
the second convolution layer comprises one M2×M2 convolution kernel and one M3×M3 convolution kernel;
the pooling layer comprises two M4×M4 pooling windows;
and the output of the splicing layer is the concatenation of the outputs of the two pooling windows of the pooling layer.
Preferably, the first convolution layer includes three 1×1 convolution kernels.
Preferably, the values of M2 and M3 are different.
Preferably, the pooling mode of the first pooling layer is mean pooling.
Preferably, the output result of the residual network layer is the element-wise sum of the output result of the splicing layer and the output result of the first convolution layer, as the sketch below illustrates.
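As a hedged illustration of this preference (the tensor shapes are assumptions taken from the embodiment described later), the element-wise sum requires the splicing layer's concatenated output to match the channel count produced by the first convolution layer's third kernel:

```python
# Sketch: residual layer as an element-wise sum. The two 4-channel
# pooling outputs are spliced (concatenated) into 8 channels, which
# matches the 8-channel output of the third 1x1 kernel of the first
# convolution layer, so the tensors can be added directly.
import torch

pool_a = torch.randn(1, 4, 120, 120)
pool_b = torch.randn(1, 4, 120, 120)
shortcut = torch.randn(1, 8, 120, 120)        # third 1x1 kernel's output
spliced = torch.cat([pool_a, pool_b], dim=1)  # splicing layer: 8 channels
residual = spliced + shortcut                 # residual output, (1, 8, 120, 120)
```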
According to a second aspect of embodiments of the present application, there is provided a welding gap positioning device, including:
the gap candidate region extraction module is used for inputting an original image into a Faster-RCNN network and screening out a gap candidate region in the original image;
the gap positioning module is used for inputting the gap candidate region into the CNN network, performing feature extraction on the gap candidate region, and outputting the positions of the gap endpoints; wherein:
the architecture of the CNN network comprises, in order from input to output: 1 first convolution layer, N second convolution layers, N pooling layers, 1 splicing layer, 1 residual network layer, 1 fully connected layer and 1 output layer;
the first convolution layer includes three M1×M1 convolution kernels; the outputs of two of these kernels feed a second convolution layer, and the output of the third feeds the residual network layer;
the second convolution layer comprises one M2×M2 convolution kernel and one M3×M3 convolution kernel;
the pooling layer comprises two M4×M4 pooling windows;
and the output of the splicing layer is the concatenation of the outputs of the two pooling windows of the pooling layer.
Preferably, the first convolution layer includes three 1×1 convolution kernels.
Preferably, the values of M2 and M3 are different.
Preferably, the first pooling layer adopts mean pooling.
Preferably, the output result of the residual network layer is the sum of the output result of the splicing layer and the output result of the first convolution layer.
According to a third aspect of embodiments of the present application, there is provided a non-transitory computer-readable storage medium having instructions stored thereon that, when executed by a processor of a terminal, cause the terminal to perform a welding gap positioning method, the method comprising:
inputting an original image into a Faster-RCNN network, and screening out a gap candidate region in the original image;
inputting the gap candidate region into a CNN network, performing feature extraction on the gap candidate region, and outputting the positions of the gap endpoints; wherein:
the architecture of the CNN network comprises, in order from input to output: 1 first convolution layer, N second convolution layers, N pooling layers, 1 splicing layer, 1 residual network layer, 1 fully connected layer and 1 output layer;
the first convolution layer includes three M1×M1 convolution kernels; the outputs of two of these kernels feed a second convolution layer, and the output of the third feeds the residual network layer;
the second convolution layer comprises one M2×M2 convolution kernel and one M3×M3 convolution kernel;
the pooling layer comprises two M4×M4 pooling windows;
and the output of the splicing layer is the concatenation of the outputs of the two pooling windows of the pooling layer.
According to a fourth aspect of embodiments of the present application, there is provided a welding gap positioning device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to:
inputting an original image into a Faster-RCNN network, and screening out a gap candidate region in the original image;
inputting the gap candidate region into a CNN network, performing feature extraction on the gap candidate region, and outputting the positions of the gap endpoints; wherein:
the architecture of the CNN network comprises, in order from input to output: 1 first convolution layer, N second convolution layers, N pooling layers, 1 splicing layer, 1 residual network layer, 1 fully connected layer and 1 output layer;
the first convolution layer includes three M1×M1 convolution kernels; the outputs of two of these kernels feed a second convolution layer, and the output of the third feeds the residual network layer;
the second convolution layer comprises one M2×M2 convolution kernel and one M3×M3 convolution kernel;
the pooling layer comprises two M4×M4 pooling windows;
and the output of the splicing layer is the concatenation of the outputs of the two pooling windows of the pooling layer.
The technical solution provided by the embodiments of the application can have the following beneficial effects. In the solution provided by the application, after a gap candidate region is screened out of the original image, feature extraction is performed on the candidate region by a specific CNN network, and the positions of the gap endpoints are output. The architecture of the CNN network comprises, in order from input to output: 1 first convolution layer, N second convolution layers, N pooling layers, 1 splicing layer, 1 residual network layer, 1 fully connected layer and 1 output layer. The first convolution layer includes three M1×M1 convolution kernels; the outputs of two of these kernels feed a second convolution layer, and the output of the third feeds the residual network layer. The second convolution layer comprises one M2×M2 convolution kernel and one M3×M3 convolution kernel; the pooling layer comprises two M4×M4 pooling windows; and the output of the splicing layer is the concatenation of the outputs of the two pooling windows. Based on this specific CNN network, the features of the gap endpoints can be extracted more accurately and their positions determined. The input of the residual network layer comprises not only the output of the splicing layer but also the output of the first convolution layer; that is, it combines features extracted by deep layers with features extracted by shallow layers, so that deep-layer errors can be propagated back to the shallow layers, further improving the accuracy of the located gap. Compared with the straight-line detection algorithms of the related art, the CNN network in this solution has strong noise immunity, can withstand a certain amount of arc light interference, and detects more accurately and more stably.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow chart illustrating a weld gap positioning method according to an exemplary embodiment.
Fig. 2 is a schematic diagram of a CNN network architecture according to an exemplary embodiment.
FIG. 3 is a block diagram illustrating a weld gap positioning device, according to an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the application as detailed in the appended claims.
According to a first aspect of the embodiments of the present application, the present embodiment provides a welding gap positioning method, as shown in fig. 1, at least including the following steps:
step 110, inputting an original image into a Faster Region-based Convolutional Neural Network (Faster-RCNN), and screening out a gap candidate region in the original image;
step 120, inputting the gap candidate region into a Convolutional Neural Network (CNN), extracting features of the gap candidate region, and outputting the positions of the gap endpoints; wherein:
the architecture of the CNN network comprises, in order from input to output: 1 first convolution layer, N second convolution layers, 1 pooling layer, 1 splicing layer, 1 residual network layer, 1 fully connected layer and 1 output layer;
the first convolution layer includes three M1×M1 convolution kernels; the outputs of two of these kernels feed a second convolution layer, and the output of the third feeds the residual network layer;
the second convolution layer comprises one M2×M2 convolution kernel and one M3×M3 convolution kernel;
the pooling layer comprises two M4×M4 pooling windows;
the output of the splicing layer is the concatenation of the outputs of the two pooling windows of the pooling layer.
The CNN network is a model trained on a large amount of data; arc light factors are taken into account during training, so the network has a certain resistance to arc light during detection.
Wherein N, M1, M2, M3 and M4 are all positive integers.
In this embodiment, after the gap candidate region is screened out of the original image, feature extraction is performed on the candidate region by a specific CNN network, and the positions of the gap endpoints are determined from the extracted features. The architecture of the CNN network comprises, in order from input to output: 1 first convolution layer, N second convolution layers, N pooling layers, 1 splicing layer, 1 residual network layer, 1 fully connected layer and 1 output layer. The first convolution layer includes three M1×M1 convolution kernels; the outputs of two of these kernels feed a second convolution layer, and the output of the third feeds the residual network layer. The second convolution layer comprises one M2×M2 convolution kernel and one M3×M3 convolution kernel; the pooling layer comprises two M4×M4 pooling windows; and the output of the splicing layer is the concatenation of the outputs of the two pooling windows. Based on this specific CNN network, the features of the gap endpoints can be extracted more accurately and their positions determined. The input of the residual network layer comprises not only the output of the splicing layer but also the output of the first convolution layer; that is, it combines features extracted by deep layers with features extracted by shallow layers, so that deep-layer errors can be propagated back to the shallow layers, further improving the accuracy of the located gap. Compared with the straight-line detection algorithms of the related art, the CNN network in this solution has strong noise immunity, can withstand a certain amount of arc light interference, and detects more accurately and more stably.
The output of the residual network layer is the sum of the output of the splicing layer and the output of the first convolution layer; alternatively, it may be the difference of the two, and so on. The gap detection effect is better when the two outputs are summed.
Preferably, the first convolution layer includes three 1×1 convolution kernels. A 1×1 convolution kernel increases the network's capacity to describe non-linear structure, further improving the accuracy of gap positioning while also speeding up convolution. The three 1×1 kernels can also adjust the number of channels of the previous layer's feature data to a specified number that meets the requirements of the splicing layer and the residual network layer, as the sketch below illustrates.
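A small sketch of this channel-adjustment role (the channel counts are taken from the embodiment below; everything else is an assumption):

```python
# Sketch: a 1x1 convolution leaves the spatial size unchanged, adjusts
# the channel count (here 1 -> 8, the width the residual layer expects),
# and adds non-linearity via ReLU while being cheap to compute.
import torch
import torch.nn as nn

x = torch.randn(1, 1, 120, 120)   # single-channel candidate region
conv1x1 = nn.Sequential(nn.Conv2d(1, 8, kernel_size=1), nn.ReLU())
print(conv1x1(x).shape)           # torch.Size([1, 8, 120, 120])
```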
The number N of second convolution layers may be set according to actual needs; preferably, N is 1. The two convolution kernels of the second convolution layer may be the same size or different sizes; preferably, the values of M2 and M3 differ. Convolution kernels of multiple sizes describe the image features from multiple angles, further improving the accuracy of gap localization.
Preferably, the pooling mode of the first pooling layer is mean pooling. In this way, redundant data can be pruned, further improving the accuracy of gap localization.
The welding gap positioning method provided by some embodiments is described in more detail below by taking a specific CNN network architecture as an example.
Step one, acquiring an original image.
The original image obtained in this step is a charge-coupled device (Charge Coupled Device, CCD) image, a single-channel grayscale image; the white line in the image is the red laser line projected onto the plate.
The resolution of the image acquired in this embodiment is 480×640.
Step two, inputting the original image into a Faster-RCNN network and screening out a gap candidate region in the original image.
In this embodiment, the resolution of the screened gap candidate region is 120×120. A hedged usage sketch of this step follows.
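The application does not name a particular Faster-RCNN implementation; as a hedged sketch, torchvision's reference model could play this role (the two-class setup, the fine-tuned weights and the cropping logic are all assumptions):

```python
# Sketch of step two: screen a gap candidate region with a Faster R-CNN
# detector. Weights are assumed to be fine-tuned on gap images; an
# untrained model would return arbitrary (possibly zero) boxes.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(num_classes=2)   # background + gap
model.eval()
frame = torch.rand(1, 480, 640)                  # single-channel CCD image in [0, 1]
with torch.no_grad():
    out = model([frame.expand(3, -1, -1)])[0]    # backbone expects 3 channels
x1, y1, x2, y2 = out["boxes"][0].int().tolist()  # boxes are sorted by score
candidate = frame[:, y1:y2, x1:x2]               # crop; then resize/pad to 120x120
```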
Step three, as shown in fig. 2, the gap candidate region 201 is input into the CNN network for feature extraction, and the positions of the gap endpoints are output. In this embodiment, as shown in fig. 2, the CNN network architecture comprises, in order from input to output, 1 first convolution layer 202, 1 second convolution layer 203, 1 pooling layer 204, 1 splicing layer 205, 1 residual network layer 206, 1 fully connected layer 207, and 1 output layer 208.
Wherein the first convolution layer 202 comprises three 1×1 convolution kernels conv1, conv12 and conv31. The kernels conv1 and conv12 each have 4 output channels with a ReLU activation function; the remaining kernel conv31 has 8 output channels, also with a ReLU activation function.
Wherein the second convolution layer 203 comprises a 3×3 convolution kernel conv2 with 4 output channels, boundary padding of 1 and a ReLU activation function, and a 5×5 convolution kernel conv22 with 4 output channels, boundary padding of 2 and a ReLU activation function. The boundary padding of conv2 and conv22 differs so that the extracted feature maps have the same size.
The pooling layer 204 includes two 3×3 pooling windows, pool_conv2 and pool_conv22. The window pool_conv2 has a stride of 1, boundary padding of 1 and 4 output channels and uses mean pooling; the window pool_conv22 has a stride of 1, boundary padding of 2 and 4 output channels and also uses mean pooling. The boundary padding of pool_conv2 and pool_conv22 differs so as to keep the extracted feature maps the same size.
The splicing layer concat 205 merges the outputs of the two pooling windows of the pooling layer along the channel dimension; that is, its inputs are the pooling layer's two 4-channel outputs of the same resolution, and its output has 8 channels.
The residual network layer eltwise 206 sums, channel by channel, the output of the 8-channel convolution kernel conv31 of the first convolution layer and the output of the splicing layer concat; that is, its inputs are two 8-channel results and its output is an 8-channel result.
Wherein the fully connected layer ip 207 outputs 128 neurons, and the activation function is the tanh function.
Wherein the output layer Output 208 produces 8 values: the coordinate values of the gap endpoints.
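Putting the pieces together, below is a minimal PyTorch sketch of this embodiment's CNN, with layer names following fig. 2. Bias terms, weight initialization and the training procedure are not specified by the application and are left at framework defaults. One hedge: a 3×3, stride-1 pooling window with padding 2 would enlarge the feature map, so this sketch uses padding 1 for both pooling windows to keep the two branches the same size for splicing.

```python
# Minimal sketch of the CNN of fig. 2 (an assumption-laden reading of
# the description above, not an authoritative reimplementation).
import torch
import torch.nn as nn

class GapEndpointCNN(nn.Module):
    def __init__(self, in_channels: int = 1, size: int = 120):
        super().__init__()
        # First convolution layer 202: three 1x1 kernels.
        self.conv1 = nn.Conv2d(in_channels, 4, kernel_size=1)   # feeds conv2
        self.conv12 = nn.Conv2d(in_channels, 4, kernel_size=1)  # feeds conv22
        self.conv31 = nn.Conv2d(in_channels, 8, kernel_size=1)  # feeds residual layer
        # Second convolution layer 203: 3x3 (pad 1) and 5x5 (pad 2) kernels.
        self.conv2 = nn.Conv2d(4, 4, kernel_size=3, padding=1)
        self.conv22 = nn.Conv2d(4, 4, kernel_size=5, padding=2)
        # Pooling layer 204: two 3x3 mean-pooling windows, stride 1.
        self.pool_conv2 = nn.AvgPool2d(3, stride=1, padding=1)
        self.pool_conv22 = nn.AvgPool2d(3, stride=1, padding=1)  # padding 1 assumed (see text)
        self.relu = nn.ReLU()
        self.tanh = nn.Tanh()
        # Fully connected layer ip 207 (128 neurons), output layer 208 (8 values).
        self.ip = nn.Linear(8 * size * size, 128)
        self.output = nn.Linear(128, 8)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.pool_conv2(self.relu(self.conv2(self.relu(self.conv1(x)))))
        b = self.pool_conv22(self.relu(self.conv22(self.relu(self.conv12(x)))))
        shortcut = self.relu(self.conv31(x))    # shallow 8-channel features
        concat = torch.cat([a, b], dim=1)       # splicing layer 205: 4 + 4 = 8 channels
        eltwise = concat + shortcut             # residual layer 206: element-wise sum
        hidden = self.tanh(self.ip(eltwise.flatten(start_dim=1)))
        return self.output(hidden)              # 8 endpoint coordinate values

# Example: endpoints = GapEndpointCNN()(torch.randn(1, 1, 120, 120))  # shape (1, 8)
```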
According to a second aspect of the embodiments of the present application, there is provided a welding gap positioning device, as shown in fig. 3, including:
the gap candidate region extraction module 301 is configured to input an original image into a Faster-RCNN network and screen out a gap candidate region in the original image;
the gap positioning module 302 is configured to input the gap candidate region into the CNN network, perform feature extraction on the gap candidate region, and output the positions of the gap endpoints; wherein:
the architecture of the CNN network comprises, in order from input to output: 1 first convolution layer, N second convolution layers, N pooling layers, 1 splicing layer, 1 residual network layer, 1 fully connected layer and 1 output layer;
the first convolution layer includes three M1×M1 convolution kernels; the outputs of two of these kernels feed a second convolution layer, and the output of the third feeds the residual network layer;
the second convolution layer comprises one M2×M2 convolution kernel and one M3×M3 convolution kernel;
the pooling layer comprises two M4×M4 pooling windows;
the output of the splicing layer is the concatenation of the outputs of the two pooling windows of the pooling layer.
Preferably, the first convolution layer includes three 1×1 convolution kernels.
Preferably, the values of M2 and M3 are different.
Preferably, the first pooling layer adopts mean pooling.
Preferably, the output result of the residual network layer is the sum of the output result of the splicing layer and the output result of the first convolution layer.
According to a third aspect of embodiments of the present application, there is provided a non-transitory computer-readable storage medium having instructions stored thereon that, when executed by a processor of a terminal, enable the terminal to perform a welding gap positioning method, the method comprising:
inputting an original image into a Faster-RCNN network, and screening out a gap candidate region in the original image;
inputting the gap candidate region into a CNN network, performing feature extraction on the gap candidate region, and outputting the positions of the gap endpoints; wherein:
the architecture of the CNN network comprises, in order from input to output: 1 first convolution layer, N second convolution layers, N pooling layers, 1 splicing layer, 1 residual network layer, 1 fully connected layer and 1 output layer;
the first convolution layer includes three M1×M1 convolution kernels; the outputs of two of these kernels feed a second convolution layer, and the output of the third feeds the residual network layer;
the second convolution layer comprises one M2×M2 convolution kernel and one M3×M3 convolution kernel;
the pooling layer comprises two M4×M4 pooling windows;
the output of the splicing layer is the concatenation of the outputs of the two pooling windows of the pooling layer.
According to a fourth aspect of embodiments of the present application, there is provided a welding gap positioning device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to:
inputting an original image into a Faster-RCNN network, and screening out a gap candidate region in the original image;
inputting the gap candidate region into a CNN network, performing feature extraction on the gap candidate region, and outputting the positions of the gap endpoints; wherein:
the architecture of the CNN network comprises, in order from input to output: 1 first convolution layer, N second convolution layers, N pooling layers, 1 splicing layer, 1 residual network layer, 1 fully connected layer and 1 output layer;
the first convolution layer includes three M1×M1 convolution kernels; the outputs of two of these kernels feed a second convolution layer, and the output of the third feeds the residual network layer;
the second convolution layer comprises one M2×M2 convolution kernel and one M3×M3 convolution kernel;
the pooling layer comprises two M4×M4 pooling windows;
the output of the splicing layer is the concatenation of the outputs of the two pooling windows of the pooling layer.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the corresponding method embodiments and will not be elaborated here.
It is to be understood that the same or similar parts of the above embodiments may be cross-referenced; what is not detailed in one embodiment may be found in the same or similar descriptions of other embodiments.
It should be noted that in the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present application, unless otherwise indicated, the meaning of "plurality" means at least two.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes implementations in which functions are executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques known in the art: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by a program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives, and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (2)

1. A welding gap positioning method, comprising:
inputting an original image into a Faster-RCNN network, and screening out a gap candidate region in the original image;
inputting the gap candidate region into a CNN network, performing feature extraction on the gap candidate region, and outputting the positions of the gap endpoints; wherein:
the architecture of the CNN network comprises, in order from input to output: 1 first convolution layer, 1 second convolution layer, N pooling layers, 1 splicing layer, 1 residual network layer, 1 fully connected layer and 1 output layer;
the first convolution layer includes three 1×1 convolution kernels; the outputs of two of these kernels feed the second convolution layer, and the output of the third feeds the residual network layer;
the second convolution layer includes one 3×3 convolution kernel and one 5×5 convolution kernel; the boundary padding of the 3×3 convolution kernel is 1 and the boundary padding of the 5×5 convolution kernel is 2;
the pooling layer comprises two M4×M4 pooling windows; the M4×M4 pooling windows are 3×3 pooling windows, and the boundary padding of the two pooling windows is 1 and 2, respectively; the pooling mode of the pooling layer is mean pooling;
the output of the splicing layer is the concatenation of the outputs of the two pooling windows of the pooling layer;
the output of the residual network layer is the sum of the output of the splicing layer and the output of the first convolution layer;
the CNN network is a model trained on a large amount of data, with arc light factors taken into account during training so that arc light interference is eliminated.
2. A welding gap positioning device, comprising:
the gap candidate region extraction module is used for inputting an original image into a Faster-RCNN network and screening out a gap candidate region in the original image;
the gap positioning module is used for inputting the gap candidate region into the CNN network, performing feature extraction on the gap candidate region, and outputting the positions of the gap endpoints; wherein:
the architecture of the CNN network comprises, in order from input to output: 1 first convolution layer, 1 second convolution layer, N pooling layers, 1 splicing layer, 1 residual network layer, 1 fully connected layer and 1 output layer;
the first convolution layer includes three 1×1 convolution kernels; the outputs of two of these kernels feed the second convolution layer, and the output of the third feeds the residual network layer;
the second convolution layer includes one 3×3 convolution kernel and one 5×5 convolution kernel; the boundary padding of the 3×3 convolution kernel is 1 and the boundary padding of the 5×5 convolution kernel is 2;
the pooling layer comprises two M4×M4 pooling windows; the M4×M4 pooling windows are 3×3 pooling windows, and the boundary padding of the two pooling windows is 1 and 2, respectively; the pooling layer adopts mean pooling;
the output of the splicing layer is the concatenation of the outputs of the two pooling windows of the pooling layer;
the output of the residual network layer is the sum of the output of the splicing layer and the output of the first convolution layer;
the CNN network is a model trained on a large amount of data, with arc light factors taken into account during training so that arc light interference is eliminated.
CN201810126289.XA 2018-02-07 2018-02-07 Welding gap positioning method and device Active CN108257180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810126289.XA CN108257180B (en) 2018-02-07 2018-02-07 Welding gap positioning method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810126289.XA CN108257180B (en) 2018-02-07 2018-02-07 Welding gap positioning method and device

Publications (2)

Publication Number Publication Date
CN108257180A CN108257180A (en) 2018-07-06
CN108257180B true CN108257180B (en) 2023-08-04

Family

ID=62744671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810126289.XA Active CN108257180B (en) 2018-02-07 2018-02-07 Welding gap positioning method and device

Country Status (1)

Country Link
CN (1) CN108257180B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977948A (en) * 2019-03-20 2019-07-05 哈尔滨工业大学 A kind of stirring friction welding seam defect identification method based on convolutional neural networks

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105891215A (en) * 2016-03-31 2016-08-24 浙江工业大学 Welding visual detection method and device based on convolutional neural network
CN106530284A (en) * 2016-10-21 2017-03-22 广州视源电子科技股份有限公司 Welding spot type detection and device based on image recognition
WO2017166586A1 (en) * 2016-03-30 2017-10-05 乐视控股(北京)有限公司 Image identification method and system based on convolutional neural network, and electronic device
CN107316298A (en) * 2017-07-10 2017-11-03 北京深度奇点科技有限公司 A kind of method for real-time measurement of welded gaps, device and electronic equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017166586A1 (en) * 2016-03-30 2017-10-05 乐视控股(北京)有限公司 Image identification method and system based on convolutional neural network, and electronic device
CN105891215A (en) * 2016-03-31 2016-08-24 浙江工业大学 Welding visual detection method and device based on convolutional neural network
CN106530284A (en) * 2016-10-21 2017-03-22 广州视源电子科技股份有限公司 Welding spot type detection and device based on image recognition
CN107316298A (en) * 2017-07-10 2017-11-03 北京深度奇点科技有限公司 A kind of method for real-time measurement of welded gaps, device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Image Recognition Based on Convolutional Neural Networks (基于卷积神经网络的图像识别); Jiang Shuai (蒋帅); China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology; 2017-10-15; see Sections 3.2 and 4.3 *

Also Published As

Publication number Publication date
CN108257180A (en) 2018-07-06

Similar Documents

Publication Publication Date Title
US11847738B2 (en) Voxelization of mesh representations
CN109284280B (en) Simulation data optimization method and device and storage medium
JP2020113286A (en) Learning method and learning device for adjusting parameter of cnn by using multi-scale feature map, and testing method and testing device using the same
CN110738697A (en) Monocular depth estimation method based on deep learning
JP7158563B2 (en) Deep model training method and its device, electronic device and storage medium
KR102559021B1 (en) Apparatus and method for generating a defect image
CN110136052B (en) Image processing method and device and electronic equipment
KR20200075704A (en) Anomaly detection
CN110880001A (en) Training method, device and storage medium for semantic segmentation neural network
CN108257180B (en) Welding gap positioning method and device
CN112750139A (en) Image processing method and device, computing equipment and storage medium
CN110472640B (en) Target detection model prediction frame processing method and device
CN111915626A (en) Automatic segmentation method and device for ventricle area of heart ultrasonic image and storage medium
CN116580182B (en) Method, system, equipment and storage medium for automatically-adjusted target detection
CN110880183A (en) Image segmentation method, device and computer-readable storage medium
CN113954836A (en) Segmented navigation lane changing method and system, computer equipment and storage medium
WO2023231138A1 (en) Multi-angle-of-view image super-resolution reconstruction method and apparatus based on meta-imaging
KR100439577B1 (en) Triangular mesh segmentation apparatus and method based on surface normal
CN114897214A (en) Metal additive manufacturing time prediction system and method based on graphical processing
CN114022458A (en) Skeleton detection method and device, electronic equipment and computer readable storage medium
CN112766481A (en) Neural network model training method and device and image detection method
CN110545373B (en) Spatial environment sensing method and device
CN116897532A (en) Depth image restoration method and device, camera component and electronic equipment
CN114529514B (en) Depth data quality evaluation method and device based on graph structure
CN116910758B (en) Malicious software detection method and device, electronic equipment, chip and storage medium

Legal Events

Date Code Title Description
PB01 - Publication
SE01 - Entry into force of request for substantive examination
DD01 - Delivery of document by public notice (Addressee: Li Lihua; Document name: Reexamination notice)
DD01 - Delivery of document by public notice (Addressee: Li Lihua; Document name: Notice of Case Closure for Reexamination)
DD01 - Delivery of document by public notice (Addressee: Li Lihua; Document name: Review Decision Letter)
GR01 - Patent grant
DD01 - Delivery of document by public notice (Addressee: Li Lihua; Document name: Notice of Approval for Restoration of Rights Request)