CN114913401A - Welding equipment for LED lamp core column and shell and welding quality monitoring method thereof - Google Patents


Info

Publication number: CN114913401A (granted as CN114913401B)
Application number: CN202210818288.8A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: feature map, welding, interest, region, global
Inventor: 陈旭 (Chen Xu)
Current and original assignee: Jiangsu Yiming Photoelectric Co., Ltd. (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Legal status: Granted, Active (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Application filed by Jiangsu Yiming Photoelectric Co., Ltd.; priority to CN202210818288.8A; published as CN114913401A, granted and published as CN114913401B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B23MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23KSOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
    • B23K37/00Auxiliary devices or processes, not specially adapted to a procedure covered by only one of the preceding main groups
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/809Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Optics & Photonics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The application relates to the field of welding-quality monitoring, and in particular to welding equipment for an LED lamp stem and housing and a welding-quality monitoring method thereof. The method mines a global feature map carrying globally, spatially implicit associated feature information from a welding image of the welded LED lamp semi-finished product, using a convolutional neural network model with a spatial attention mechanism; extracts the high-dimensional implicit feature information of the regions of interest of the welding image from the global feature map; and then adjusts the local-global characteristics of those features before fusing them. This introduces, into each region-of-interest feature map, robustness around minimizing the loss with respect to the global characterization information, thereby strengthening the dependency of each region-of-interest feature map's local feature representation on the globally expected features and improving its classification performance. In this way, the accuracy of detecting the welding quality of the LED lamp stem and housing can be ensured.

Description

Welding equipment for LED lamp core column and shell and welding quality monitoring method thereof
Technical Field
The invention relates to the field of welding quality monitoring, in particular to welding equipment for an LED lamp core column and a shell and a welding quality monitoring method thereof.
Background
In the current production process of LED lamps, most manufacturers still weld the guide wire of the stem and the pins of the LED light source plate together by hand, which results in low working efficiency and does not meet the requirements and trends of future automated production.
Welding the stem is delicate work: if the weld is inaccurate, or the chip is damaged during welding, the LED lamp will leak electricity, seriously shortening its service life. Existing equipment, however, cannot detect the stem's welding quality in real time; after stem welding, the part passes directly into the next production step, and defects can only be detected after the LED lamp is fully assembled, which severely reduces production efficiency and yield. A welding apparatus with a real-time detection function is therefore desired.
Disclosure of Invention
The present application is proposed to solve the above-mentioned technical problems. The embodiments of the application provide welding equipment for an LED lamp stem and housing and a welding-quality monitoring method thereof. A convolutional neural network model with a spatial attention mechanism mines a global feature map carrying globally, spatially implicit associated feature information from a welding image of the welded LED lamp semi-finished product; the high-dimensional implicit feature information of the regions of interest of the welding image is extracted from the global feature map; and the local-global characteristics of those features are adjusted before fusion. Robustness around minimizing the loss with respect to the global characterization information is thereby introduced into each region-of-interest feature map, the dependency of each region-of-interest feature map's local feature representation on the globally expected features is strengthened, and the classification performance of the region-of-interest feature maps is improved. In this way, the accuracy of detecting the welding quality of the LED lamp stem and housing can be ensured.
According to one aspect of the application, an apparatus for welding an LED stem to a housing is provided, comprising: the welding image acquisition module is used for acquiring a welding image of the welded LED lamp semi-finished product; the welding image global coding module is used for enabling the welding image to pass through a first convolution neural network using a spatial attention mechanism to obtain a global feature map; a welding region extraction module for extracting first to third regions of interest corresponding to three welding regions from the global feature map; the interested region correction module is used for respectively carrying out feature distribution correction on each interested region in the first interested region to the third interested region so as to obtain corrected first interested region to the third interested region; the feature fusion module is used for fusing the global feature map and the corrected first to third interested areas to obtain a classification feature map; and the welding quality judging module is used for enabling the classification characteristic diagram to pass through a classifier to obtain a classification result, and the classification result is used for indicating whether the welding quality of the LED lamp core column and the shell meets a preset requirement or not.
In the above apparatus for welding an LED lamp stem and an LED lamp housing, the welding image global coding module is configured to process input data, in the forward pass of each layer of the first convolutional neural network model, as follows: performing convolution processing based on a two-dimensional convolution kernel on the input data to generate a convolution feature map; pooling the convolution feature map to generate a pooled feature map; performing activation processing on the pooled feature map to generate an activation feature map; performing global average pooling on the activation feature map to obtain a spatial feature matrix; performing convolution processing and activation processing on the spatial feature matrix to generate a weight vector; and weighting each feature matrix of the activation feature map with the weight value at each position of the weight vector to obtain a generated feature map; wherein the generated feature map output by the last layer of the first convolutional neural network model is the global feature map.
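The per-layer forward pass described above can be sketched as follows. This is a minimal NumPy illustration, not the patent's exact operators: the pooling step is omitted, ReLU is assumed as the activation, and the convolution-plus-activation applied to the spatial feature matrix is replaced by a plain sigmoid.

```python
import numpy as np

def spatial_attention_layer(x, kernel):
    """One layer of the spatial-attention forward pass (illustrative sketch).

    x: input feature map of shape (C, H, W); kernel: (C_out, C, kH, kW).
    The convolution is a naive valid-mode loop; pooling is omitted and
    the conv applied to the spatial matrix is replaced by a sigmoid,
    both simplifying assumptions.
    """
    C_out, C, kH, kW = kernel.shape
    H = x.shape[1] - kH + 1
    W = x.shape[2] - kW + 1
    conv = np.zeros((C_out, H, W))
    for o in range(C_out):                      # 2-D convolution per output channel
        for i in range(H):
            for j in range(W):
                conv[o, i, j] = np.sum(x[:, i:i + kH, j:j + kW] * kernel[o])
    act = np.maximum(conv, 0.0)                 # activation (ReLU assumed)
    spatial = act.mean(axis=0)                  # global average pooling over channels
    weights = 1.0 / (1.0 + np.exp(-spatial))    # sigmoid stands in for conv + activation
    return act * weights[None, :, :]            # weight each channel position-wise
```

Stacking such layers and taking the last layer's output would yield the global feature map.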
In the above LED lamp stem and housing welding apparatus, the welding region extracting module is further configured to extract the first to third regions of interest corresponding to the three welding regions from the global feature map based on the positions of the three welding regions in the welding image.
In the above LED lamp stem and housing welding apparatus, the welding region extraction module is further configured to pass the global feature map through a target candidate box extraction network to anchor the first to third regions of interest corresponding to the three welding regions from the global feature map.
In the above apparatus for welding an LED lamp stem to an outer shell, the region of interest correction module includes: a local feature representation unit, configured to calculate, for each position of each of the first to third regions of interest, the logarithm of one plus the feature value at that position as the local feature representation of that position; a global feature representation unit, configured to calculate the logarithm of one plus the sum of the feature values at all positions of the global feature map as the global feature representation of the global feature map; and a correction unit, configured to divide the local feature representation of each position of each of the first to third regions of interest by the global feature representation of the global feature map to obtain the corrected first to third regions of interest.
In the welding equipment of the LED lamp core column and the shell, the feature fusion module includes: a region-of-interest fusion unit for calculating the position-wise weighted sum of the corrected first to third regions of interest to obtain a region-of-interest fused feature map; a linear transformation unit for adjusting the global feature map, through a linear transformation, to the same size as the region-of-interest fused feature map; and a fusion unit for calculating the position-wise weighted sum of the region-of-interest fused feature map and the global feature map to obtain the classification feature map.
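A minimal sketch of the position-wise fusion just described, assuming the corrected regions of interest share one shape and the global feature map has already been resized to it (the linear transformation step); equal weights are chosen purely for illustration:

```python
import numpy as np

def fuse_features(rois, global_map, w=0.5):
    """Position-wise weighted-sum fusion (illustrative sketch).

    rois: list of corrected region-of-interest maps, all the same shape;
    global_map: global feature map, assumed pre-resized to that shape;
    w: weight on the ROI branch (equal weighting is an assumption).
    """
    roi_fused = np.mean(rois, axis=0)           # position-wise weighted sum of the ROIs
    return w * roi_fused + (1.0 - w) * global_map
```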
In the above apparatus for welding an LED lamp stem and a housing, the welding quality determining module is further configured to process the classification feature map with the classifier to generate the classification result according to the following formula:

$$O = \mathrm{softmax}\left\{\left(W_n, B_n\right) : \cdots : \left(W_1, B_1\right) \mid \mathrm{Project}(F)\right\}$$

where $\mathrm{Project}(F)$ represents the projection of the classification feature map $F$ as a vector, $W_1$ to $W_n$ are the weight matrices of the fully connected layers of each layer, and $B_1$ to $B_n$ are the bias matrices of the fully connected layers of each layer.
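The fully connected stack plus softmax in the formula above can be sketched as follows. Treating Project(F) as a simple flatten is an assumption, since the patent does not specify the projection:

```python
import numpy as np

def softmax(v):
    """Numerically stable softmax over a vector."""
    e = np.exp(v - v.max())
    return e / e.sum()

def classify(feature_map, weights, biases):
    """Apply the fully connected stack (W_1, B_1) ... (W_n, B_n) to Project(F).

    Project(F) is taken here as a flatten of the classification feature
    map, which is an illustrative assumption.
    """
    v = feature_map.reshape(-1)          # Project(F): flatten to a vector
    for W, B in zip(weights, biases):    # chain the fully connected layers
        v = W @ v + B
    return softmax(v)                    # class probabilities
```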
According to another aspect of the application, a method for monitoring welding quality of welding equipment of an LED lamp core column and an outer shell comprises the following steps: the method comprises the steps of obtaining a welding image of a welded LED lamp semi-finished product; passing the welding image through a first convolution neural network using a spatial attention mechanism to obtain a global feature map; extracting first to third regions of interest corresponding to three welding regions from the global feature map; respectively carrying out feature distribution correction on each region of interest in the first region of interest to the third region of interest to obtain corrected first region of interest to third region of interest; fusing the global feature map and the corrected first to third regions of interest to obtain a classification feature map; and the classification characteristic diagram is processed by a classifier to obtain a classification result, and the classification result is used for indicating whether the welding quality of the LED lamp core column and the shell meets a preset requirement or not.
In the method for monitoring the welding quality of the welding equipment of the LED lamp core column and the shell, passing the welding image through a first convolutional neural network using a spatial attention mechanism to obtain a global feature map includes processing input data, in the forward pass of each layer of the first convolutional neural network model, as follows: performing convolution processing based on a two-dimensional convolution kernel on the input data to generate a convolution feature map; pooling the convolution feature map to generate a pooled feature map; performing activation processing on the pooled feature map to generate an activation feature map; performing global average pooling on the activation feature map to obtain a spatial feature matrix; performing convolution processing and activation processing on the spatial feature matrix to generate a weight vector; and weighting each feature matrix of the activation feature map with the weight value at each position of the weight vector to obtain a generated feature map; wherein the generated feature map output by the last layer of the first convolutional neural network model is the global feature map.
In the method for monitoring the welding quality of the welding equipment of the LED lamp core column and the shell, the step of extracting the first to third interested areas corresponding to three welding areas from the global feature map comprises the following steps: extracting the first to third regions of interest corresponding to the three welding regions from the global feature map based on the positions of the three welding regions in the welding image.
In the method for monitoring the welding quality of the welding equipment of the LED lamp core column and the shell, the step of extracting the first to third interested areas corresponding to three welding areas from the global feature map comprises the following steps: passing the global feature map through a target candidate box extraction network to anchor the first through third regions of interest corresponding to the three weld regions from the global feature map.
In the method for monitoring the welding quality of the welding equipment for the LED lamp core column and the housing, performing feature distribution correction on each of the first to third regions of interest to obtain corrected first to third regions of interest includes: calculating, for each position of each of the first to third regions of interest, the logarithm of one plus the feature value at that position as the local feature representation of that position; calculating the logarithm of one plus the sum of the feature values at all positions of the global feature map as the global feature representation of the global feature map; and dividing the local feature representation of each position of each of the first to third regions of interest by the global feature representation of the global feature map to obtain the corrected first to third regions of interest.
In the method for monitoring the welding quality of the welding equipment of the LED lamp core column and the housing, the fusing the global feature map and the corrected first to third regions of interest to obtain a classification feature map includes: calculating the weighted sum of the corrected first region of interest to the third region of interest according to the position to obtain a region of interest fusion feature map; adjusting the global feature map to be the same as the size of the region-of-interest fusion feature map through linear transformation; and calculating a position-weighted sum of the region-of-interest fused feature map and the global feature map to obtain the classification feature map.
In the method for monitoring the welding quality of the welding equipment of the LED lamp core column and the shell, passing the classification feature map through a classifier to obtain a classification result includes processing the classification feature map with the classifier to generate the classification result according to the following formula:

$$O = \mathrm{softmax}\left\{\left(W_n, B_n\right) : \cdots : \left(W_1, B_1\right) \mid \mathrm{Project}(F)\right\}$$

where $\mathrm{Project}(F)$ represents the projection of the classification feature map $F$ as a vector, $W_1$ to $W_n$ are the weight matrices of the fully connected layers of each layer, and $B_1$ to $B_n$ are the bias matrices of the fully connected layers of each layer.
Compared with the prior art, the welding equipment for the LED lamp stem and housing and its welding-quality monitoring method mine a global feature map carrying globally, spatially implicit associated feature information from a welding image of the welded LED lamp semi-finished product, using a convolutional neural network model with a spatial attention mechanism; extract the high-dimensional implicit feature information of the regions of interest of the welding image from the global feature map; and adjust the local-global characteristics of those features before fusing them. Robustness around minimizing the loss with respect to the global characterization information is thereby introduced into each region-of-interest feature map, the dependency of each region-of-interest feature map's local feature representation on the globally expected features is strengthened, and the classification performance of the region-of-interest feature maps is improved. In this way, the accuracy of detecting the welding quality of the LED lamp stem and housing can be ensured.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is an application scene diagram of an apparatus for welding an LED lamp stem and an outer shell according to an embodiment of the present application.
Fig. 2 is a block diagram of an apparatus for welding an LED stem to a housing according to an embodiment of the present application.
Fig. 3 is a block diagram of a region of interest correction module in an apparatus for welding an LED stem to a housing according to an embodiment of the present application.
Fig. 4 is a block diagram of a feature fusion module in an apparatus for welding an LED stem to a housing according to an embodiment of the present application.
Fig. 5 is a flowchart of a welding quality monitoring method of an LED lamp stem and housing welding device according to an embodiment of the present application.
Fig. 6 is a schematic configuration diagram of a welding quality monitoring method of a welding apparatus for an LED lamp stem and an outer shell according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only a few embodiments of the present application, and not all embodiments of the present application, and it should be understood that the present application is not limited to the example embodiments described herein.
Overview of a scene
As mentioned above, most manufacturers still use manual welding to weld the guide wire of the stem and the pins of the LED light source plate together in the current LED lamp production process, which results in low working efficiency and does not meet the requirements and trends of future automated production.
Welding the stem is delicate work: if the weld is inaccurate, or the chip is damaged during welding, the LED lamp will leak electricity, seriously shortening its service life. Existing equipment, however, cannot detect the stem's welding quality in real time; after stem welding, the part passes directly into the next production step, and defects can only be detected after the LED lamp is fully assembled, which severely reduces production efficiency and yield. A welding apparatus with a real-time detection function is therefore desired.
Chinese utility model patent CN205464894U discloses a stem spot-welding device and an automated LED lamp production system. The automated production system includes the stem spot-welding device, a pressing device, an exhaust device, a head-mounting device, a production line, and supporting auxiliary devices. The stem spot-welding device includes a turntable structure and a plurality of cooperating workpieces arranged around its periphery; the turntable structure includes a ring guide rail and a rotating disc that rotatably encircles the ring guide rail. A lifting groove is provided on the ring guide rail, and a plurality of stem clamping devices are mounted on the rotating disc. On opposite sides of each stem clamping device are a coaxially coupled positioning frame and a swing assembly; the swing assembly presses against the ring guide rail, so that as it rises and falls with the surface of the ring guide rail it drives the connecting shaft to rotate, which in turn rotates the positioning frame and the stem clamping device.
Although this system realizes automated LED assembly-line production and improves production efficiency, it has the following disadvantages: first, welding can only be completed after three stations; second, because the lifting of the circular guide rail surface requires the stem to rotate to different positions for welding, the stem filament may shift during rotation and make the weld inaccurate. The welding of the LED lamp stem and housing therefore needs to be inspected in real time during the welding process to ensure weld quality.
In recent years, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, text signal processing, and the like. In addition, deep learning and neural networks also exhibit a level close to or even exceeding that of humans in the fields of image classification, object detection, semantic segmentation, text translation, and the like.
The deep learning and the development of the neural network provide a new solution and scheme for the real-time detection of the welding quality of the LED lamp core column and the shell.
Accordingly, the inventors of the present application considered that the quality of the weld between the LED stem and the housing can be assessed from a welding image of the semi-finished LED lamp taken after welding. In the technical scheme of the application, therefore, a welding image of the welded LED lamp semi-finished product is first acquired, and a convolutional neural network model, which performs excellently in image feature extraction, is used to extract local implicit features from it. It should be understood that, since position offset of the LED lamp stem during welding may make the weld inaccurate, the welding quality is related to the relative spatial positions of the LED lamp stem and the housing. A first convolutional neural network with a spatial attention mechanism is therefore used to perform global implicit feature mining on the welding image and extract a global feature map with high-dimensional implicit associated features.
In the welding of the LED lamp core column and the shell, the welding quality must be judged from the areas of the welding positions; that is, feature extraction in the high-dimensional space should focus on the local implicit features of the welding regions. Therefore, in the technical solution of the present application, the first to third regions of interest corresponding to the three welding regions are further extracted from the global feature map, based on the positions of the three welding regions in the welding image. In one specific example, the global feature map may be passed through a target candidate box extraction network to anchor the first to third regions of interest corresponding to the three welding regions from the global feature map.
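The extraction step can be sketched as a simple crop of the global feature map, assuming the boxes (here plain (y0, y1, x0, x1) tuples) are supplied either by the known welding-region positions or by an upstream target candidate box network; the fixed boxes in the example are purely illustrative:

```python
import numpy as np

def extract_rois(global_map, boxes):
    """Crop regions of interest from a (C, H, W) global feature map.

    boxes: list of (y0, y1, x0, x1) tuples in feature-map coordinates,
    assumed to come from welding-region positions or a candidate box
    network (not modeled here).
    """
    return [global_map[:, y0:y1, x0:x1] for (y0, y1, x0, x1) in boxes]
```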
It should be understood that each region-of-interest feature map corresponds to a local region of the original object, and a local-global relationship exists between these local regions and the global feature map, which is a semantic expression of the object's features as a whole. Before the region-of-interest feature maps are fused, it is therefore preferable to further adjust the local-global characteristics of their features, specifically:
$$\hat{f}^{(k)}_{(x,y)} = \frac{\log\left(1 + f^{(k)}_{(x,y)}\right)}{\log\left(1 + \sum_{(x',y')} F_{(x',y')}\right)}$$

where $f^{(k)}_{(x,y)}$ is the feature value at each position of the $k$-th region-of-interest feature map ($k = 1, 2, 3$), and $\sum_{(x',y')} F_{(x',y')}$ denotes summing the feature values over all positions of the global feature map $F$.
In this way, through this Cauchy-style normalization, robustness around minimizing the loss with respect to the global characterization information can be introduced into each region-of-interest feature map, so that the feature distribution of each region-of-interest feature map, as a local counterpart of the global feature map, gains better clustering performance. That is, the dependency of each region-of-interest feature map's local feature representation on the globally expected features is improved, and its classification performance is improved.
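The correction reduces to a few lines; `correct_roi` below is a direct NumPy transcription of the formula, applied to one hypothetical region-of-interest feature map:

```python
import numpy as np

def correct_roi(roi, global_map):
    """Local-global feature distribution correction.

    Each ROI feature value f becomes log(1 + f) divided by
    log(1 + sum of all global feature map values), per the formula above.
    """
    denom = np.log1p(global_map.sum())   # log(1 + global sum), shared by all positions
    return np.log1p(roi) / denom         # log(1 + f) position-wise
```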
Based on this, this application has proposed the welding equipment of LED wick post and shell, and it includes: the welding image acquisition module is used for acquiring a welding image of the welded LED lamp semi-finished product; the welding image global coding module is used for enabling the welding image to pass through a first convolution neural network using a spatial attention mechanism to obtain a global feature map; a welding region extraction module for extracting first to third regions of interest corresponding to three welding regions from the global feature map; the interested region correction module is used for respectively carrying out feature distribution correction on each interested region in the first interested region to the third interested region so as to obtain corrected first interested region to the third interested region; the feature fusion module is used for fusing the global feature map and the corrected first to third interested areas to obtain a classification feature map; and the welding quality judging module is used for enabling the classification characteristic diagram to pass through a classifier to obtain a classification result, and the classification result is used for indicating whether the welding quality of the LED lamp core column and the shell meets a preset requirement or not.
Fig. 1 illustrates an application scenario of an apparatus for welding an LED lamp stem and an outer shell according to an embodiment of the present application. As shown in fig. 1, in this application scenario, first, a welding image of a semi-finished product of the LED lamp after welding (e.g., L as illustrated in fig. 1) is acquired by a camera (e.g., C as illustrated in fig. 1) disposed in a welding apparatus (e.g., E as illustrated in fig. 1). Then, the obtained welding image of the semi-finished welded LED lamp is input into a server (for example, a server S as illustrated in fig. 1) in which an algorithm of a welding device for the LED lamp stem and the housing is deployed, wherein the server can process the welding image of the semi-finished welded LED lamp with the algorithm of the welding device for the LED lamp stem and the housing to generate a classification result indicating whether the welding quality of the LED lamp stem and the housing meets a preset requirement.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary System
Fig. 2 illustrates a block diagram of an apparatus for welding an LED stem to an outer shell in accordance with an embodiment of the present application. As shown in fig. 2, an LED lamp core column and shell welding device 200 according to the embodiment of the present application includes: the welding image acquisition module 210 is used for acquiring a welding image of the welded LED lamp semi-finished product; a welding image global coding module 220, configured to pass the welding image through a first convolutional neural network using a spatial attention mechanism to obtain a global feature map; a welding region extraction module 230 for extracting first to third regions of interest corresponding to three welding regions from the global feature map; an interested region correction module 240, configured to perform feature distribution correction on each of the first to third interested regions respectively to obtain corrected first to third interested regions; a feature fusion module 250, configured to fuse the global feature map and the corrected first to third regions of interest to obtain a classification feature map; and the welding quality judging module 260 is used for enabling the classification characteristic diagram to pass through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the welding quality of the LED lamp core column and the shell meets a preset requirement or not.
Specifically, in the embodiment of the present application, the welding image acquisition module 210 and the welding image global coding module 220 are configured to acquire a welding image of the welded LED lamp semi-finished product, and to pass the welding image through a first convolutional neural network using a spatial attention mechanism to obtain a global feature map. It should be appreciated that the welding image of the welded LED lamp semi-finished product can reflect the welding quality between the LED lamp stem and the shell. Therefore, in the technical solution of the present application, a welding image of the welded LED lamp semi-finished product is first acquired, and a convolutional neural network model with excellent performance in image feature extraction is then used to perform local implicit feature extraction on it. It should be understood that, considering that a position deviation of the LED lamp stem during welding may cause inaccurate welding, so that the welding quality is related to the relative spatial positions of the LED lamp stem and the shell, the technical solution of the present application uses a first convolutional neural network with a spatial attention mechanism to perform global local implicit feature mining on the welding image, so as to extract a global feature map with high-dimensional implicit associated features.
More specifically, in an embodiment of the present application, the welding image global coding module is configured to: using the layers of the first convolutional neural network model, perform the following operations on input data in a forward pass through the layers: performing convolution processing based on a two-dimensional convolution kernel on the input data to generate a convolution feature map; pooling the convolution feature map to generate a pooled feature map; performing activation processing on the pooled feature map to generate an activation feature map; performing global average pooling on the activation feature map to obtain a spatial feature matrix; performing convolution processing and activation processing on the spatial feature matrix to generate a weight vector; and weighting each feature matrix of the activation feature map by the weight value of each position in the weight vector to obtain a generated feature map; wherein the generated feature map output by the last layer of the first convolutional neural network model is the global feature map.
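The per-layer attention weighting described above can be sketched as follows. This is a minimal NumPy illustration, not the patent's exact configuration: the sigmoid activation stands in for the unspecified "convolution processing and activation processing" on the pooled vector, and all shapes are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_weighting(activation_map):
    """Weight an activation feature map of shape (C, H, W) by a weight
    vector derived from it, following the module described above:
    global average pooling gives one value per channel; an activation
    (sigmoid here, as an assumption) turns it into a weight vector; each
    channel's feature matrix is then rescaled by its weight value."""
    # global average pooling over the spatial dimensions -> (C,)
    pooled = activation_map.mean(axis=(1, 2))
    # stand-in for "convolution processing and activation processing"
    weights = sigmoid(pooled)
    # weight each feature matrix of the activation map by its weight
    return activation_map * weights[:, None, None]
```

The last layer's output of such a stack would then serve as the global feature map.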
Specifically, in the embodiment of the present application, the welding region extraction module 230 is configured to extract first to third regions of interest corresponding to three welding regions from the global feature map. It should be understood that the quality of the welding needs to be determined according to the area of the welding position in consideration of the welding process of the LED lamp stem and the housing, that is, the extraction of the features in the high-dimensional space needs to be focused more on the local implicit features of the welding area. Therefore, in the technical solution of the present application, the first to third regions of interest corresponding to the three welding regions are further extracted from the global feature map. That is, the first to third regions of interest corresponding to the three welding regions are extracted from the global feature map based on the positions of the three welding regions in the welding image. Accordingly, in one particular example, the global feature map may be passed through a target candidate box extraction network to anchor the first through third regions of interest corresponding to the three weld regions from the global feature map.
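When the positions of the three welding regions in the welding image are known, the extraction reduces to cropping the global feature map at fixed boxes. The sketch below assumes hypothetical box coordinates and a (C, H, W) feature-map layout; a target candidate box extraction network, as in the alternative above, would instead predict these boxes.

```python
import numpy as np

def extract_rois(global_feature_map, boxes):
    """Crop region-of-interest feature maps from a (C, H, W) global
    feature map, given (top, left, height, width) boxes locating the
    three welding regions (coordinates assumed known in advance)."""
    rois = []
    for top, left, height, width in boxes:
        rois.append(global_feature_map[:, top:top + height, left:left + width])
    return rois

# hypothetical positions of the three weld regions on a 32x32 feature grid
boxes = [(0, 0, 8, 8), (12, 12, 8, 8), (24, 24, 8, 8)]
```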
Specifically, in the embodiment of the present application, the region-of-interest correction module 240 is configured to perform feature distribution correction on each of the first to third regions of interest to obtain corrected first to third regions of interest. It should be understood that, since the region-of-interest feature maps correspond to local regions of the original object, they stand in a local-global relationship to the global feature map, which is the feature semantic representation of the original object as a whole; therefore, before the region-of-interest feature maps are fused, their local-global feature characteristics are preferably further adjusted. Through Cauchy normalization, robustness around the minimized loss relative to the global characterization information can be introduced into each region-of-interest feature map, thereby clustering the features of each region-of-interest feature map, as local counterparts of the features of the global feature map, within the overall feature distribution; that is, the dependency of each region-of-interest feature map, as a local representation of the features, on the globally expected features is improved, thereby improving the classification performance of the region-of-interest feature maps.
More specifically, in this embodiment of the present application, the region-of-interest correction module operates as follows. First, for each position of each of the first to third regions of interest, the logarithm of the feature value at that position plus one is calculated as the local feature representation of that position. Then, the logarithm of one plus the sum of the feature values over all positions of the global feature map is calculated as the global feature representation of the global feature map. Finally, the local feature representation of each position of each of the first to third regions of interest is divided by the global feature representation of the global feature map to obtain the corrected first to third regions of interest. Accordingly, in a specific example, the formula for performing feature distribution correction on each of the first to third regions of interest is as follows:
$$\hat{F}_k(x) = \frac{\log\left(F_k(x) + 1\right)}{\log\left(\sum_{x'} F_g(x') + 1\right)}$$

wherein $F_k(x)$ is the feature value at each position $x$ of the $k$-th region-of-interest feature map $F_k$, and $\sum_{x'} F_g(x')$ denotes the summation of the feature values over all positions of the global feature map $F_g$.
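The correction described above — the logarithm of one plus each region-of-interest feature value, divided by the logarithm of one plus the sum over the global feature map — can be sketched as a short NumPy function (function and argument names are illustrative):

```python
import numpy as np

def correct_roi(roi, global_feature_map):
    """Feature-distribution correction of one region-of-interest map:
    numerator   log(F_k(x) + 1) at each position x,
    denominator log(sum of the global feature map over all positions + 1)."""
    denom = np.log(global_feature_map.sum() + 1.0)
    return np.log(roi + 1.0) / denom
```

Applying this to each of the first to third regions of interest yields the corrected regions of interest used in the subsequent fusion.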
Fig. 3 illustrates a block diagram of the region-of-interest correction module in the welding apparatus for an LED lamp stem and shell according to an embodiment of the present application. As shown in fig. 3, the region-of-interest correction module 240 includes: a local feature representation unit 241 for calculating, for each position of each of the first to third regions of interest, the logarithm of the feature value at that position plus one, as the local feature representation of that position; a global feature representation unit 242 for calculating the logarithm of one plus the sum of the feature values over all positions of the global feature map, as the global feature representation of the global feature map; and a correction unit 243 for dividing the local feature representation of each position of each of the first to third regions of interest by the global feature representation of the global feature map to obtain the corrected first to third regions of interest.
Specifically, in this embodiment of the present application, the feature fusion module 250 and the welding quality determination module 260 are configured to fuse the global feature map and the corrected first to third regions of interest to obtain a classification feature map, and to pass the classification feature map through a classifier to obtain a classification result, where the classification result is used to indicate whether the welding quality of the LED lamp stem and the shell meets a preset requirement. That is, in the technical solution of the present application, after the corrected first to third regions of interest are obtained, they are further fused with the global feature map to obtain a classification feature map. Accordingly, in one specific example, first, a position-wise weighted sum of the corrected first to third regions of interest is calculated to obtain a region-of-interest fusion feature map; then, the global feature map is adjusted, through linear transformation, to have the same size as the region-of-interest fusion feature map; next, a position-wise weighted sum of the region-of-interest fusion feature map and the global feature map is calculated to obtain the classification feature map. Further, the classification feature map can be passed through a classifier to obtain a classification result indicating whether the welding quality of the LED lamp stem and the shell meets a preset requirement.
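The two-stage fusion above can be sketched as follows. The fusion weights and the resize method are assumptions: the patent does not fix the weights, and the "linear transformation" resize is approximated here by block averaging under the assumption that the global map's size is an integer multiple of the region-of-interest size.

```python
import numpy as np

def fuse(corrected_rois, global_feature_map,
         roi_weights=(1 / 3, 1 / 3, 1 / 3), alpha=0.5):
    """Fuse corrected ROI maps with the global feature map:
    1. position-wise weighted sum of the three corrected ROI maps;
    2. resize the global (C, GH, GW) map to the ROI size (C, H, W) —
       block averaging here, a stand-in for the linear transformation;
    3. position-wise weighted sum of the two maps (alpha is illustrative)."""
    fused_roi = sum(w * r for w, r in zip(roi_weights, corrected_rois))
    c, h, w = fused_roi.shape
    gc, gh, gw = global_feature_map.shape
    resized = global_feature_map.reshape(gc, h, gh // h, w, gw // w).mean(axis=(2, 4))
    return alpha * fused_roi + (1 - alpha) * resized
```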
More specifically, in this embodiment of the present application, the welding quality determination module is further configured to process the classification feature map with the classifier to generate the classification result according to the following formula:
$$\operatorname{softmax}\left\{\left(W_n, B_n\right) : \cdots : \left(W_1, B_1\right) \,\middle|\, \operatorname{Project}(F)\right\}$$

wherein $\operatorname{Project}(F)$ represents the projection of the classification feature map as a vector, $W_1$ to $W_n$ are the weight matrices of the fully connected layers of each layer, and $B_1$ to $B_n$ represent the bias matrices of the fully connected layers of each layer.
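The classifier described above — a projected feature vector passed through a stack of fully connected layers followed by a softmax — can be sketched as below. The layer count, sizes, and random weights are purely illustrative.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(feature_vector, layers):
    """Pass the projected classification feature vector through a stack
    of fully connected layers (W_i, B_i) and a final softmax.
    `layers` is a list of (weight matrix, bias vector) pairs."""
    x = feature_vector
    for weight, bias in layers:
        x = weight @ x + bias
    return softmax(x)
```

For a two-class result (quality meets / does not meet the preset requirement), the last layer would have two outputs and the predicted class is the argmax of the returned probabilities.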
Fig. 4 illustrates a block diagram of the feature fusion module in the welding apparatus for an LED lamp stem and shell according to an embodiment of the present application. As shown in fig. 4, the feature fusion module 250 includes: a region-of-interest fusion unit 251 for calculating a position-wise weighted sum of the corrected first to third regions of interest to obtain a region-of-interest fusion feature map; a linear transformation unit 252 for adjusting the global feature map, through linear transformation, to have the same size as the region-of-interest fusion feature map; and a fusion unit 253 for calculating a position-wise weighted sum of the region-of-interest fusion feature map and the global feature map to obtain the classification feature map.
In summary, the welding apparatus 200 for an LED lamp stem and shell based on the embodiment of the present application has been illustrated. It mines, through a convolutional neural network model with a spatial attention mechanism, a global feature map with globally spatial implicit associated feature information from the welding image of the welded LED lamp semi-finished product, and extracts the high-dimensional implicit feature information of the regions of interest of the welding image from the global feature map. It then further adjusts the local-global characteristics of these features before fusion, so as to introduce, into each region-of-interest feature map, robustness around the minimized loss relative to the global characterization information, thereby improving the dependency of each region-of-interest feature map, as a local representation of the features, on the globally expected features, and improving the classification performance of the region-of-interest feature maps. In this way, the accuracy of detecting the welding quality of the LED lamp stem and the shell can be ensured.
As described above, the LED lamp stem and shell welding apparatus 200 according to the embodiment of the present application can be implemented in various terminal devices, such as a server on which the algorithm of the welding apparatus for the LED lamp stem and shell is deployed. In one example, the LED stem and shell welding apparatus 200 according to embodiments of the present application may be integrated into a terminal device as a software module and/or a hardware module. For example, the welding apparatus 200 can be a software module in the operating system of the terminal device, or can be an application developed for the terminal device; of course, the welding apparatus 200 can also be one of many hardware modules of the terminal device.
Alternatively, in another example, the LED stem and housing welding device 200 and the terminal device may be separate devices, and the LED stem and housing welding device 200 may be connected to the terminal device via a wired and/or wireless network and transmit the interaction information in the agreed data format.
Exemplary method
Fig. 5 illustrates a flow chart of a method for monitoring the welding quality of a welding apparatus for an LED lamp stem and shell. As shown in fig. 5, the method for monitoring the welding quality of a welding apparatus for an LED lamp stem and shell according to an embodiment of the present application includes the steps of: S110, acquiring a welding image of the welded LED lamp semi-finished product; S120, passing the welding image through a first convolutional neural network using a spatial attention mechanism to obtain a global feature map; S130, extracting first to third regions of interest corresponding to three welding regions from the global feature map; S140, performing feature distribution correction on each of the first to third regions of interest respectively to obtain corrected first to third regions of interest; S150, fusing the global feature map and the corrected first to third regions of interest to obtain a classification feature map; and S160, passing the classification feature map through a classifier to obtain a classification result, the classification result being used to indicate whether the welding quality of the LED lamp stem and the shell meets a preset requirement.
Fig. 6 illustrates an architecture diagram of a welding quality monitoring method of a welding device of an LED lamp stem and an enclosure according to an embodiment of the present application. As shown in fig. 6, in the network architecture of the welding quality monitoring method of the LED lamp stem and housing welding device, first, the obtained welding image (e.g., P1 as illustrated in fig. 6) is passed through a first convolution neural network (e.g., CNN as illustrated in fig. 6) using a spatial attention mechanism to obtain a global feature map (e.g., F as illustrated in fig. 6); next, first to third regions of interest (for example, F1, F2, and F3 as illustrated in fig. 6) corresponding to three welding regions are extracted from the global feature map; then, feature distribution correction is performed on each of the first to third regions of interest to obtain corrected first to third regions of interest (for example, FC1, FC2, and FC3 as illustrated in fig. 6); then, fusing the global feature map and the corrected first to third regions of interest to obtain a classification feature map (e.g., FC as illustrated in fig. 6); and finally, passing the classification characteristic map through a classifier (such as the classifier illustrated in fig. 6) to obtain a classification result, wherein the classification result is used for indicating whether the welding quality of the LED lamp core column and the shell meets the preset requirement or not.
More specifically, in steps S110 and S120, a welding image of the welded LED lamp semi-finished product is acquired, and the welding image is passed through a first convolutional neural network using a spatial attention mechanism to obtain a global feature map. It should be appreciated that the welding image of the welded LED lamp semi-finished product can reflect the welding quality between the LED lamp stem and the shell. Therefore, in the technical solution of the present application, a welding image of the welded LED lamp semi-finished product is first acquired, and a convolutional neural network model with excellent performance in image feature extraction is then used to perform local implicit feature extraction on it. It should be understood that, considering that a position deviation of the LED lamp stem during welding may cause inaccurate welding, so that the welding quality is related to the relative spatial positions of the LED lamp stem and the shell, the technical solution of the present application uses a first convolutional neural network with a spatial attention mechanism to perform global local implicit feature mining on the welding image, so as to extract a global feature map with high-dimensional implicit associated features.
More specifically, in step S130, first to third regions of interest corresponding to three welding regions are extracted from the global feature map. It should be understood that the quality of the welding needs to be determined according to the area of the welding position in consideration of the welding process of the LED lamp stem and the housing, that is, the extraction of the features in the high-dimensional space needs to be focused more on the local implicit features of the welding area. Therefore, in the technical solution of the present application, the first to third regions of interest corresponding to the three welding regions are further extracted from the global feature map. That is, the first to third regions of interest corresponding to the three welding regions are extracted from the global feature map based on the positions of the three welding regions in the welding image. Accordingly, in one particular example, the global feature map may be passed through a target candidate box extraction network to anchor the first through third regions of interest corresponding to the three weld regions from the global feature map.
More specifically, in step S140, feature distribution correction is performed on each of the first to third regions of interest to obtain corrected first to third regions of interest. It should be understood that, since the region-of-interest feature maps correspond to local regions of the original object, they stand in a local-global relationship to the global feature map, which is the feature semantic representation of the original object as a whole; therefore, before the region-of-interest feature maps are fused, their local-global feature characteristics are preferably further adjusted. Through Cauchy normalization, robustness around the minimized loss relative to the global characterization information can be introduced into each region-of-interest feature map, thereby clustering the features of each region-of-interest feature map, as local counterparts of the features of the global feature map, within the overall feature distribution; that is, the dependency of each region-of-interest feature map, as a local representation of the features, on the globally expected features is improved, thereby improving the classification performance of the region-of-interest feature maps.
More specifically, in step S150 and step S160, the global feature map and the corrected first to third regions of interest are fused to obtain a classification feature map, and the classification feature map is passed through a classifier to obtain a classification result, where the classification result is used to indicate whether the welding quality of the LED stem and the housing meets a preset requirement. That is, in the technical solution of the present application, after the corrected first to third regions of interest are obtained, they are further fused with the global feature map to obtain a classification feature map. Accordingly, in one specific example, first, a position-weighted sum of the corrected first to third regions of interest is calculated to obtain a region-of-interest fusion feature map; then, the global feature map is adjusted to have the same size with the region-of-interest fusion feature map through linear transformation; then, the weighted sum of the region of interest fusion feature map and the global feature map by position is calculated to obtain the classification feature map. Further, the classification characteristic map can be passed through a classifier to obtain a classification result indicating whether the welding quality of the LED lamp stem and the outer shell meets a preset requirement.
In conclusion, the method for monitoring the welding quality of the welding apparatus for an LED lamp stem and shell based on the embodiment of the present application has been elucidated. The method mines, through a convolutional neural network model with a spatial attention mechanism, a global feature map with globally spatial implicit associated feature information from the welding image of the welded LED lamp semi-finished product, and extracts the high-dimensional implicit feature information of the regions of interest of the welding image from the global feature map. It then further adjusts the local-global characteristics of these features before fusion, so as to introduce, into each region-of-interest feature map, robustness around the minimized loss relative to the global characterization information, thereby improving the dependency of each region-of-interest feature map, as a local representation of the features, on the globally expected features, and improving the classification performance of the region-of-interest feature maps. In this way, the accuracy of detecting the welding quality of the LED lamp stem and the shell can be ensured.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," and "having" are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. An apparatus for welding an LED stem to a housing, comprising: the welding image acquisition module is used for acquiring a welding image of the welded LED lamp semi-finished product; the welding image global coding module is used for enabling the welding image to pass through a first convolution neural network using a spatial attention mechanism to obtain a global feature map; a welding region extraction module for extracting first to third regions of interest corresponding to three welding regions from the global feature map; the interested region correction module is used for respectively carrying out feature distribution correction on each interested region in the first interested region to the third interested region so as to obtain corrected first interested region to the third interested region; the feature fusion module is used for fusing the global feature map and the corrected first to third interested areas to obtain a classification feature map; and the welding quality judging module is used for enabling the classification characteristic diagram to pass through a classifier to obtain a classification result, and the classification result is used for indicating whether the welding quality of the LED lamp core column and the shell meets a preset requirement or not.
2. The LED stem to envelope welding apparatus of claim 1, wherein the weld image global coding module is configured to: using the layers of the first convolutional neural network model, perform the following operations on input data in a forward pass through the layers: performing convolution processing based on a two-dimensional convolution kernel on the input data to generate a convolution feature map; pooling the convolution feature map to generate a pooled feature map; performing activation processing on the pooled feature map to generate an activation feature map; performing global average pooling on the activation feature map to obtain a spatial feature matrix; performing convolution processing and activation processing on the spatial feature matrix to generate a weight vector; and weighting each feature matrix of the activation feature map by the weight value of each position in the weight vector to obtain a generated feature map; wherein the generated feature map output by the last layer of the first convolutional neural network model is the global feature map.
3. The LED stem and envelope welding apparatus of claim 2, wherein the weld region extraction module is further configured to extract the first to third regions of interest corresponding to the three weld regions from the global feature map based on the positions of the three weld regions in the weld image.
4. The LED stem and envelope welding apparatus of claim 2, wherein the weld region extraction module is further configured to pass the global feature map through a target candidate box extraction network to anchor the first through third regions of interest corresponding to the three weld regions from the global feature map.
5. The LED stem to envelope welding apparatus of claim 4, wherein the region of interest correction module comprises: a local feature representation unit configured to calculate a logarithmic function value of a feature value and a weighted value of one for each position of each of the first to third regions of interest as a local feature representation for each position of each of the first to third regions of interest; a global feature representation unit, configured to calculate a sum value of feature values of all positions in the global feature map and one as a logarithmic function value as a global feature representation of the global feature map; and the correction unit is used for dividing the local feature representation of each position of each interested area in the first interested area to the third interested area by the global feature representation of the global feature map to obtain the corrected first interested area to the third interested area.
6. The LED lamp stem and shell welding device according to claim 5, wherein the feature fusion module comprises: a region-of-interest fusion unit for calculating a position-wise weighted sum of the corrected first to third regions of interest to obtain a region-of-interest fusion feature map; a linear transformation unit for adjusting the global feature map, through linear transformation, to have the same size as the region-of-interest fusion feature map; and a fusion unit for calculating a position-wise weighted sum of the region-of-interest fusion feature map and the global feature map to obtain the classification feature map.
7. The LED lamp stem and shell welding apparatus of claim 6, wherein the welding quality determination module is further configured to process the classification feature map with the classifier to generate the classification result according to the following formula:

softmax{(W_n, B_n) : … : (W_1, B_1) | Project(F)}

where Project(F) denotes projecting the classification feature map as a vector, W_1 to W_n are the weight matrices of the fully connected layers of each layer, and B_1 to B_n are the bias matrices of the fully connected layers of each layer.
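As far as it can be reconstructed from the surrounding definitions, the classifier formula above is a stack of fully connected layers (W_i, B_i) applied to the flattened feature map, followed by softmax. A sketch with hypothetical layer sizes:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # subtract max for numerical stability
    return e / e.sum()

def classify(feat_map, weights, biases):
    # Project(F): flatten the classification feature map into a vector.
    v = feat_map.reshape(-1)
    # Apply each fully connected layer (W_i, B_i) in turn.
    for W, B in zip(weights, biases):
        v = W @ v + B
    return softmax(v)         # class probabilities

rng = np.random.default_rng(0)
fmap = rng.standard_normal((2, 4, 4))               # 32 values
Ws = [rng.standard_normal((8, 32)), rng.standard_normal((2, 8))]
Bs = [np.zeros(8), np.zeros(2)]
probs = classify(fmap, Ws, Bs)                      # 2 classes: pass / fail
```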
8. A welding quality monitoring method for welding equipment for an LED lamp core column and a shell, characterized by comprising: acquiring a welding image of a welded LED lamp semi-finished product; passing the welding image through a first convolutional neural network using a spatial attention mechanism to obtain a global feature map; extracting first to third regions of interest corresponding to three welding regions from the global feature map; performing feature distribution correction on each of the first to third regions of interest to obtain corrected first to third regions of interest; fusing the global feature map and the corrected first to third regions of interest to obtain a classification feature map; and processing the classification feature map with a classifier to obtain a classification result indicating whether the welding quality of the LED lamp core column and the shell meets a predetermined requirement.
9. The welding quality monitoring method for the LED lamp stem and shell welding equipment of claim 8, wherein passing the welding image through a first convolutional neural network using a spatial attention mechanism to obtain a global feature map comprises: performing, in the forward pass of each layer of the first convolutional neural network model, the following operations on the input data: performing convolution processing based on a two-dimensional convolution kernel on the input data to generate a convolution feature map; pooling the convolution feature map to generate a pooled feature map; activating the pooled feature map to generate an activation feature map; performing global average pooling on the activation feature map to obtain a spatial feature matrix; performing convolution and activation processing on the spatial feature matrix to generate a weight vector; and weighting each feature matrix of the activation feature map by the weight value of each position in the weight vector to obtain a generated feature map; wherein the generated feature map output by the last layer of the first convolutional neural network model is the global feature map.
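The per-layer forward pass described in claim 9 can be sketched as follows; the 1x1 channel-mixing "convolution", 2x2 max pooling, channel-wise averaging, and sigmoid weighting are simplifying assumptions standing in for the learned kernels:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def spatial_attention_layer(x):
    """Simplified forward pass for one layer: convolution -> pooling ->
    activation -> spatial pooling -> weight generation -> weighting.

    x: (C, H, W) input with even H and W.
    """
    c, h, w = x.shape
    rng = np.random.default_rng(0)
    # Convolution (a 1x1 channel mix stands in for a learned 2D kernel).
    conv = np.einsum('oc,chw->ohw',
                     rng.standard_normal((c, c)) / np.sqrt(c), x)
    # 2x2 max pooling.
    pooled = conv.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))
    # Activation.
    act = relu(pooled)
    # Average over channels -> (H/2, W/2) spatial feature matrix.
    spatial = act.mean(axis=0)
    # "Convolution + activation" on the spatial matrix -> per-position
    # weights (a plain sigmoid is used here).
    weights = 1.0 / (1.0 + np.exp(-spatial))
    # Weight every channel of the activation map position-wise.
    return act * weights[None, :, :]

out = spatial_attention_layer(np.ones((3, 8, 8)))
```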
10. The welding quality monitoring method for the LED lamp core column and shell welding equipment of claim 9, wherein performing feature distribution correction on each of the first to third regions of interest to obtain the corrected first to third regions of interest comprises: computing, for each position of each of the first to third regions of interest, the logarithm of the feature value at that position plus one as the local feature representation of that position; computing the logarithm of the sum of the feature values at all positions of the global feature map plus one as the global feature representation of the global feature map; and dividing the local feature representation of each position of each of the first to third regions of interest by the global feature representation of the global feature map to obtain the corrected first to third regions of interest.
CN202210818288.8A 2022-07-13 2022-07-13 Welding equipment for LED lamp core column and shell and welding quality monitoring method thereof Active CN114913401B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210818288.8A CN114913401B (en) 2022-07-13 2022-07-13 Welding equipment for LED lamp core column and shell and welding quality monitoring method thereof

Publications (2)

Publication Number Publication Date
CN114913401A true CN114913401A (en) 2022-08-16
CN114913401B CN114913401B (en) 2022-09-30

Family

ID=82772607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210818288.8A Active CN114913401B (en) 2022-07-13 2022-07-13 Welding equipment for LED lamp core column and shell and welding quality monitoring method thereof

Country Status (1)

Country Link
CN (1) CN114913401B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111080628A (en) * 2019-12-20 2020-04-28 湖南大学 Image tampering detection method and device, computer equipment and storage medium
CN112633141A (en) * 2020-12-21 2021-04-09 南京大渔棠网络科技有限公司 Method for detecting concrete impact resistance based on double attention mechanism
CN112733786A (en) * 2021-01-20 2021-04-30 成都莉娣扬科技有限公司 Detection method for pin connection stability of communication transformer
CN113538331A (en) * 2021-05-13 2021-10-22 中国地质大学(武汉) Metal surface damage target detection and identification method, device, equipment and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115091211A (en) * 2022-08-22 2022-09-23 徐州康翔精密制造有限公司 Numerical control turning and grinding combined machine tool and production control method thereof
CN115091211B (en) * 2022-08-22 2023-02-28 徐州康翔精密制造有限公司 Numerical control turning and grinding combined machine tool and production control method thereof
WO2024045244A1 (en) * 2022-08-31 2024-03-07 福建省龙氟新材料有限公司 Energy management control system for anhydrous hydrogen fluoride production and control method therefor
CN115471674A (en) * 2022-09-20 2022-12-13 浙江科达利实业有限公司 Performance monitoring system of new energy vehicle carbon dioxide pipe based on image processing
CN115560274A (en) * 2022-10-14 2023-01-03 慈溪市远辉照明电器有限公司 Easily wiring type tri-proof light
CN116026528A (en) * 2023-01-14 2023-04-28 慈溪市远辉照明电器有限公司 High waterproof safe type tri-proof light

Also Published As

Publication number Publication date
CN114913401B (en) 2022-09-30

Similar Documents

Publication Publication Date Title
CN114913401B (en) Welding equipment for LED lamp core column and shell and welding quality monitoring method thereof
CN109635666B (en) Image target rapid detection method based on deep learning
CN113077453B (en) Circuit board component defect detection method based on deep learning
CN112767391A (en) Power grid line part defect positioning method fusing three-dimensional point cloud and two-dimensional image
CN110619623B (en) Automatic identification method for heating of joint of power transformation equipment
JP2019145085A (en) Method, device, and computer-readable medium for adjusting point cloud data acquisition trajectory
US20230281784A1 (en) Industrial Defect Recognition Method and System, Computing Device, and Storage Medium
CN112733672B (en) Three-dimensional target detection method and device based on monocular camera and computer equipment
CN114782423B (en) Forming quality detection system and method for low-voltage coil of dry-type transformer
CN112485709A (en) Method, device, medium and electronic equipment for detecting internal circuit abnormality
CN116385958A (en) Edge intelligent detection method for power grid inspection and monitoring
CN115311618A (en) Assembly quality inspection method based on deep learning and object matching
CN115294063A (en) Method, device, system, electronic device and medium for detecting defect of electronic component
US20040109599A1 (en) Method for locating the center of a fiducial mark
Mo et al. PVDet: Towards pedestrian and vehicle detection on gigapixel-level images
US7266233B2 (en) System and method for measuring an object
Chu et al. OA-BEV: Bringing Object Awareness to Bird's-Eye-View Representation for Multi-Camera 3D Object Detection
Zhai et al. Towards generic image manipulation detection with weakly-supervised self-consistency learning
Jia et al. Autosplice: A text-prompt manipulated image dataset for media forensics
CN115131735A (en) Cooling medium temperature control system and temperature control method after hardware tool heat treatment
CN112798608B (en) Optical detection device and optical detection method for side wall of inner cavity of mobile phone camera support
Weng et al. Development of an adaptive template for fast detection of lithographic patterns of light-emitting diode chips
CN112686155A (en) Image recognition method, image recognition device, computer-readable storage medium and processor
CN112950466A (en) Image splicing method based on semantic object matching
Yang et al. Mixsup: Mixed-grained supervision for label-efficient lidar-based 3d object detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant