CN117173147A - Surface treatment equipment and method for steel strip processing

Info

Publication number
CN117173147A
Authority
CN
China
Prior art keywords
feature map
classification
convolution
steel strip
channel
Prior art date
Legal status
Pending
Application number
CN202311209053.XA
Other languages
Chinese (zh)
Inventor
黄志光
童英雄
涂小远
易雨根
Current Assignee
Jiangxi Tiangang Technology Co ltd
Original Assignee
Jiangxi Tiangang Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Jiangxi Tiangang Technology Co ltd
Priority to CN202311209053.XA
Publication of CN117173147A

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application relates to the field of intelligent detection, and particularly discloses a surface treatment apparatus for steel strip processing and a method thereof, which use artificial intelligence technology based on a deep neural network to extract and encode texture features and color features of the surface image of the steel strip to be detected, so as to obtain a classification label indicating whether the steel strip surface processing is qualified. Therefore, by analyzing the surface image of the steel strip to be detected, the detection precision is improved and the labor cost is reduced.

Description

Surface treatment equipment and method for steel strip processing
Technical Field
The application relates to the field of intelligent detection, and in particular to a surface treatment apparatus for steel strip processing and a method thereof.
Background
The surface treatment of a steel strip includes a number of methods and steps, such as pickling to remove the oxide layer from the strip surface, electroplating, hot-dip coating, and spraying to prevent corrosion. Meanwhile, whether the treatment of the steel strip surface is qualified needs to be detected. The traditional methods for inspecting the steel strip surface are visual inspection, hand-touch inspection, chemical analysis and electrochemical testing. However, these methods have several drawbacks: 1. Subjectivity: visual inspection and hand-touch inspection are susceptible to human subjective factors, so misjudgments and errors may occur. 2. High time and labor cost: the traditional methods must be carried out manually and are time-consuming and labor-intensive. 3. No real-time monitoring: the traditional methods are usually carried out in a specific environment and cannot monitor the treatment condition of the steel strip surface in real time.
Therefore, an optimized surface treatment scheme for steel strip processing is desired.
Disclosure of Invention
The present application has been made to solve the above-mentioned technical problems. The embodiments of the application provide a surface treatment apparatus for steel strip processing and a method thereof, which use artificial intelligence technology based on a deep neural network to extract and encode texture features and color features of the surface image of the steel strip to be detected, so as to obtain a classification label indicating whether the steel strip surface processing is qualified. Therefore, by analyzing the surface image of the steel strip to be detected, the detection precision is improved and the labor cost is reduced.
According to one aspect of the present application, there is provided a surface treatment apparatus for processing a steel strip, comprising:
an image acquisition module, configured to acquire a surface image of the steel strip to be detected;
a histogram conversion module, configured to convert the surface image of the steel strip to be detected from an RGB color space to a YCbCr color space and extract LBP texture feature histograms of the respective channels to obtain a multi-channel LBP texture feature histogram;
a channel attention module, configured to pass the multi-channel LBP texture feature histogram through a first convolutional neural network model using a channel attention mechanism to obtain a texture feature map;
a mixed convolution module, configured to pass the surface image of the steel strip to be detected through a second convolutional neural network model comprising a plurality of mixed convolution layers to obtain a color feature map;
a fusion module, configured to fuse the color feature map and the texture feature map to obtain a classification feature map;
a sparsification module, configured to perform low-dimensional spatial probability sparsification on the classification feature map to obtain a probability sparse classification feature map;
and a classification result generation module, configured to pass the probability sparse classification feature map through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the steel strip surface processing is qualified.
In the above-described surface treatment apparatus for steel strip processing, the channel attention module is configured to:
input data are respectively carried out in forward transfer of layers by using each layer of the first convolutional neural network:
performing convolution processing on the input data based on a two-dimensional convolution check to generate a convolution feature map;
pooling the convolution feature map to generate a pooled feature map;
performing activation processing on the pooled feature map to generate an activated feature map;
calculating, for each channel in the activated feature map, the quotient of the mean feature value of the feature matrix corresponding to that channel and the sum of the mean feature values of the feature matrices corresponding to all channels, as the weighting coefficient of the feature matrix corresponding to that channel;
weighting the feature matrix of each channel in the activated feature map by the corresponding weighting coefficient to generate a channel attention feature map;
wherein the output of the last layer of the first convolutional neural network is the texture feature map.
In the above-described surface treatment apparatus for steel strip processing, each of the mixed convolution layers of the second convolutional neural network model includes parallel first, second, third and fourth convolution branch structures, and a multi-scale fusion structure connected with the first to fourth convolution branch structures, wherein the first convolution branch uses a first convolution kernel having a first size, the second convolution branch uses a second convolution kernel having the first size and a first dilation rate, the third convolution branch uses a third convolution kernel having the first size and a second dilation rate, and the fourth convolution branch uses a fourth convolution kernel having the first size and a third dilation rate.
In the above surface treatment apparatus for steel strip processing, the hybrid convolution module is further configured to:
each mixed convolution layer using the second convolutional neural network model performs respective processing on input data in forward transfer of the layer:
Performing convolutional encoding on the surface image of the steel strip to be detected by using a first convolutional check with a first size to obtain a first scale feature map;
performing convolutional encoding on the surface image of the steel strip to be detected by using a second convolutional check with the first void ratio to obtain a second scale feature map;
performing convolutional encoding on the surface image of the steel strip to be detected by using a third convolutional check with a second void ratio to obtain a third scale feature map;
performing convolutional encoding on the surface image of the steel strip to be detected by using a fourth convolutional check with third void ratio to obtain a fourth scale feature map;
wherein the first, second, third, and fourth convolution kernels have the same size, and the second, third, and fourth convolution kernels have different void fractions;
performing aggregation on the first scale feature map, the second scale feature map, the third scale feature map and the fourth scale feature map along a channel dimension to obtain an aggregated feature map;
pooling the aggregate feature map to generate a pooled feature map;
performing activation processing on the pooled feature map to generate an activated feature map;
Wherein the output of the last layer of the convolutional neural network model comprising a plurality of mixed convolutional layers is the color feature map.
In the above surface treatment apparatus for steel strip processing, the fusion module is configured to:
fusing the texture feature map and the color feature map by using the following cascade formula to obtain a classification feature map;
wherein, the cascade formula is:
F = Concat[F₁, F₂]
wherein F₁ represents the texture feature map, F₂ represents the color feature map, F represents the classification feature map, and Concat[·,·] represents the concatenation function.
In the above-described surface treatment apparatus for steel strip processing, the sparsification module includes:
a probability unit, configured to input the classification feature map into a Softmax activation function to map the feature values of all positions in the classification feature map into a probability space, so as to obtain a probabilistic classification feature map;
an affine unit for calculating affine transformation matrices of respective feature matrices along a channel dimension of the probabilistic classification feature map by least squares fitting;
the probability sparsification unit is used for calculating affine probability density distribution of each feature matrix of the probabilistic classification feature map along the channel dimension in a target space through affine transformation matrixes of each feature matrix of the probabilistic classification feature map along the channel dimension and feature values of each position of the probabilistic classification feature map so as to obtain the probabilistic sparse classification feature map.
In the above-described surface treatment apparatus for steel strip processing, the classification result generation module is configured to:
processing the probability sparse classification feature map with the classifier in the following classification formula to obtain the classification result;
wherein, the classification formula is:
O = softmax{(W_c, B_c) | Project(F)}
wherein Project(F) represents projecting the probability sparse classification feature map as a vector, W_c is a weight matrix, B_c is a bias vector, softmax is the normalized exponential function, and O represents the classification result.
According to another aspect of the present application, there is also provided a detection method for a surface treatment apparatus for steel strip processing, comprising:
acquiring a surface image of the steel strip to be detected;
converting the surface image of the steel strip to be detected from an RGB color space to a YCbCr color space and extracting LBP texture feature histograms of the respective channels to obtain a multi-channel LBP texture feature histogram;
passing the multi-channel LBP texture feature histogram through a first convolutional neural network model using a channel attention mechanism to obtain a texture feature map;
passing the surface image of the steel strip to be detected through a second convolutional neural network model comprising a plurality of mixed convolution layers to obtain a color feature map;
fusing the color feature map and the texture feature map to obtain a classification feature map;
performing low-dimensional spatial probability sparsification on the classification feature map to obtain a probability sparse classification feature map;
and passing the probability sparse classification feature map through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the steel strip surface processing is qualified.
Compared with the prior art, the surface treatment apparatus for steel strip processing and the method thereof provided by the application use artificial intelligence technology based on a deep neural network to extract and encode texture features and color features of the surface image of the steel strip to be detected, so as to obtain a classification label indicating whether the steel strip surface processing is qualified. Therefore, by analyzing the surface image of the steel strip to be detected, the detection precision is improved and the labor cost is reduced.
Drawings
The above and other objects, features and advantages of the present application will become more apparent from the following detailed description of embodiments of the present application with reference to the accompanying drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification; they illustrate the application and, together with the embodiments of the application, serve to explain the application, and do not constitute a limitation of the application. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 illustrates a block diagram of a surface treatment apparatus for processing a steel strip according to an embodiment of the present application.
Fig. 2 illustrates a system architecture diagram of a surface treatment apparatus for processing a steel strip according to an embodiment of the present application.
Fig. 3 illustrates a flowchart of a detection method of a surface treatment apparatus for steel strip processing according to an embodiment of the present application.
Fig. 4 illustrates a block diagram of an electronic device according to an embodiment of the application.
Detailed Description
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Summary of the application
As described above in the background section, the surface treatment of the steel strip includes a number of methods and steps such as pickling to remove the oxide layer from the surface of the steel strip, electroplating, hot-dipping, spraying to prevent corrosion, etc. Meanwhile, whether the treatment condition of the surface of the steel strip is qualified or not needs to be detected. The traditional methods for detecting the surface of the steel strip are visual inspection, hand touch detection, chemical analysis and electrochemical test methods. However, these methods have some drawbacks: 1. subjectivity: visual inspection and hand touch detection are susceptible to human subjective factors, and as a result, erroneous judgment and errors may occur. 2. Time and labor cost are high: the traditional method needs to be carried out manually, and is time-consuming and labor-consuming. 3. Cannot monitor in real time: the traditional method is often carried out under a specific environment, and the treatment condition of the surface of the steel belt cannot be monitored in real time. Therefore, an optimized surface treatment scheme for steel strip processing is desired.
Aiming at the above technical problems, a detection scheme for surface treatment in steel strip processing is provided, wherein artificial intelligence technology based on a deep neural network is used to extract and encode the texture and color features of the surface image of the steel strip to be detected, so as to obtain a classification label indicating whether the steel strip surface processing is qualified. Therefore, by analyzing the surface image of the steel strip to be detected, the detection precision is improved and the labor cost is reduced.
At present, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, speech signal processing, and the like. In addition, deep learning and neural networks have also shown levels approaching and even exceeding humans in the fields of image classification, object detection, semantic segmentation, text translation, and the like.
In recent years, the development of deep learning and neural networks provides new solutions and solutions for surface treatment equipment for steel strip processing.
Specifically, first, an image of the surface of a steel strip to be inspected is acquired.
And then, converting the surface image of the steel belt to be detected from an RGB color space to a YCbCr color space and extracting LBP texture characteristic histograms of all channels to obtain a multi-channel LBP texture characteristic histogram. The YCbCr color space separates luminance information (Y channel) from chrominance information (Cb and Cr channels) of an image. The texture characteristics of the steel strip surface are typically related to brightness and less to chromaticity. By converting the image to YCbCr color space, texture features can be better emphasized, reducing crosstalk of the chrominance information. Local Binary Pattern (LBP) is a commonly used texture feature description method that extracts texture information by comparing the gray value magnitude relationship of a pixel point to its surrounding neighborhood pixels. Each channel of the YCbCr color space is respectively extracted with LBP texture characteristic histogram, and the texture characteristic change in different channels can be captured. The LBP texture characteristic histogram of each channel is fused into a multi-channel LBP texture characteristic histogram, and the texture characteristic information of different channels can be comprehensively considered. The fused features have richer expression capability, and can improve the detection accuracy of the surface treatment of the steel strip.
The multi-channel LBP texture feature histogram is then passed through a first convolutional neural network model using a channel attention mechanism to obtain a texture feature map. Channel attention mechanisms are a variation of attention mechanisms that automatically learn weights between channels so that the network can adaptively weight the characteristics of different channels. In a steel strip surface treatment task, the LBP texture features of different channels may have different importance and discrimination. By using the channel attention mechanism, the network can automatically learn the weights of each channel to enhance the attention to important channels and to attenuate the attention to unimportant channels. By using a first convolutional neural network model of the channel attention mechanism, the extraction of texture features with differentiation can be enhanced by learning the weights of each channel. Therefore, the expression capability and the discrimination capability of texture features can be improved, and the detection performance of the surface treatment of the steel strip is further enhanced. The channel weight can be automatically learned through the network, so that the requirements of different samples and different tasks can be met, and the generalization capability and the adaptability of the model are improved.
At the same time, the surface image of the steel strip to be detected is passed through a second convolutional neural network model containing a plurality of mixed convolution layers to obtain a color feature map. The color information of the steel strip surface is of great significance for the surface treatment task. The color features can reflect information such as dyeing conditions, stains and coatings on the steel strip surface, which is important for detecting and judging the effect and quality of the surface treatment. The mixed convolution layer is a convolution operation that combines receptive fields of different scales and can capture features of different scales in an image. The color features of the steel strip surface image can be effectively extracted through a second convolutional neural network model comprising a plurality of mixed convolution layers. The mixed convolution layer performs convolution operations on the image with convolution kernels of different receptive fields, thereby capturing color feature changes at different scales. Color features can be extracted and combined layer by layer through the plurality of mixed convolution layers of the second convolutional neural network model to form a color feature map. This allows a better representation of the color information of the steel strip surface and provides a more discriminative feature representation. The color feature map can be fused with the texture feature map to comprehensively consider the color and texture information of the steel strip surface, thereby further improving the accuracy and stability of the surface treatment detection.
Further, the color feature map and the texture feature map are fused to obtain a classification feature map. Considering the surface treatment task of the steel strip, different characteristics need to be comprehensively considered to carry out classification and judgment. Color features and texture features are two important aspects of the surface of the strip that contain different information that can provide complementary feature expression. The color feature map reflects color information such as dyeing conditions, stains, coatings and the like on the surface of the steel belt, and the texture feature map captures texture changes and detail features of the surface of the steel belt. The two feature maps are fused to comprehensively utilize the advantages of the two feature maps and provide more comprehensive and rich feature representation. By fusing the color feature map and the texture feature map, the information of the color feature map and the texture feature map can be weighted and combined to obtain the classification feature map. Therefore, the distinguishing degree and distinguishing capability of the features can be improved on the basis of keeping color and texture information. The classification characteristic diagram can be used as input for carrying out a subsequent classification model or decision model for classifying and evaluating the surface treatment of the steel strip.
And then, the classification characteristic diagram is passed through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the surface processing of the steel strip is qualified. Therefore, by analyzing the surface image of the steel belt to be detected, the detection precision is improved, and the labor cost is reduced.
In particular, it is contemplated that features extracted from the image of the surface of the steel strip to be inspected may contain rich visual information such as color, texture, shape, etc. These features can be used to distinguish between different classes of image samples. However, due to the complexity and diversity of the images, there may be some redundant or irrelevant features in the classification feature map. These features may be due to background noise, illumination variations, image distortion, or other factors. Furthermore, different classes of image samples may have some features in common, which do not have a large contribution to distinguishing between the different classes of samples. These redundant or uncorrelated features can increase the dimensionality of the feature space and introduce noise and interference that can be a nuisance to classification tasks. In addition, redundant or uncorrelated features may also lead to overfitting problems, reducing the generalization ability of the classification model. Therefore, when feature extraction and feature fusion are performed, feature sparsification is required to be performed on the classified feature map, redundant or irrelevant features are removed, and features having the most distinguished degree and relevance are extracted. Therefore, the dimension of the feature space can be reduced, the generalization capability of the classification model is improved, and the calculation and storage cost is reduced.
Specifically, performing low-dimensional spatial probability sparsification on the classification feature map to obtain a probability sparse classification feature map, including: inputting the classification characteristic map into a Softmax activation function to map characteristic values of all positions in the classification characteristic map into a probability space so as to obtain a probabilistic classification characteristic map; calculating affine transformation matrices of the respective feature matrices along the channel dimension of the probabilistic classification feature map by least squares fitting; and calculating affine probability density distribution of each feature matrix of the probabilistic classification feature map along the channel dimension in a target space through affine transformation matrix of each feature matrix of the probabilistic classification feature map along the channel dimension and feature values of each position of the probabilistic classification feature map to obtain the probabilistic sparse classification feature map.
In the technical scheme of the application, the high-dimensional feature map of the classification feature map is converted into a low-dimensional probability density space through probabilistic affine transformation so as to reduce the computational complexity and the storage cost. And the classification feature map can be mapped to a common target space through probabilistic affine transformation, so that the geometric transformation among various parts of the classification feature map along the channel dimension is eliminated, similar local features are gathered in the space, and different parts can be scattered, so that the discrimination capability of the classifier is enhanced.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary System
Fig. 1 illustrates a block diagram of a surface treatment apparatus for processing a steel strip according to an embodiment of the present application. As shown in fig. 1, a surface treatment apparatus 100 for processing a steel strip according to an embodiment of the present application includes: the image acquisition module 110 is used for acquiring an image of the surface of the steel strip to be detected; the histogram conversion module 120 is configured to convert the to-be-detected steel strip surface image from an RGB color space to a YCbCr color space and extract LBP texture feature histograms of the respective channels to obtain a multi-channel LBP texture feature histogram; a channel attention module 130, configured to pass the multi-channel LBP texture feature histogram through a first convolutional neural network model using a channel attention mechanism to obtain a texture feature map; the mixed convolution module 140 is configured to pass the surface image of the steel strip to be detected through a second convolution neural network model that includes a plurality of mixed convolution layers to obtain a color feature map; a fusion module 150, configured to fuse the color feature map and the texture feature map to obtain a classification feature map; the sparsification module 160 is configured to perform low-dimensional spatial probability sparsification on the classification feature map to obtain a probability sparse classification feature map; and a classification result generation module 170, configured to pass the probability sparse classification feature map through a classifier to obtain a classification result, where the classification result is used to indicate whether the steel strip surface processing is qualified.
Fig. 2 illustrates a system architecture diagram of a surface treatment apparatus for steel strip processing according to an embodiment of the present application. In the system architecture, as shown in fig. 2, first, a surface image of the steel strip to be detected is acquired. Then, the surface image of the steel strip to be detected is converted from an RGB color space to a YCbCr color space, and LBP texture feature histograms of the respective channels are extracted to obtain a multi-channel LBP texture feature histogram. The multi-channel LBP texture feature histogram is then passed through a first convolutional neural network model using a channel attention mechanism to obtain a texture feature map. At the same time, the surface image of the steel strip to be detected is passed through a second convolutional neural network model containing a plurality of mixed convolution layers to obtain a color feature map. The color feature map and the texture feature map are then fused to obtain a classification feature map. Next, low-dimensional spatial probability sparsification is performed on the classification feature map to obtain a probability sparse classification feature map. Finally, the probability sparse classification feature map is passed through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the steel strip surface processing is qualified.
In the surface treatment apparatus 100 for processing a steel strip, the image acquisition module 110 is configured to acquire an image of the surface of the steel strip to be detected. As described above in the background section, the surface treatment of the steel strip includes a number of methods and steps such as pickling to remove the oxide layer from the surface of the steel strip, electroplating, hot-dipping, spraying to prevent corrosion, etc. Meanwhile, whether the treatment condition of the surface of the steel strip is qualified or not needs to be detected. The traditional methods for detecting the surface of the steel strip are visual inspection, hand touch detection, chemical analysis and electrochemical test methods. However, these methods have some drawbacks: 1. subjectivity: visual inspection and hand touch detection are susceptible to human subjective factors, and as a result, erroneous judgment and errors may occur. 2. Time and labor cost are high: the traditional method needs to be carried out manually, and is time-consuming and labor-consuming. 3. Cannot monitor in real time: the traditional method is often carried out under a specific environment, and the treatment condition of the surface of the steel belt cannot be monitored in real time. Therefore, an optimized surface treatment scheme for steel strip processing is desired.
Aiming at the technical problems, a detection scheme of surface treatment for steel strip processing is provided, wherein an artificial intelligence technology based on a deep neural network is used for extracting and encoding the characteristics of texture and color of a steel strip surface image to be detected, so as to obtain a classification label for indicating whether the steel strip surface processing is qualified. Therefore, by analyzing the surface image of the steel belt to be detected, the detection precision is improved, and the labor cost is reduced.
At present, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, speech signal processing, and the like. In addition, deep learning and neural networks have also shown levels approaching and even exceeding humans in the fields of image classification, object detection, semantic segmentation, text translation, and the like.
In recent years, the development of deep learning and neural networks provides new solutions and solutions for surface treatment equipment for steel strip processing.
Specifically, first, an image of the surface of a steel strip to be inspected is acquired.
In the surface treatment apparatus 100 for steel strip processing, the histogram conversion module 120 is configured to convert the surface image of the steel strip to be detected from an RGB color space to a YCbCr color space and extract the LBP texture feature histograms of the respective channels to obtain a multi-channel LBP texture feature histogram. The YCbCr color space separates the luminance information (Y channel) of an image from its chrominance information (Cb and Cr channels). The texture characteristics of the steel strip surface are typically related to brightness and less to chromaticity. By converting the image to the YCbCr color space, texture features can be better emphasized and crosstalk from the chrominance information can be reduced. The Local Binary Pattern (LBP) is a commonly used texture feature descriptor that extracts texture information by comparing the gray value of a pixel with those of its neighborhood pixels. An LBP texture feature histogram is extracted for each channel of the YCbCr color space, so that texture feature variations in the different channels can be captured. The LBP texture feature histograms of the individual channels are combined into a multi-channel LBP texture feature histogram, so that the texture feature information of the different channels can be comprehensively considered. The fused features have a richer expressive capability and can improve the detection accuracy of the steel strip surface treatment.
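As an illustrative, non-limiting sketch of this step, the following Python fragment shows one way the histogram conversion module 120 could be realized with OpenCV and scikit-image; the LBP radius, the number of neighbors and the use of the "uniform" pattern are assumptions added only for illustration and are not fixed by the present application.

import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def multichannel_lbp_histogram(bgr_image: np.ndarray,
                               radius: int = 1,
                               n_points: int = 8) -> np.ndarray:
    """Convert a BGR image to YCbCr and stack the per-channel LBP histograms."""
    # OpenCV's conversion yields channel order Y, Cr, Cb; reorder to Y, Cb, Cr
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    channels = [ycrcb[..., 0], ycrcb[..., 2], ycrcb[..., 1]]

    n_bins = n_points + 2  # number of distinct "uniform" LBP labels
    histograms = []
    for ch in channels:
        lbp = local_binary_pattern(ch, n_points, radius, method="uniform")
        hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
        histograms.append(hist)

    # Multi-channel LBP texture feature histogram, shape (3, n_bins)
    return np.stack(histograms, axis=0)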
In the above-described surface treatment apparatus 100 for steel strip processing, the channel attention module 130 is configured to pass the multi-channel LBP texture feature histogram through a first convolutional neural network model using a channel attention mechanism to obtain the texture feature map. Channel attention mechanisms are a variant of attention mechanisms that automatically learn weights between channels so that the network can adaptively weight the features of different channels. In a steel strip surface treatment task, the LBP texture features of different channels may have different importance and discriminative power. By using the channel attention mechanism, the network can automatically learn the weight of each channel, so as to strengthen the attention to important channels and weaken the attention to unimportant channels. By using a first convolutional neural network model with the channel attention mechanism, the extraction of discriminative texture features can be enhanced through the learned per-channel weights. Therefore, the expressive power and discriminative power of the texture features can be improved, further enhancing the detection performance of the steel strip surface treatment. Because the channel weights are learned automatically by the network, the model can adapt to different samples and tasks, improving its generalization ability and adaptability.
Specifically, in the embodiment of the present application, the channel attention module 130 is configured such that each layer of the first convolutional neural network performs the following operations on the input data during forward propagation: performing convolution processing on the input data based on a two-dimensional convolution kernel to generate a convolution feature map; pooling the convolution feature map to generate a pooled feature map; performing activation processing on the pooled feature map to generate an activated feature map; calculating, for each channel in the activated feature map, the quotient of the mean feature value of the feature matrix corresponding to that channel and the sum of the mean feature values of the feature matrices corresponding to all channels, as the weighting coefficient of the feature matrix corresponding to that channel; and weighting the feature matrix of each channel in the activated feature map by the corresponding weighting coefficient to generate a channel attention feature map; wherein the output of the last layer of the first convolutional neural network is the texture feature map.
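As an illustrative, non-limiting sketch, one layer of the first convolutional neural network with the mean-based channel attention described above could be written as follows in PyTorch; the 3x3 kernel, max pooling and ReLU activation are assumptions made only for illustration.

import torch
import torch.nn as nn

class ChannelAttentionConvLayer(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv(x)          # convolution feature map
        x = self.pool(x)          # pooled feature map
        x = self.act(x)           # activated feature map
        # per-channel mean of the activated feature map, shape (B, C)
        channel_means = x.mean(dim=(2, 3))
        # weighting coefficient = channel mean / sum of all channel means
        weights = channel_means / channel_means.sum(dim=1, keepdim=True)
        # weight each channel's feature matrix -> channel attention feature map
        return x * weights.unsqueeze(-1).unsqueeze(-1)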
In the surface treatment apparatus 100 for steel strip processing, the mixed convolution module 140 is configured to pass the surface image of the steel strip to be detected through a second convolutional neural network model including a plurality of mixed convolution layers to obtain a color feature map. The color information of the steel strip surface is of great significance for the surface treatment task. The color features can reflect information such as dyeing conditions, stains and coatings on the steel strip surface, which is important for detecting and judging the effect and quality of the surface treatment. The mixed convolution layer is a convolution operation that combines receptive fields of different scales and can capture features of different scales in an image. The color features of the steel strip surface image can be effectively extracted through a second convolutional neural network model comprising a plurality of mixed convolution layers. The mixed convolution layer performs convolution operations on the image with convolution kernels of different receptive fields, thereby capturing color feature changes at different scales. Color features can be extracted and combined layer by layer through the plurality of mixed convolution layers of the second convolutional neural network model to form a color feature map. This allows a better representation of the color information of the steel strip surface and provides a more discriminative feature representation. The color feature map can be fused with the texture feature map to comprehensively consider the color and texture information of the steel strip surface, thereby further improving the accuracy and stability of the surface treatment detection.
Specifically, in the embodiment of the present application, each mixed convolution layer of the second convolutional neural network model includes parallel first, second, third and fourth convolution branch structures, and a multi-scale fusion structure connected with the first to fourth convolution branch structures, wherein the first convolution branch uses a first convolution kernel having a first size, the second convolution branch uses a second convolution kernel having the first size and a first dilation rate, the third convolution branch uses a third convolution kernel having the first size and a second dilation rate, and the fourth convolution branch uses a fourth convolution kernel having the first size and a third dilation rate.
Specifically, in the embodiment of the present application, the mixed convolution module 140 is further configured such that each mixed convolution layer of the second convolutional neural network model performs the following operations on the input data during forward propagation: performing convolutional encoding on the surface image of the steel strip to be detected with the first convolution kernel having the first size to obtain a first-scale feature map; performing convolutional encoding on the surface image of the steel strip to be detected with the second convolution kernel having the first dilation rate to obtain a second-scale feature map; performing convolutional encoding on the surface image of the steel strip to be detected with the third convolution kernel having the second dilation rate to obtain a third-scale feature map; performing convolutional encoding on the surface image of the steel strip to be detected with the fourth convolution kernel having the third dilation rate to obtain a fourth-scale feature map; wherein the first, second, third and fourth convolution kernels have the same size, and the second, third and fourth convolution kernels have different dilation rates; aggregating the first-scale feature map, the second-scale feature map, the third-scale feature map and the fourth-scale feature map along the channel dimension to obtain an aggregated feature map; pooling the aggregated feature map to generate a pooled feature map; and performing activation processing on the pooled feature map to generate an activated feature map; wherein the output of the last layer of the second convolutional neural network model comprising a plurality of mixed convolution layers is the color feature map.
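As an illustrative, non-limiting sketch, a single mixed convolution layer with four parallel branches could look as follows in PyTorch; the 3x3 kernel size and the dilation rates 1, 2, 3 and 4 are assumptions, since the application only specifies that the kernels share one size while the second to fourth branches use three different dilation rates.

import torch
import torch.nn as nn

class MixedConvLayer(nn.Module):
    def __init__(self, in_channels: int, branch_channels: int):
        super().__init__()
        def branch(dilation: int) -> nn.Module:
            # "same" padding so all four scale feature maps align spatially
            return nn.Conv2d(in_channels, branch_channels, kernel_size=3,
                             padding=dilation, dilation=dilation)
        self.branch1 = branch(1)   # first convolution kernel (no dilation)
        self.branch2 = branch(2)   # first dilation rate
        self.branch3 = branch(3)   # second dilation rate
        self.branch4 = branch(4)   # third dilation rate
        self.pool = nn.MaxPool2d(kernel_size=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scales = [self.branch1(x), self.branch2(x), self.branch3(x), self.branch4(x)]
        aggregated = torch.cat(scales, dim=1)   # aggregate along the channel dimension
        pooled = self.pool(aggregated)          # pooled feature map
        return self.act(pooled)                 # activated feature map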
In the surface treatment apparatus 100 for steel strip processing, the fusion module 150 is configured to fuse the color feature map and the texture feature map to obtain a classification feature map. Color features and texture features are two important aspects of the steel strip surface; they contain different information and can provide complementary feature expressions. The color feature map reflects color information such as dyeing conditions, stains and coatings on the steel strip surface, while the texture feature map captures texture changes and detail features of the steel strip surface. Fusing the two feature maps makes comprehensive use of their respective advantages and provides a more comprehensive and rich feature representation. By fusing the color feature map and the texture feature map, their information can be weighted and combined to obtain the classification feature map. In this way, the discriminability of the features can be improved while retaining the color and texture information. The classification feature map can be used as input to a subsequent classification model or decision model for classifying and evaluating the steel strip surface treatment.
Specifically, in the embodiment of the present application, the fusion module 150 is configured to: fusing the texture feature map and the color feature map by using the following cascade formula to obtain a classification feature map; wherein, the cascade formula is:
F = Concat[F₁, F₂]
wherein F₁ represents the texture feature map, F₂ represents the color feature map, F represents the classification feature map, and Concat[·,·] represents the concatenation function.
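As an illustrative sketch, the cascade formula corresponds to a simple channel-wise concatenation, assuming the two feature maps have already been brought to the same spatial resolution:

import torch

def fuse(texture_map: torch.Tensor, color_map: torch.Tensor) -> torch.Tensor:
    # F = Concat[F1, F2]: concatenate along the channel dimension of (B, C, H, W) tensors
    return torch.cat([texture_map, color_map], dim=1)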
In the surface treatment apparatus 100 for steel strip processing, the sparsification module 160 is configured to perform low-dimensional spatial probability sparsification on the classification characteristic map to obtain a probability sparse classification characteristic map. It is contemplated that features extracted from the image of the surface of the strip to be inspected may contain rich visual information such as color, texture, shape, etc. These features can be used to distinguish between different classes of image samples. However, due to the complexity and diversity of the images, there may be some redundant or irrelevant features in the classification feature map. These features may be due to background noise, illumination variations, image distortion, or other factors. Furthermore, different classes of image samples may have some features in common, which do not have a large contribution to distinguishing between the different classes of samples. These redundant or uncorrelated features can increase the dimensionality of the feature space and introduce noise and interference that can be a nuisance to classification tasks. In addition, redundant or uncorrelated features may also lead to overfitting problems, reducing the generalization ability of the classification model. Therefore, when feature extraction and feature fusion are performed, feature sparsification is required to be performed on the classified feature map, redundant or irrelevant features are removed, and features having the most distinguished degree and relevance are extracted. Therefore, the dimension of the feature space can be reduced, the generalization capability of the classification model is improved, and the calculation and storage cost is reduced.
Specifically, performing low-dimensional spatial probability sparsification on the classification feature map to obtain a probability sparse classification feature map, including: inputting the classification characteristic map into a Softmax activation function to map characteristic values of all positions in the classification characteristic map into a probability space so as to obtain a probabilistic classification characteristic map; calculating affine transformation matrices of the respective feature matrices along the channel dimension of the probabilistic classification feature map by least squares fitting; and calculating affine probability density distribution of each feature matrix of the probabilistic classification feature map along the channel dimension in a target space through affine transformation matrix of each feature matrix of the probabilistic classification feature map along the channel dimension and feature values of each position of the probabilistic classification feature map to obtain the probabilistic sparse classification feature map.
In the technical scheme of the application, the high-dimensional feature map of the classification feature map is converted into a low-dimensional probability density space through probabilistic affine transformation so as to reduce the computational complexity and the storage cost. And the classification feature map can be mapped to a common target space through probabilistic affine transformation, so that the geometric transformation among various parts of the classification feature map along the channel dimension is eliminated, similar local features are gathered in the space, and different parts can be scattered, so that the discrimination capability of the classifier is enhanced.
Specifically, in an embodiment of the present application, the sparsification module includes: the probability unit is used for inputting the classification characteristic map into a Softmax activation function to map the characteristic values of all positions in the classification characteristic map into a probability space so as to obtain a probability classification characteristic map; an affine unit for calculating affine transformation matrices of respective feature matrices along a channel dimension of the probabilistic classification feature map by least squares fitting; the probability sparsification unit is used for calculating affine probability density distribution of each feature matrix of the probabilistic classification feature map along the channel dimension in a target space through affine transformation matrixes of each feature matrix of the probabilistic classification feature map along the channel dimension and feature values of each position of the probabilistic classification feature map so as to obtain the probabilistic sparse classification feature map.
Specifically, in the embodiment of the application, for each feature matrix of the probabilistic classification feature map along the channel dimension, its affine transformation matrix is calculated, that is, the linear transformation matrix that maps the feature matrix from the original space to the target space, by performing a least-squares fit on the coordinates of each feature matrix of the probabilistic classification feature map along the channel dimension, that is, by solving the following equations:
x'_i = a_11 * x_i + a_12 * y_i + t_x
y'_i = a_21 * x_i + a_22 * y_i + t_y
wherein (x_i, y_i) are the original coordinates of each feature matrix of the probabilistic classification feature map along the channel dimension, (x'_i, y'_i) are the corresponding coordinates in the target space, a_ij are the rotation and scaling coefficients of the affine transformation matrix, and t_x and t_y are the translation coefficients of the affine transformation matrix.
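As an illustrative, non-limiting sketch under interpretive assumptions, the low-dimensional spatial probability sparsification could be organized as follows: a per-channel softmax, a least-squares fit of the affine parameters a_ij, t_x and t_y from matched source and target coordinates, and a remapping of the probability mass into the target space. The choice of target grid, the nearest-cell accumulation, and the use of a single shared coordinate correspondence (rather than one fit per channel feature matrix) are simplifications made only for illustration, since the application does not spell out these details.

import numpy as np

def probability_sparsify(feature_map: np.ndarray,
                         source_coords: np.ndarray,
                         target_coords: np.ndarray) -> np.ndarray:
    """feature_map: (C, H, W); source_coords/target_coords: matched (N, 2) point sets."""
    C, H, W = feature_map.shape

    # 1. Softmax over each channel's positions -> probabilistic classification feature map
    flat = feature_map.reshape(C, -1)
    flat = flat - flat.max(axis=1, keepdims=True)
    prob = (np.exp(flat) / np.exp(flat).sum(axis=1, keepdims=True)).reshape(C, H, W)

    # 2. Least-squares affine fit: [x_i, y_i, 1] @ A ≈ [x'_i, y'_i]
    ones = np.ones((source_coords.shape[0], 1))
    X = np.hstack([source_coords, ones])                    # (N, 3)
    A, *_ = np.linalg.lstsq(X, target_coords, rcond=None)   # (3, 2): a_ij, t_x, t_y

    # 3. Map every grid position into the target space and accumulate its probability
    #    mass on the nearest target cell (one possible reading of the "affine
    #    probability density distribution in the target space")
    ys, xs = np.mgrid[0:H, 0:W]
    pts = np.hstack([xs.reshape(-1, 1), ys.reshape(-1, 1), np.ones((H * W, 1))])
    warped = pts @ A                                         # (H*W, 2)
    tx = np.clip(np.rint(warped[:, 0]).astype(int), 0, W - 1)
    ty = np.clip(np.rint(warped[:, 1]).astype(int), 0, H - 1)
    sparse = np.zeros_like(prob)
    for c in range(C):
        np.add.at(sparse[c], (ty, tx), prob[c].reshape(-1))
    return sparse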
In the above-mentioned surface treatment apparatus 100 for steel strip processing, the classification result generation module 170 is configured to pass the probability sparse classification feature map through a classifier to obtain a classification result, wherein the classification result is used to indicate whether the steel strip surface processing is qualified. Therefore, by analyzing the surface image of the steel strip to be detected, the detection precision is improved and the labor cost is reduced.
Specifically, in the embodiment of the present application, the classification result generation module 170 is configured to: process the probability sparse classification feature map with the classifier according to the following classification formula to obtain the classification result; wherein the classification formula is:
O = softmax{(W_c, B_c) | Project(F)}
wherein Project(F) represents projecting the probability sparse classification feature map as a vector, W_c is a weight matrix, B_c is a bias vector, softmax is the normalized exponential function, and O represents the classification result.
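As an illustrative, non-limiting sketch of the classification formula, the classifier can be realized as a flattening projection followed by a fully connected layer and a softmax; the feature dimension and the two-class (qualified / not qualified) output are assumptions added for illustration. In use, the output O would be arg-maxed to decide whether the steel strip surface processing is qualified.

import torch
import torch.nn as nn

class QualityClassifier(nn.Module):
    def __init__(self, feature_dim: int, num_classes: int = 2):
        super().__init__()
        self.fc = nn.Linear(feature_dim, num_classes)  # weights W_c and bias B_c

    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
        v = torch.flatten(feature_map, start_dim=1)    # Project(F): map to a vector
        logits = self.fc(v)
        return torch.softmax(logits, dim=1)            # O: class probabilities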
In summary, the surface treatment apparatus for steel strip processing according to the embodiments of the present application has been elucidated, which uses artificial intelligence technology based on a deep neural network to extract and encode texture features and color features of the surface image of the steel strip to be detected, so as to obtain a classification label indicating whether the steel strip surface processing is qualified. Therefore, by analyzing the surface image of the steel strip to be detected, the detection precision is improved and the labor cost is reduced.
Exemplary method
Fig. 3 illustrates a flowchart of a detection method of a surface treatment apparatus for steel strip processing according to an embodiment of the present application. As shown in fig. 3, the detection method of the surface treatment apparatus for steel strip processing according to the embodiment of the application comprises the following steps: S110, acquiring a surface image of the steel strip to be detected; S120, converting the surface image of the steel strip to be detected from an RGB color space to a YCbCr color space and extracting LBP texture feature histograms of the respective channels to obtain a multi-channel LBP texture feature histogram; S130, passing the multi-channel LBP texture feature histogram through a first convolutional neural network model using a channel attention mechanism to obtain a texture feature map; S140, passing the surface image of the steel strip to be detected through a second convolutional neural network model comprising a plurality of mixed convolution layers to obtain a color feature map; S150, fusing the color feature map and the texture feature map to obtain a classification feature map; S160, performing low-dimensional spatial probability sparsification on the classification feature map to obtain a probability sparse classification feature map; and S170, passing the probability sparse classification feature map through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the steel strip surface processing is qualified.
Here, it will be understood by those skilled in the art that the specific operations of the respective steps in the above-described detection method of the surface treatment apparatus for steel strip processing have been described in detail in the above description of the surface treatment apparatus for steel strip processing with reference to fig. 1 to 2, and thus, repetitive descriptions thereof will be omitted.
As described above, the surface treatment apparatus 100 for steel strip processing according to the embodiment of the present application can be implemented in various terminal devices, such as a server for surface treatment in steel strip processing. In one example, the surface treatment apparatus 100 for steel strip processing according to an embodiment of the present application may be integrated into a terminal device as a software module and/or a hardware module. For example, the surface treatment apparatus 100 for steel strip processing may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the surface treatment apparatus 100 for steel strip processing may also be one of the many hardware modules of the terminal device.
Alternatively, in another example, the surface treatment apparatus 100 for steel strip processing and the terminal device may be separate devices, and the surface treatment apparatus 100 for steel strip processing may be connected to the terminal device through a wired and/or wireless network and transmit interactive information in an agreed data format.
In summary, the detection method of the surface treatment apparatus for steel strip processing according to the embodiment of the application has been elucidated, which uses artificial intelligence technology based on a deep neural network to extract and encode texture features and color features of the surface image of the steel strip to be detected, so as to obtain a classification label indicating whether the steel strip surface processing is qualified. Therefore, by analyzing the surface image of the steel strip to be detected, the detection precision is improved and the labor cost is reduced.
Exemplary electronic device
Next, an electronic device according to an embodiment of the present application is described with reference to fig. 4. Fig. 4 is a block diagram of an electronic device according to an embodiment of the application. As shown in fig. 4, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 11 may execute the program instructions to realize the functions of the detection method of the surface treatment apparatus for steel strip processing according to the various embodiments of the present application described above and/or other desired functions. Various contents such as the surface image of the steel strip to be detected may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
The input means 13 may comprise, for example, a keyboard, a mouse, etc.
The output device 14 can output various information to the outside, including an indication of whether the steel strip surface treatment is qualified. The output device 14 may include, for example, a display, speakers, a printer, a communication network and the remote output devices connected thereto, and the like.
Of course, for simplicity, only some of the components of the electronic device 10 that are relevant to the present application are shown in fig. 4; components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer readable storage Medium
In addition to the above-described methods and apparatuses, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the detection method of the surface treatment apparatus for steel strip processing according to the various embodiments of the present application described in the above "exemplary method" section of the present specification.
Program code for performing operations of embodiments of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps of the detection method of the surface treatment apparatus for steel strip processing according to the various embodiments of the present application described in the above "exemplary method" section of the present specification.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments; however, it should be noted that the advantages, benefits, effects, and the like mentioned in the present application are merely examples and are not limiting, and these advantages, benefits, and effects are not to be considered essential to the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not necessarily limited to practice with the above described specific details.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in the present application are only illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, equipment, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including but not limited to," and are used interchangeably therewith. The terms "or" and "and" as used herein refer to, and are used interchangeably with, the term "and/or," unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
It is also noted that in the apparatuses, devices, and methods of the present application, the components or steps may be decomposed and/or recombined. Such decomposition and/or recombination should be considered as equivalent aspects of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. A surface treatment apparatus for processing a steel strip, comprising:
an image acquisition module used for acquiring a surface image of the steel strip to be detected;
a histogram conversion module used for converting the surface image of the steel strip to be detected from an RGB color space to a YCbCr color space and extracting the LBP texture feature histogram of each channel to obtain a multi-channel LBP texture feature histogram;
a channel attention module configured to pass the multi-channel LBP texture feature histogram through a first convolutional neural network model using a channel attention mechanism to obtain a texture feature map;
a mixed convolution module used for passing the surface image of the steel strip to be detected through a second convolutional neural network model comprising a plurality of mixed convolution layers to obtain a color feature map;
a fusion module used for fusing the color feature map and the texture feature map to obtain a classification feature map;
a sparsification module used for performing low-dimensional space probability sparsification on the classification feature map to obtain a probability-sparse classification feature map;
and a classification result generation module used for passing the probability-sparse classification feature map through a classifier to obtain a classification result, the classification result being used for indicating whether the steel strip surface processing is qualified.
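By way of illustration only and not as part of the claims, the histogram conversion step of claim 1 can be sketched in Python as follows; the choice of OpenCV and scikit-image, and the LBP parameters (8 neighbours, radius 1, uniform patterns), are assumptions made for this example rather than requirements of the claim.
# Illustrative sketch: per-channel LBP histograms in YCbCr space (assumed libraries and parameters).
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def multichannel_lbp_histograms(image_bgr: np.ndarray) -> np.ndarray:
    # Returns an array of shape (3, n_bins): one LBP histogram per YCbCr channel.
    # OpenCV provides YCrCb, which differs from YCbCr only in the order of the chroma channels.
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    n_points, radius = 8, 1
    n_bins = n_points + 2  # number of bins for "uniform" LBP codes
    hists = []
    for c in range(3):
        lbp = local_binary_pattern(ycrcb[:, :, c], n_points, radius, method="uniform")
        hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
        hists.append(hist)
    return np.stack(hists, axis=0)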
2. The surface treatment apparatus for steel strip processing as claimed in claim 1, wherein the channel attention module is configured to:
use each layer of the first convolutional neural network to perform, in forward transfer of the layer, the following operations on the input data:
performing convolution processing on the input data using a two-dimensional convolution kernel to generate a convolution feature map;
pooling the convolution feature map to generate a pooled feature map;
performing activation processing on the pooled feature map to generate an activated feature map;
calculating, as the weighting coefficient of the feature matrix corresponding to each channel, the quotient of the mean of the feature values of the feature matrix corresponding to that channel divided by the sum of the means of the feature values of the feature matrices corresponding to all channels;
weighting the feature matrix of each channel by the weighting coefficient of each channel in the activation feature map to generate a channel attention feature map;
wherein the output of the last layer of the first convolutional neural network is the texture feature map.
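As a minimal PyTorch sketch of the channel weighting described in claim 2, provided for illustration only; the tensor shapes and the small epsilon guard are assumptions added for the example:
# Illustrative sketch: scale each channel by (its mean feature value) / (sum of all channel means).
import torch

def channel_attention(activated: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # activated: (B, C, H, W) activation feature map -> channel attention feature map
    channel_means = activated.mean(dim=(2, 3), keepdim=True)                # (B, C, 1, 1)
    weights = channel_means / (channel_means.sum(dim=1, keepdim=True) + eps)
    return activated * weights                                              # broadcast over H and W

# Example: attention_map = channel_attention(torch.relu(torch.randn(1, 16, 56, 56)))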
3. The surface treatment apparatus for steel strip processing as claimed in claim 2, wherein each mixed convolution layer of the second convolutional neural network model includes a parallel first convolution branch structure, second convolution branch structure, third convolution branch structure, and fourth convolution branch structure, and a multi-scale fusion structure connected with the first to fourth convolution branch structures, wherein the first convolution branch uses a first convolution kernel having a first size, the second convolution branch uses a second convolution kernel having the first size and a first dilation rate, the third convolution branch uses a third convolution kernel having the first size and a second dilation rate, and the fourth convolution branch uses a fourth convolution kernel having the first size and a third dilation rate.
4. The surface treatment apparatus for steel strip processing according to claim 3, wherein the mixed convolution module is further configured to:
use each mixed convolution layer of the second convolutional neural network model to perform, in forward transfer of the layer, the following processing on input data:
performing convolutional encoding on the surface image of the steel strip to be detected using the first convolution kernel having the first size to obtain a first scale feature map;
performing convolutional encoding on the surface image of the steel strip to be detected using the second convolution kernel having the first dilation rate to obtain a second scale feature map;
performing convolutional encoding on the surface image of the steel strip to be detected using the third convolution kernel having the second dilation rate to obtain a third scale feature map;
performing convolutional encoding on the surface image of the steel strip to be detected using the fourth convolution kernel having the third dilation rate to obtain a fourth scale feature map;
wherein the first, second, third, and fourth convolution kernels have the same size, and the second, third, and fourth convolution kernels have different dilation rates;
performing aggregation on the first scale feature map, the second scale feature map, the third scale feature map and the fourth scale feature map along a channel dimension to obtain an aggregated feature map;
pooling the aggregated feature map to generate a pooled feature map;
performing activation processing on the pooled feature map to generate an activated feature map;
wherein the output of the last layer of the second convolutional neural network model comprising the plurality of mixed convolution layers is the color feature map.
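One possible PyTorch reading of the mixed convolution layer of claims 3 and 4 is sketched below, for illustration only; the 3x3 kernel size and the dilation rates 2, 3, and 4 are assumptions, the claims only requiring equal kernel sizes and distinct dilation rates:
# Illustrative sketch: four parallel branches (one standard, three dilated), fused by channel concatenation.
import torch
import torch.nn as nn

class MixedConvLayer(nn.Module):
    def __init__(self, in_ch: int, branch_ch: int):
        super().__init__()
        def branch(dilation: int) -> nn.Conv2d:
            # padding = dilation keeps the spatial size unchanged for a 3x3 kernel
            return nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=dilation, dilation=dilation)
        self.branches = nn.ModuleList([branch(d) for d in (1, 2, 3, 4)])
        self.pool = nn.MaxPool2d(kernel_size=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)  # aggregate along the channel dimension
        return self.act(self.pool(multi_scale))                        # pooling, then activation

# Example: color_feature_map = MixedConvLayer(3, 16)(torch.randn(1, 3, 224, 224))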
5. The surface treatment apparatus for steel strip processing as claimed in claim 4, wherein the fusion module is configured to:
fusing the texture feature map and the color feature map by using the following cascade formula to obtain a classification feature map;
wherein, the cascade formula is:
F=Concat[F1,F2]
wherein F1 represents the texture feature map, F2 represents the color feature map, F represents the classification feature map, and Concat[·,·] represents the cascade (concatenation) function.
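In code, the cascade formula is plain channel-wise concatenation; a one-line PyTorch equivalent is shown below for illustration, with the tensor shapes being assumptions of the example:
# Illustrative sketch: F = Concat[F1, F2] along the channel dimension.
import torch

texture_map = torch.randn(1, 64, 28, 28)     # F1
color_map = torch.randn(1, 64, 28, 28)       # F2
classification_map = torch.cat([texture_map, color_map], dim=1)   # F, shape (1, 128, 28, 28)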
6. The surface treatment apparatus for steel strip processing as claimed in claim 5, wherein the sparsification module comprises:
a probability unit used for inputting the classification feature map into a Softmax activation function to map the feature values of all positions in the classification feature map into a probability space, so as to obtain a probabilistic classification feature map;
an affine unit used for calculating, by least squares fitting, the affine transformation matrix of each feature matrix of the probabilistic classification feature map along the channel dimension;
and a probability sparsification unit used for calculating, in a target space, the affine probability density distribution of each feature matrix of the probabilistic classification feature map along the channel dimension, based on the affine transformation matrix of each feature matrix and the feature values at each position of the probabilistic classification feature map, so as to obtain the probability-sparse classification feature map.
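Claim 6 leaves the fitting details open; the following is one heavily hedged reading, provided for illustration only, in which the reference matrix (here the channel-mean matrix) and the use of torch.linalg.lstsq are assumptions of the example and not specified by the claim:
# Illustrative sketch (one possible reading): softmax into probability space, then a per-channel
# least-squares affine fit against an assumed reference matrix.
import torch

def probability_sparsify(feature_map: torch.Tensor) -> torch.Tensor:
    # feature_map: (C, H, W) classification feature map -> (C, H, W) probability-sparse map
    C, H, W = feature_map.shape
    prob = torch.softmax(feature_map.reshape(C, -1), dim=-1).reshape(C, H, W)   # probability space
    reference = prob.mean(dim=0)                         # (H, W) reference matrix (assumption)
    sparse = torch.empty_like(prob)
    for c in range(C):
        # least-squares fit of A (H x H) minimising ||A @ reference - prob[c]||
        A = torch.linalg.lstsq(reference.T, prob[c].T).solution.T
        sparse[c] = A @ reference                        # channel c re-expressed through the fitted affine map
    return sparse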
7. The surface treatment apparatus for steel strip processing as claimed in claim 6, wherein the classification result generation module is configured to:
processing the probability sparse classification feature map with the classifier in the following classification formula to obtain the classification result;
wherein, the classification formula is:
O=softmax{(Wc,Bc)|Project(F)}
wherein Project(F) represents projecting the probability-sparse classification feature map as a vector, Wc is the weight matrix, Bc represents the bias vector, softmax represents the normalized exponential function, and O represents the classification result.
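The classification formula amounts to flattening the probability-sparse feature map into a vector, applying a single fully connected layer holding Wc and Bc, and normalising with softmax; a minimal PyTorch sketch follows, in which the layer sizes and the two-class output are assumptions of the example:
# Illustrative sketch: Project(F) -> linear layer (Wc, Bc) -> softmax.
import torch
import torch.nn as nn

class QualityClassifier(nn.Module):
    def __init__(self, feature_dim: int, num_classes: int = 2):
        super().__init__()
        self.fc = nn.Linear(feature_dim, num_classes)    # holds the weight matrix Wc and bias vector Bc

    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
        v = feature_map.flatten(start_dim=1)             # Project(F): feature map -> vector
        return torch.softmax(self.fc(v), dim=1)          # O: probabilities for qualified / not qualified

# Example: probs = QualityClassifier(128 * 28 * 28)(torch.randn(1, 128, 28, 28))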
8. A detection method of a surface treatment apparatus for steel strip processing, comprising:
acquiring a surface image of a steel strip to be detected;
converting the surface image of the steel strip to be detected from an RGB color space to a YCbCr color space and extracting the LBP texture feature histogram of each channel to obtain a multi-channel LBP texture feature histogram;
passing the multi-channel LBP texture feature histogram through a first convolutional neural network model using a channel attention mechanism to obtain a texture feature map;
passing the surface image of the steel strip to be detected through a second convolutional neural network model comprising a plurality of mixed convolution layers to obtain a color feature map;
fusing the color feature map and the texture feature map to obtain a classification feature map;
performing low-dimensional space probability sparsification on the classification feature map to obtain a probability-sparse classification feature map;
and passing the probability-sparse classification feature map through a classifier to obtain a classification result, the classification result being used for indicating whether the steel strip surface processing is qualified.
9. The detection method of the surface treatment apparatus for steel strip processing according to claim 8, wherein passing the multi-channel LBP texture feature histogram through a first convolutional neural network model using a channel attention mechanism to obtain a texture feature map comprises:
using each layer of the first convolutional neural network to perform, in forward transfer of the layer, the following operations on the input data:
performing convolution processing on the input data using a two-dimensional convolution kernel to generate a convolution feature map;
pooling the convolution feature map to generate a pooled feature map;
Performing activation processing on the pooled feature map to generate an activated feature map;
calculating, as the weighting coefficient of the feature matrix corresponding to each channel, the quotient of the mean of the feature values of the feature matrix corresponding to that channel divided by the sum of the means of the feature values of the feature matrices corresponding to all channels;
weighting the feature matrix of each channel by the weighting coefficient of each channel in the activation feature map to generate a channel attention feature map;
wherein the output of the last layer of the first convolutional neural network is the texture feature map.
10. The detection method of the surface treatment apparatus for steel strip processing according to claim 9, wherein passing the probability-sparse classification feature map through a classifier to obtain a classification result, the classification result being used for indicating whether the steel strip surface processing is qualified, comprises:
processing the probability sparse classification feature map with the classifier in the following classification formula to obtain the classification result;
wherein, the classification formula is:
O=softmax{(Wc,Bc)|Project(F)}
wherein Project(F) represents projecting the probability-sparse classification feature map as a vector, Wc is the weight matrix, Bc represents the bias vector, softmax represents the normalized exponential function, and O represents the classification result.
CN202311209053.XA 2023-09-19 2023-09-19 Surface treatment equipment and method for steel strip processing Pending CN117173147A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311209053.XA CN117173147A (en) 2023-09-19 2023-09-19 Surface treatment equipment and method for steel strip processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311209053.XA CN117173147A (en) 2023-09-19 2023-09-19 Surface treatment equipment and method for steel strip processing

Publications (1)

Publication Number Publication Date
CN117173147A true CN117173147A (en) 2023-12-05

Family

ID=88937228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311209053.XA Pending CN117173147A (en) 2023-09-19 2023-09-19 Surface treatment equipment and method for steel strip processing

Country Status (1)

Country Link
CN (1) CN117173147A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117689902A (en) * 2024-01-31 2024-03-12 台州航泰航空科技有限公司 Forging processing technology of ball cage

Similar Documents

Publication Publication Date Title
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
Chen et al. Research on recognition of fly species based on improved RetinaNet and CBAM
CN112215201B (en) Method and device for evaluating face recognition model and classification model aiming at image
CN114972213A (en) Two-stage mainboard image defect detection and positioning method based on machine vision
CN108520215B (en) Single-sample face recognition method based on multi-scale joint feature encoder
CN109977834B (en) Method and device for segmenting human hand and interactive object from depth image
Zheng et al. Static Hand Gesture Recognition Based on Gaussian Mixture Model and Partial Differential Equation.
CN117173147A (en) Surface treatment equipment and method for steel strip processing
CN111680757A (en) Zero sample image recognition algorithm and system based on self-encoder
Lin et al. Determination of the varieties of rice kernels based on machine vision and deep learning technology
CN117011274A (en) Automatic glass bottle detection system and method thereof
Ma et al. A hierarchical attention detector for bearing surface defect detection
Borbon et al. Coral health identification using image classification and convolutional neural networks
CN117173154A (en) Online image detection system and method for glass bottle
CN111291712B (en) Forest fire recognition method and device based on interpolation CN and capsule network
CN117372853A (en) Underwater target detection algorithm based on image enhancement and attention mechanism
CN109740682B (en) Image identification method based on domain transformation and generation model
CN116994049A (en) Full-automatic flat knitting machine and method thereof
Peng et al. Contamination classification for pellet quality inspection using deep learning
CN114387524B (en) Image identification method and system for small sample learning based on multilevel second-order representation
Khavalko et al. Classification and Recognition of Medical Images Based on the SGTM Neuroparadigm.
CN115512203A (en) Information detection method, device, equipment and storage medium
CN113361422A (en) Face recognition method based on angle space loss bearing
CN117197487B (en) Immune colloidal gold diagnosis test strip automatic identification system
CN116933041B (en) Force sensor number checking system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination