CN112489033A - Method for detecting cleaning effect of concrete curing box based on classification weight - Google Patents
- Publication number
- CN112489033A (application number CN202011465952.2A)
- Authority
- CN
- China
- Prior art keywords
- feature map
- classification
- classification weight
- attention
- map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0004—Industrial image inspection
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06N3/045—Combinations of networks
- G06N3/047—Probabilistic or stochastic networks
- G06N3/08—Learning methods
- G06T2207/10004—Still image; Photographic image
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30108—Industrial image inspection
- G06T2207/30132—Masonry; Concrete
Abstract
The application discloses a method for detecting the cleaning effect of a concrete curing box based on classification weight, which includes: acquiring an image of the concrete curing box to be detected; passing the image through a convolutional neural network to obtain a feature map; passing the feature map through a first attention mechanism module and a second attention mechanism module, respectively, to obtain a first attention feature map and a second attention feature map; passing the first and second attention feature maps through a Softmax function to obtain a first and a second classification weight score map; multiplying the first and second classification weight score maps with the corresponding attention feature maps position by position to obtain a first and a second classification weight feature map; fusing the two classification weight feature maps to obtain a classification feature map; and passing the classification feature map through a classifier to obtain a classification result.
Description
Technical Field
The present application relates to the field of artificial intelligence, and more particularly, to a method for detecting a cleaning effect of a concrete curing box based on classification weight, a system for detecting a cleaning effect of a concrete curing box based on classification weight, and an electronic device.
Background
A concrete curing box generally holds concrete test blocks inside the box so that tests can be performed on them. The box needs to be cleaned after a test is completed, in particular the surface of the support frame in the inner cavity on which the concrete test blocks are placed; otherwise, subsequent experimental results will be affected.
At present, the cleaning effect of a concrete curing box is basically checked by manual observation, which works poorly because the box is large in size, deep, and lacks a good light source.
Therefore, a technical solution for detecting the cleaning effect of the concrete curing box is desired.
At present, deep learning and neural networks have been widely applied in the fields of computer vision, natural language processing, text signal processing, and the like. In addition, deep learning and neural networks also exhibit a level close to or even exceeding that of humans in the fields of image classification, object detection, semantic segmentation, text translation, and the like.
In recent years, the development of deep learning and neural networks has provided new solutions and schemes for detecting the cleaning effect of the concrete curing box.
Disclosure of Invention
The present application is proposed to solve the above-mentioned technical problems. The embodiment of the application provides a method for detecting the cleaning effect of a concrete curing box based on classification weight, a system for detecting the cleaning effect of the concrete curing box based on classification weight and electronic equipment.
According to an aspect of the present application, there is provided a method of detecting a cleaning effect of a concrete curing box based on a classification weight, including:
acquiring an image of a concrete curing box to be detected, wherein the image of the concrete curing box to be detected comprises an inner wall part and a support frame surface part of the concrete curing box to be detected;
passing the image of the concrete curing box to be detected through a convolutional neural network to obtain a feature map;
passing the feature map through a first attention mechanism module and a second attention mechanism module, respectively, to obtain a first attention feature map and a second attention feature map;
passing the first attention feature map through a Softmax function to calculate a classification weight for each position in the first attention feature map to obtain a first classification weight score map;
passing the second attention feature map through a Softmax function to calculate a classification weight for each position in the second attention feature map to obtain a second classification weight score map;
multiplying the first classification weight score map and the first attention feature map position by position to obtain a first classification weight feature map;
multiplying the second classification weight score map and the second attention feature map position by position to obtain a second classification weight feature map;
fusing the first classification weight feature map and the second classification weight feature map to obtain a classification feature map; and
passing the classification feature map through a classifier to obtain a classification result, wherein the classification result indicates whether the cleaning effect on the inner wall and the support frame surface of the concrete curing box to be detected is qualified.
In the above method for detecting the cleaning effect of the concrete curing box based on the classification weight, passing the feature map through the first attention mechanism module and the second attention mechanism module respectively to obtain the first attention feature map and the second attention feature map includes: passing the feature map through a plurality of first convolution layers to obtain a first convolution feature map; multiplying the first convolution feature map and the feature map position by position to obtain the first attention feature map; passing the feature map through a plurality of second convolution layers to obtain a second convolution feature map; and multiplying the second convolution feature map and the feature map position by position to obtain the second attention feature map.
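As a concrete illustration of one such attention mechanism module, the following NumPy sketch passes a single-channel feature map through a stack of convolution layers and then gates the input by the result via position-wise multiplication. The 3x3 kernel size, the ReLU activations, and the names `conv3x3` and `attention_module` are assumptions for illustration; the patent does not fix these details.

```python
import numpy as np

def conv3x3(x, kernel):
    """Naive 'same'-padded 3x3 convolution on a single-channel map."""
    h, w = x.shape
    padded = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            out[i, j] = (padded[i:i + 3, j:j + 3] * kernel).sum()
    return out

def attention_module(feature_map, kernels):
    """Pass the feature map through several convolution layers, then
    multiply the result with the original feature map position by position."""
    x = feature_map
    for k in kernels:
        x = np.maximum(conv3x3(x, k), 0.0)  # convolution + ReLU (activation assumed)
    return x * feature_map                   # point-by-pixel multiplication with the input
```

With an identity kernel the convolution stack leaves the input unchanged, so the module simply squares the feature map, which makes the gating behaviour easy to verify.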
In the above method for detecting the cleaning effect of the concrete curing box based on the classification weight, passing the first attention feature map through a Softmax function to calculate a classification weight for each position so as to obtain the first classification weight score map includes: calculating the classification weight of each position according to the formula wi = exp(Ai) / Σi exp(Ai), where Ai is the feature value at each pixel position in the first attention feature map. Likewise, passing the second attention feature map through a Softmax function to obtain the second classification weight score map includes: calculating the classification weight of each position according to the formula wi = exp(Bi) / Σi exp(Bi), where Bi is the feature value at each pixel position in the second attention feature map.
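The score map computation above is an ordinary Softmax taken over all spatial positions of the attention feature map. A minimal NumPy sketch follows; subtracting the maximum first is a standard numerical-stability detail not stated in the claim, and `classification_weight_map` is an illustrative name.

```python
import numpy as np

def classification_weight_map(attention_map):
    """w_i = exp(A_i) / sum_j exp(A_j), taken over every pixel position i."""
    shifted = attention_map - attention_map.max()  # stability: exp of large values overflows
    e = np.exp(shifted)
    return e / e.sum()
```

The resulting weights are non-negative and sum to 1, so the subsequent position-wise product re-scales the attention feature map toward its most salient positions.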
In the above method for detecting the cleaning effect of the concrete curing box based on the classification weight, fusing the first classification weight feature map and the second classification weight feature map to obtain the classification feature map includes: calculating a pixel-wise weighted sum of the first classification weight feature map and the second classification weight feature map.
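A pixel-wise weighted sum is the simplest fusion consistent with the claim. The weights `alpha` and `beta` below are assumed hyperparameters, since the claim does not fix their values:

```python
import numpy as np

def fuse(map1, map2, alpha=0.5, beta=0.5):
    """Pixel-wise weighted sum of the two classification weight feature maps."""
    return alpha * map1 + beta * map2
```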
In the above method for detecting the cleaning effect of the concrete curing box based on the classification weight, passing the classification feature map through a classifier to obtain the classification result includes: passing the classification feature map through an encoder comprising one or more fully connected layers to obtain a classification feature vector; and passing the classification feature vector through a Softmax classification function to obtain the classification result.
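The classifier stage can be sketched as follows: the classification feature map is flattened, encoded by fully connected layers, and the final logits go through Softmax. The layer sizes, the ReLU activations, and the mapping of class index 0 to "qualified" are illustrative assumptions, not details fixed by the patent.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def classify(classification_feature_map, weights, biases):
    """Encoder of fully connected layers followed by a Softmax classifier."""
    v = classification_feature_map.ravel()       # flatten the feature map to a vector
    for W, b in zip(weights[:-1], biases[:-1]):
        v = np.maximum(W @ v + b, 0.0)           # hidden FC layer + ReLU (assumed)
    logits = weights[-1] @ v + biases[-1]        # final FC layer producing class logits
    probs = softmax(logits)
    return ("qualified" if probs.argmax() == 0 else "not qualified"), probs
```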
In the above method for detecting the cleaning effect of the concrete curing box based on the classification weight, the convolutional neural network is a deep residual network.
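A deep residual network (ResNet) rests on one idea: each block adds its input back to the output of its convolution stack, so gradients can flow through the identity path. A minimal sketch, with `transform` standing in for the block's convolution layers (illustrative):

```python
import numpy as np

def residual_block(x, transform):
    """Residual connection: output = ReLU(F(x) + x)."""
    return np.maximum(transform(x) + x, 0.0)
```

Even if `transform` collapses to zero, the block still passes a non-negative input through unchanged, which is why very deep stacks of such blocks remain trainable.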
According to another aspect of the present application, there is provided a system for detecting a cleaning effect of a concrete curing box based on a classification weight, including:
an image acquisition unit, configured to acquire an image of the concrete curing box to be detected, wherein the image includes an inner wall portion and a support frame surface portion of the concrete curing box to be detected;
a feature map generation unit, configured to pass the image obtained by the image acquisition unit through a convolutional neural network to obtain a feature map;
an attention feature map generation unit, configured to pass the feature map obtained by the feature map generation unit through a first attention mechanism module and a second attention mechanism module, respectively, to obtain a first attention feature map and a second attention feature map;
a first classification weight score map generation unit, configured to pass the first attention feature map obtained by the attention feature map generation unit through a Softmax function to calculate a classification weight for each position in the first attention feature map to obtain a first classification weight score map;
a second classification weight score map generation unit configured to pass the second attention feature map obtained by the attention feature map generation unit through a Softmax function to calculate a classification weight for each position in the second attention feature map to obtain a second classification weight score map;
a first classification weight feature map generation unit, configured to multiply the first classification weight score map obtained by the first classification weight score map generation unit and the first attention feature map obtained by the attention feature map generation unit position by position to obtain a first classification weight feature map;
a second classification weight feature map generation unit, configured to multiply the second classification weight score map obtained by the second classification weight score map generation unit and the second attention feature map obtained by the attention feature map generation unit position by position to obtain a second classification weight feature map;
a feature map fusion unit, configured to fuse the first classification weight feature map obtained by the first classification weight feature map generation unit and the second classification weight feature map obtained by the second classification weight feature map generation unit to obtain a classification feature map; and
a classification result generation unit, configured to pass the classification feature map obtained by the feature map fusion unit through a classifier to obtain a classification result, wherein the classification result indicates whether the cleaning effect on the inner wall and the support frame surface of the concrete curing box to be detected is qualified.
In the above system for detecting the cleaning effect of a concrete curing box based on classification weight, the attention feature map generation unit includes: a first convolution feature map generation subunit, configured to pass the feature map through a plurality of first convolution layers to obtain a first convolution feature map; a first multiplication subunit, configured to multiply the first convolution feature map and the feature map position by position to obtain the first attention feature map; a second convolution feature map generation subunit, configured to pass the feature map through a plurality of second convolution layers to obtain a second convolution feature map; and a second multiplication subunit, configured to multiply the second convolution feature map and the feature map position by position to obtain the second attention feature map.
In the above system for detecting the cleaning effect of a concrete curing box based on classification weight, the first classification weight score map generation unit is further configured to calculate the classification weight of each position in the first attention feature map according to the formula wi = exp(Ai) / Σi exp(Ai), where Ai is the feature value at each pixel position in the first attention feature map, to obtain the first classification weight score map; and the second classification weight score map generation unit is further configured to calculate the classification weight of each position in the second attention feature map according to the formula wi = exp(Bi) / Σi exp(Bi), where Bi is the feature value at each pixel position in the second attention feature map, to obtain the second classification weight score map.
In the above system for detecting the cleaning effect of the concrete curing box based on the classification weight, the feature map fusion unit is further configured to: calculating a pixel-wise weighted sum between the first and second classification weight feature maps to obtain the classification feature map.
In the above system for detecting a cleaning effect of a concrete curing box based on a classification weight, the classification result generating unit includes: an encoding subunit, configured to pass the classification feature map through an encoder to obtain a classification feature vector, the encoder comprising one or more fully-connected layers; and the classification subunit is used for enabling the classification feature vector to pass through a Softmax classification function so as to obtain the classification result.
In the above system for detecting the cleaning effect of the concrete curing box based on the classification weight, the convolutional neural network is a deep residual network.
According to still another aspect of the present application, there is provided an electronic apparatus including: a processor; and a memory in which computer program instructions are stored, which, when executed by the processor, cause the processor to perform the method of detecting the cleaning effect of the concrete curing box based on the classification weight as described above.
According to yet another aspect of the present application, there is provided a computer readable medium having stored thereon computer program instructions, which, when executed by a processor, cause the processor to execute the method of detecting the cleaning effect of a concrete curing box based on classification weights as described above.
According to the method, the system, and the electronic device for detecting the cleaning effect of a concrete curing box based on classification weight provided by the present application, images of the inner wall and the support frame surface of the concrete curing box are subjected to feature extraction and classification by a deep-learning-based computer vision technique, so that the cleaning effect of the inner wall and the support frame surface is detected.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 illustrates a scene diagram of a method for detecting the cleaning effect of a concrete curing box based on classification weight according to an embodiment of the present application.
Fig. 2 illustrates a flowchart of a method of detecting a cleaning effect of a concrete curing box based on classification weight according to an embodiment of the present application.
Fig. 3 illustrates an architecture diagram of a method for detecting the cleaning effect of a concrete curing box based on classification weight according to an embodiment of the present application.
Fig. 4 illustrates a flowchart of passing the feature map through the first attention mechanism module and the second attention mechanism module, respectively, to obtain the first attention feature map and the second attention feature map, in the method for detecting the cleaning effect of the concrete curing box based on the classification weight according to the embodiment of the present application.
Fig. 5 illustrates a flowchart of passing the classification feature map through a classifier to obtain a classification result, in the method for detecting the cleaning effect of the concrete curing box based on the classification weight according to the embodiment of the present application.
FIG. 6 illustrates a block diagram of a system for detecting the effectiveness of cleaning of a concrete curing box based on classification weights in accordance with an embodiment of the present application.
Fig. 7 illustrates a block diagram of an attention feature map generating unit in a detection system of a cleaning effect of a concrete curing box based on classification weight according to an embodiment of the present application.
Fig. 8 illustrates a block diagram of a classification result generation unit in the detection system of the cleaning effect of the concrete curing box based on the classification weight according to the embodiment of the present application.
FIG. 9 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Overview of a scene
As described above, a concrete curing box generally holds concrete test blocks inside the box so that experiments can be performed on them. The box needs to be cleaned after a test is completed, in particular the surface of the support frame in the inner cavity on which the concrete test blocks are placed; otherwise, subsequent experimental results will be affected.
At present, the cleaning effect of a concrete curing box is basically checked by manual observation, which works poorly because the box is large in size, deep, and lacks a good light source.
Therefore, a technical solution for detecting the cleaning effect of the concrete curing box is desired.
The applicant of the present application considered detecting the cleaning effect of the inner wall and the support frame surface of the concrete curing box by extracting and classifying features of their images with a deep-learning-based computer vision technique.
In practice, what remains on the inner wall of the concrete curing box and the surface of the support frame is mainly solidified concrete blocks, which usually do not appear in a single form. If the objects requiring attention can be identified during image feature extraction and properly classified, detection of the cleaning effect of the inner wall and the support frame surface is greatly facilitated.
Specifically, in the technical scheme of the application, an image of the concrete curing box containing the inner wall portion and the support frame surface portion is first obtained and input into a convolutional neural network to obtain a feature map. Next, the feature map is passed through a first attention mechanism and a second attention mechanism to extract the local features requiring attention, specifically the local features of solidified concrete block objects that may remain in the curing box. The first attention feature map and the second attention feature map are each passed through a Softmax function to calculate a classification weight for each position, the resulting score maps are multiplied with the corresponding attention feature maps to obtain a first classification weight feature map and a second classification weight feature map, and the two are then fused to obtain a classification feature map.
In this way, after the obtained classification feature map is passed through the classification function, a classification result can be obtained, and the classification result represents the cleaning effect of the inner wall of the concrete curing box and the surface of the support frame.
Based on this, the application provides a method for detecting the cleaning effect of a concrete curing box based on classification weight, which includes: acquiring an image of the concrete curing box to be detected, wherein the image includes an inner wall portion and a support frame surface portion of the concrete curing box to be detected; passing the image through a convolutional neural network to obtain a feature map; passing the feature map through a first attention mechanism module and a second attention mechanism module, respectively, to obtain a first attention feature map and a second attention feature map; passing the first attention feature map through a Softmax function to calculate a classification weight for each position to obtain a first classification weight score map; passing the second attention feature map through a Softmax function to calculate a classification weight for each position to obtain a second classification weight score map; multiplying the first classification weight score map and the first attention feature map position by position to obtain a first classification weight feature map; multiplying the second classification weight score map and the second attention feature map position by position to obtain a second classification weight feature map; fusing the first classification weight feature map and the second classification weight feature map to obtain a classification feature map; and passing the classification feature map through a classifier to obtain a classification result, wherein the classification result indicates whether the cleaning effect on the inner wall and the support frame surface of the concrete curing box to be detected is qualified.
Fig. 1 illustrates a scene diagram of a method for detecting the cleaning effect of a concrete curing box based on classification weight according to an embodiment of the present application.
As shown in fig. 1, in this application scenario, an image of the concrete curing box to be detected is first acquired by a camera (for example, C as illustrated in fig. 1), wherein the image includes an inner wall portion and a support frame surface portion of the concrete curing box to be detected. The image is then input into a server (for example, S as illustrated in fig. 1) on which the classification-weight-based detection algorithm for the cleaning effect of the concrete curing box is deployed, and the server processes the image with the algorithm to generate a detection result indicating whether the cleaning effect on the inner wall and the support frame surface of the concrete curing box to be detected is qualified.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary method
Fig. 2 illustrates a flowchart of a method of detecting a cleaning effect of a concrete curing box based on classification weight according to an embodiment of the present application. As shown in fig. 2, the method for detecting the cleaning effect of the concrete curing box based on the classification weight according to the embodiment of the present application includes: s110, acquiring an image of the concrete curing box to be detected, wherein the image of the concrete curing box to be detected comprises the inner wall part of the concrete curing box to be detected and the surface part of a support frame; s120, passing the image of the concrete curing box to be detected through a convolutional neural network to obtain a characteristic diagram; s130, enabling the feature map to pass through a first attention mechanism module and a second attention mechanism module respectively to obtain a first attention feature map and a second attention feature map; s140, passing the first attention feature map through a Softmax function to calculate a classification weight of each position in the first attention feature map so as to obtain a first classification weight score map; s150, passing the second attention feature map through a Softmax function to calculate a classification weight of each position in the second attention feature map so as to obtain a second classification weight score map; s160, performing dot multiplication on the first classification weight score map and the first attention feature map according to pixel positions to obtain a first classification weight feature map; s170, performing dot multiplication on the second classification weight score map and the second attention feature map according to pixel positions to obtain a second classification weight feature map; s180, fusing the first classification weight characteristic graph and the second classification weight characteristic graph to obtain a classification characteristic graph; and S190, 
passing the classification characteristic diagram through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the cleaning effect of the inner wall of the concrete curing box to be detected and the surface of the support frame is qualified or not.
Fig. 3 illustrates an architecture diagram of a method for detecting the cleaning effect of a concrete curing box based on classification weight according to an embodiment of the present application. As shown in fig. 3, in the network architecture, the acquired image of the concrete curing box to be detected (e.g., IN0 as illustrated in fig. 3) is first input into a convolutional neural network (e.g., CNN as illustrated in fig. 3) to obtain a feature map (e.g., F0 as illustrated in fig. 3). The feature map is then passed through a first attention mechanism module (e.g., ATN1 as illustrated in fig. 3) and a second attention mechanism module (e.g., ATN2 as illustrated in fig. 3), respectively, to obtain a first attention feature map (e.g., F1 as illustrated in fig. 3) and a second attention feature map (e.g., F2 as illustrated in fig. 3). Next, the first attention feature map is passed through a Softmax function to calculate a classification weight for each location in the first attention feature map to obtain a first classification weight score map (e.g., Fs1 as illustrated in fig. 3), and the second attention feature map is passed through a Softmax function to calculate a classification weight for each location in the second attention feature map to obtain a second classification weight score map (e.g., Fs2 as illustrated in fig. 3). Next, the first classification weight score map and the first attention feature map are dot-multiplied by pixel position to obtain a first classification weight feature map (e.g., Fw1 as illustrated in fig. 3), and the second classification weight score map and the second attention feature map are dot-multiplied by pixel position to obtain a second classification weight feature map (e.g., Fw2 as illustrated in fig. 3). Then, the first classification weight feature map and the second classification weight feature map are fused to obtain a classification feature map (e.g., Fc as illustrated in fig. 3).
And finally, passing the classification characteristic diagram through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the cleaning effect of the inner wall of the concrete curing box to be detected and the surface of the support frame is qualified or not.
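The data flow of fig. 3 can be sketched compactly. The following is a minimal, illustrative NumPy sketch (not the patented implementation): the CNN and the two attention mechanism modules are replaced by hypothetical element-wise gates, and only the Softmax, dot-multiplication, and fusion steps of the IN0 → F0 → F1/F2 → Fs1/Fs2 → Fw1/Fw2 → Fc pipeline are reproduced.

```python
import numpy as np

def softmax_map(a):
    # exp(a_i) / sum_j exp(a_j) over all pixel positions (max-shifted for stability)
    e = np.exp(a - a.max())
    return e / e.sum()

def pipeline(f0, gate1, gate2):
    # f0: feature map from the CNN; gate1/gate2: hypothetical stand-ins for
    # the two attention mechanism modules ATN1/ATN2
    f1, f2 = f0 * gate1, f0 * gate2              # first / second attention feature maps
    fs1, fs2 = softmax_map(f1), softmax_map(f2)  # classification weight score maps
    fw1, fw2 = fs1 * f1, fs2 * f2                # dot multiplication by pixel position
    return 0.5 * fw1 + 0.5 * fw2                 # weighted-sum fusion -> Fc

rng = np.random.default_rng(0)
f0 = rng.standard_normal((8, 8))
fc = pipeline(f0, rng.random((8, 8)), rng.random((8, 8)))
assert fc.shape == f0.shape
```

Every step preserves the spatial shape of the feature map, so the classifier at the end can consume Fc exactly as it would consume F0.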
In step S110, an image of the concrete curing box to be detected is acquired, and the image of the concrete curing box to be detected includes an inner wall portion and a support frame surface portion of the concrete curing box to be detected. As described above, in practice, what remains on the inner wall of the concrete curing box and the surface of the support frame is mainly solidified concrete lumps, which usually do not occur in a single form. Therefore, if the objects of concern can be identified during image feature extraction and properly classified, the detection of the cleaning effect of the inner wall of the concrete curing box and the surface of the support frame is greatly facilitated.
Therefore, in the technical solution of the present application, when the image of the concrete curing box to be detected is acquired, the shooting angle of the camera is adjusted so that the image of the concrete curing box to be detected includes the inner wall portion and the support frame surface portion of the concrete curing box to be detected.
In step S120, the image of the concrete curing box to be detected is passed through a convolutional neural network to obtain a feature map. That is, a deep convolutional neural network is used to process the image of the concrete curing box to be detected so as to extract the high-dimensional features in the image of the concrete curing box to be detected. Those skilled in the art will appreciate that convolutional neural networks have superior performance in extracting local spatial features of an image.
Preferably, in the present embodiment, the convolutional neural network is implemented as a deep residual network, e.g., ResNet 50. Compared with the traditional convolutional neural network, the deep residual network is an optimized network structure proposed on the basis of the traditional convolutional neural network, and it mainly solves the problem of vanishing gradients during training. The deep residual network introduces a residual structure, through which the network can be made deeper without the vanishing-gradient problem arising. Borrowing the cross-layer linking idea of highway networks, the residual structure breaks the convention of traditional neural networks that the input of layer N can only come from the output of layer N-1, and allows the output of a certain layer to skip several layers and serve directly as the input of a later layer. Its significance lies in providing a new direction for the degradation problem in which stacking more layers causes the error rate of the overall learning model to rise instead of fall.
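The skip-connection idea described above can be illustrated with a minimal sketch. This is not the actual ResNet 50 (which uses 2-D convolutions, batch normalization, and bottleneck blocks); the "convolutions" are stood in for by hypothetical per-position linear maps, but the y = F(x) + x structure is the one the residual network relies on.

```python
import numpy as np

def conv_like(x, w):
    # Stand-in for a convolution: a simple per-position linear map.
    return x @ w

def residual_block(x, w1, w2):
    # y = ReLU(F(x) + x): the identity shortcut lets the signal (and the
    # gradient) bypass the two weighted layers, which is what counters the
    # vanishing-gradient / degradation problem described above.
    h = np.maximum(conv_like(x, w1), 0.0)   # first layer + ReLU
    h = conv_like(h, w2)                    # second layer
    return np.maximum(h + x, 0.0)           # add the identity shortcut, then ReLU

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w1 = 0.1 * rng.standard_normal((8, 8))
w2 = 0.1 * rng.standard_normal((8, 8))
y = residual_block(x, w1, w2)
assert y.shape == x.shape
```

Note that if the weighted branch contributes nothing (all-zero weights), the block degenerates to the identity (up to the ReLU), which is exactly why deeper stacks of such blocks do not get worse than shallower ones.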
In step S130, the feature map is respectively passed through a first attention mechanism module and a second attention mechanism module to obtain a first attention feature map and a second attention feature map. That is, the feature map is passed through a first attention mechanism and a second attention mechanism respectively to extract the local features that require attention, namely, the inner wall portion and the support frame surface portion of the concrete curing box to be detected in the image of the concrete curing box to be detected.
Specifically, in one specific example of the present application, the process of passing the feature map through a first attention mechanism module to obtain a first attention feature map includes: first, the first feature map is passed through a plurality of first convolution layers to obtain a first convolution feature map; that is, in the technical solution of the present application, the first attention mechanism module includes a plurality of convolution layers, so as to extract the region of interest in the first feature map through the plurality of convolution layers. Then, the first convolution feature map and the first feature map are dot-multiplied by pixel position to obtain the first attention feature map. That is, the information in the first convolution feature map is applied to the first feature map so that the region of the first feature map that needs to be focused on is emphasized to obtain the first attention feature map.
Specifically, in this example, the process of passing the feature map through a second attention mechanism module to obtain a second attention feature map includes: first, the second feature map is passed through a plurality of second convolution layers to obtain a second convolution feature map; that is, in the technical solution of the present application, the second attention mechanism module includes a plurality of convolution layers, so as to extract the region of interest in the second feature map through the plurality of convolution layers. Here, in particular, the plurality of first convolution layers and the plurality of second convolution layers have different network structures. Then, the second convolution feature map and the second feature map are dot-multiplied by pixel position to obtain the second attention feature map. That is, the information in the second convolution feature map is applied to the second feature map so that the region that needs to be focused on in the second feature map is emphasized to obtain the second attention feature map.
Fig. 4 is a flowchart illustrating how the feature map is respectively passed through a first attention mechanism module and a second attention mechanism module to obtain a first attention feature map and a second attention feature map in the method for detecting the cleaning effect of the concrete curing box based on the classification weight according to the embodiment of the application. As shown in fig. 4, passing the feature map through a first attention mechanism module and a second attention mechanism module respectively to obtain a first attention feature map and a second attention feature map includes: S210, passing the first feature map through a plurality of first convolution layers to obtain a first convolution feature map; S220, dot-multiplying the first convolution feature map and the first feature map by pixel position to obtain the first attention feature map; S230, passing the second feature map through a plurality of second convolution layers to obtain a second convolution feature map; and S240, dot-multiplying the second convolution feature map and the second feature map by pixel position to obtain the second attention feature map.
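Steps S210–S240 can be sketched as follows. The convolution stacks are hypothetical stand-ins (per-position linear maps followed by a sigmoid gate); the two modules differ only in their weights, mirroring the requirement that the first and second convolution layers have different network structures.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_module(fmap, weights):
    # Hypothetical stand-in for the convolution stack: per-position linear
    # maps followed by a sigmoid gate in (0, 1), then element-wise
    # multiplication with the input feature map (S220 / S240).
    a = fmap
    for w in weights:            # S210 / S230: the "convolution" stack
        a = a @ w
    return fmap * sigmoid(a)     # emphasized regions are kept, others damped

rng = np.random.default_rng(0)
fmap = rng.standard_normal((6, 6))
w1, w2 = rng.standard_normal((6, 6)), rng.standard_normal((6, 6))
att1 = attention_module(fmap, [w1, w2])   # first attention feature map
att2 = attention_module(fmap, [w2, w1])   # second module: different structure/weights
assert att1.shape == fmap.shape
```

Because the gate lies in (0, 1), every position of the attention feature map is a down-weighted copy of the original feature map: attention here suppresses irrelevant regions rather than amplifying anything.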
In step S140, the first attention feature map is passed through a Softmax function to calculate a classification weight for each position in the first attention feature map to obtain a first classification weight score map. It should be understood that the first attention feature map focuses on local features of the feature map that need to be focused on, namely, features of the inner wall portion or the support frame surface portion of the concrete curing box to be detected in the image of the concrete curing box to be detected. Further, as described above, the inner wall of the concrete curing box and the surface of the support frame mainly have solidified concrete lumps remaining, and such solidified concrete lumps do not usually exist in a single form, so that if these objects to be focused can be identified during the extraction process of the image features and appropriately classified, the detection of the cleaning effect of the inner wall of the concrete curing box and the surface of the support frame is greatly facilitated.
Accordingly, in the embodiment of the present application, the first attention feature map is passed through a Softmax function to calculate a classification weight for each position in the first attention feature map to obtain a first classification weight score map. That is, the first attention feature map is passed through a Softmax function to calculate a probability value that each pixel location in the first attention feature map belongs to the concrete label.
More specifically, in the embodiment of the present application, the process of passing the first attention feature map through a Softmax function to calculate a classification weight of each location in the first attention feature map to obtain a first classification weight score map includes: calculating a classification weight for each location in the first attention feature map according to the following formula to obtain the first classification weight score map: classification weight = exp(Ai)/Σ exp(Ai), where Ai is the feature value of each pixel position in the first attention feature map.
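The formula above can be sketched directly (assuming, as the formula suggests, that the Softmax normalization runs over all pixel positions of the map):

```python
import numpy as np

def classification_weight_map(attention_map):
    # classification weight = exp(Ai) / Σ exp(Ai) over all pixel positions;
    # the max is subtracted first for numerical stability (it cancels in
    # the ratio, so the result is unchanged).
    e = np.exp(attention_map - attention_map.max())
    return e / e.sum()

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
S = classification_weight_map(A)
assert np.isclose(S.sum(), 1.0)   # the weights form a distribution over positions
assert S[1, 1] == S.max()         # the strongest activation gets the largest weight
```

Since Softmax is monotone, the score map ranks pixel positions exactly as the attention feature map does, but rescales them into a probability distribution that can be reapplied multiplicatively in steps S160/S170.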
In step S150, the second attention feature map is passed through a Softmax function to calculate a classification weight for each position in the second attention feature map to obtain a second classification weight score map. That is, the second attention feature map is passed through a Softmax function to calculate a probability value that each pixel location in the second attention feature map belongs to the concrete label, so as to obtain the second classification weight score map.
More specifically, in this embodiment of the present application, the process of passing the second attention feature map through a Softmax function to calculate a classification weight for each location in the second attention feature map to obtain a second classification weight score map includes: calculating a classification weight for each position in the second attention feature map according to the following formula to obtain the second classification weight score map: classification weight = exp(Bi)/Σ exp(Bi), where Bi is the feature value of each pixel position in the second attention feature map.
In step S160, the first classification weight score map and the first attention feature map are dot-multiplied by pixel position to obtain a first classification weight feature map. That is, the information in the first classification weight score map is applied to the first attention feature map to obtain the first classification weight feature map. It should be appreciated that in the first classification weight feature map, the regions belonging to the concrete label are further reinforced.
In step S170, the second classification weight score map and the second attention feature map are dot-multiplied by pixel position to obtain a second classification weight feature map. That is, the information in the second classification weight score map is applied to the second attention feature map to obtain the second classification weight feature map. It should be appreciated that in the second classification weight feature map, the regions belonging to the concrete label are further reinforced.
In step S180, the first classification weight feature map and the second classification weight feature map are fused to obtain a classification feature map. Specifically, in the embodiment of the present application, the first classification weight feature map and the second classification weight feature map are fused in a manner of calculating a weighted sum by pixel position between the first classification weight feature map and the second classification weight feature map to obtain the classification feature map.
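The pixel-wise weighted-sum fusion can be sketched as follows; the coefficient `alpha` is hypothetical, since the patent specifies "a weighted sum by pixel position" but not the weights themselves.

```python
import numpy as np

def fuse(fw1, fw2, alpha=0.5):
    # Pixel-wise weighted sum of the two classification weight feature maps;
    # alpha is a hypothetical fusion coefficient (not fixed by the patent).
    return alpha * fw1 + (1.0 - alpha) * fw2

fw1 = np.array([[1.0, 2.0], [3.0, 4.0]])
fw2 = np.array([[4.0, 3.0], [2.0, 1.0]])
fc = fuse(fw1, fw2)
assert np.allclose(fc, 2.5)              # equal weights average the two maps
assert np.allclose(fuse(fw1, fw1), fw1)  # fusing a map with itself is a no-op
```

Because the fusion is linear and position-wise, the classification feature map keeps the spatial layout of both branches, so regions reinforced by either attention module survive into the classifier input.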
In step S190, the classification feature map is passed through a classifier to obtain a classification result, where the classification result is used to indicate whether the cleaning effect of the inner wall of the concrete curing box to be detected and the surface of the support frame is qualified. That is, in the technical solution of the present application, the feature extraction stage and the classification stage are decoupled.
More specifically, in the embodiment of the present application, the classifier includes an encoder configured to encode the classification feature map so as to map the features in the classification feature map into a label space to obtain a classification feature vector. In one particular example, the encoder includes one or more fully connected layers so as to leverage the information at the various locations in the classification feature map. Then, the classification feature vector is input into a classification function to obtain a first probability that the cleaning effect of the inner wall of the concrete curing box to be detected and the surface of the support frame is qualified, and a second probability that the cleaning effect is unqualified. Then, the classification result is generated based on the first probability and the second probability, and the classification result is used for indicating whether the cleaning effect of the inner wall of the concrete curing box to be detected and the surface of the support frame is qualified or not.
Fig. 5 is a flowchart illustrating how the classification feature map is passed through a classifier to obtain a classification result in the method for detecting the cleaning effect of the concrete curing box based on the classification weight according to an embodiment of the present application. As shown in fig. 5, in the embodiment of the present application, passing the classification feature map through a classifier to obtain a classification result includes: S310, passing the classification feature map through an encoder to obtain a classification feature vector, where the encoder includes one or more fully connected layers; and S320, passing the classification feature vector through a Softmax classification function to obtain the classification result.
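Steps S310 and S320 can be sketched with a single hypothetical fully connected layer standing in for the encoder and a two-class Softmax for the qualified/unqualified decision:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(classification_feature_map, W, b):
    # S310: a hypothetical fully connected layer maps the flattened
    # classification feature map into a 2-dim label space.
    # S320: Softmax turns the two scores into the first probability
    # (qualified) and the second probability (unqualified).
    v = classification_feature_map.reshape(-1) @ W + b
    p = softmax(v)
    return ("qualified" if p[0] >= p[1] else "unqualified"), p

rng = np.random.default_rng(0)
fc = rng.standard_normal((4, 4))        # classification feature map (toy size)
W = 0.1 * rng.standard_normal((16, 2))  # hypothetical encoder weights
b = np.zeros(2)
label, probs = classify(fc, W, b)
assert np.isclose(probs.sum(), 1.0)
```

In a trained system the weights `W`, `b` would of course be learned; here they are random placeholders used only to show the shape of the computation.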
In summary, the method for detecting the cleaning effect of the concrete curing box based on the classification weight according to the embodiment of the present application has been described. The method extracts and classifies features of images of the inner wall of the concrete curing box and the surface of the support frame based on deep-learning computer vision technology, so as to detect the cleaning effect of the inner wall of the concrete curing box and the surface of the support frame.
Exemplary System
FIG. 6 illustrates a block diagram of a system for detecting the effectiveness of cleaning of a concrete curing box based on classification weights in accordance with an embodiment of the present application.
As shown in fig. 6, the system 600 for detecting the cleaning effect of the concrete curing box based on the classification weight according to the embodiment of the present application includes: an image acquiring unit 610 to be detected, configured to acquire an image of the concrete curing box to be detected, where the image of the concrete curing box to be detected includes an inner wall portion and a support frame surface portion of the concrete curing box to be detected; a feature map generating unit 620, configured to pass the image of the concrete curing box to be detected obtained by the image acquiring unit 610 through a convolutional neural network to obtain a feature map; an attention feature map generating unit 630, configured to pass the feature map obtained by the feature map generating unit 620 through a first attention mechanism module and a second attention mechanism module, respectively, to obtain a first attention feature map and a second attention feature map; a first classification weight score map generating unit 640, configured to pass the first attention feature map obtained by the attention feature map generating unit 630 through a Softmax function to calculate a classification weight for each location in the first attention feature map to obtain a first classification weight score map; a second classification weight score map generating unit 650, configured to pass the second attention feature map obtained by the attention feature map generating unit 630 through a Softmax function to calculate a classification weight for each location in the second attention feature map to obtain a second classification weight score map; a first classification weight feature map generating unit 660, configured to dot-multiply the first classification weight score map obtained by the first classification weight score map generating unit 640 with the first attention feature map obtained by the attention feature map generating unit 630 by pixel position to obtain a first classification weight feature map; a second classification weight feature map generating unit 670, configured to dot-multiply the second classification weight score map obtained by the second classification weight score map generating unit 650 with the second attention feature map obtained by the attention feature map generating unit 630 by pixel position to obtain a second classification weight feature map; a feature map fusing unit 680, configured to fuse the first classification weight feature map obtained by the first classification weight feature map generating unit 660 with the second classification weight feature map obtained by the second classification weight feature map generating unit 670 to obtain a classification feature map; and a classification result generating unit 690, configured to pass the classification feature map obtained by the feature map fusing unit 680 through a classifier to obtain a classification result, where the classification result is used to indicate whether the cleaning effect of the inner wall of the concrete curing box to be detected and the surface of the support frame is qualified.
In an example, in the above detection system 600, as shown in fig. 7, the attention feature map generating unit 630 includes: a first convolution feature map generating subunit 631, configured to pass the first feature map through a plurality of first convolution layers to obtain a first convolution feature map; a first dot multiplication subunit 632, configured to dot-multiply the first convolution feature map and the first feature map by pixel position to obtain the first attention feature map; a second convolution feature map generating subunit 633, configured to pass the second feature map through a plurality of second convolution layers to obtain a second convolution feature map; and a second dot multiplication subunit 634, configured to dot-multiply the second convolution feature map and the second feature map by pixel position to obtain the second attention feature map.
In an example, in the detection system 600, the first classification weight score map generating unit 640 is further configured to: calculate a classification weight for each location in the first attention feature map according to the following formula to obtain the first classification weight score map: classification weight = exp(Ai)/Σ exp(Ai), where Ai is the feature value of each pixel position in the first attention feature map; and the second classification weight score map generating unit 650 is further configured to: calculate a classification weight for each position in the second attention feature map according to the following formula to obtain the second classification weight score map: classification weight = exp(Bi)/Σ exp(Bi), where Bi is the feature value of each pixel position in the second attention feature map.
In an example, in the above detection system 600, the feature map fusion unit 680 is further configured to: calculating a pixel-wise weighted sum between the first and second classification weight feature maps to obtain the classification feature map.
In one example, in the above-mentioned detection system 600, as shown in fig. 8, the classification result generating unit 690 includes: an encoding subunit 691, configured to pass the classification feature map through an encoder to obtain a classification feature vector, the encoder comprising one or more fully connected layers; and a classification subunit 692, configured to pass the classification feature vector through a Softmax classification function to obtain the classification result.
In the above system for detecting the cleaning effect of the concrete curing box based on the classification weight, the convolutional neural network is a deep residual network.
Here, it can be understood by those skilled in the art that the detailed functions and operations of the respective units and modules in the above-described detection system 600 have been described in detail in the above description of the method for detecting the cleaning effect of the concrete curing box based on the classification weight with reference to fig. 1 to 5, and thus a repetitive description thereof will be omitted.
As described above, the detection system 600 according to the embodiment of the present application can be implemented in various terminal devices, such as a server for the cleaning effect detection of a concrete curing box, and the like. In one example, the detection system 600 according to the embodiments of the present application may be integrated into the terminal device as a software module and/or a hardware module. For example, the detection system 600 may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the detection system 600 may also be one of many hardware modules of the terminal device.
Alternatively, in another example, the detection system 600 and the terminal device may be separate devices, and the detection system 600 may be connected to the terminal device through a wired and/or wireless network and transmit the interaction information according to an agreed data format.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 9.
FIG. 9 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
As shown in fig. 9, the electronic device 10 includes one or more processors 11 and memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device 13 may include, for example, a keyboard, a mouse, and the like.
The output device 14 can output various information including the detection result to the outside. The output devices 14 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for simplicity, only some of the components of the electronic device 10 relevant to the present application are shown in fig. 9, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method for detecting the cleaning effectiveness of a concrete curing box based on classification weights according to various embodiments of the present application described in the "exemplary methods" section above of this specification.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method for detecting the cleaning effectiveness of a concrete curing box based on classification weights according to various embodiments of the present application described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are only given as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.
Claims (10)
1. A method for detecting the cleaning effect of a concrete curing box based on classification weight is characterized by comprising the following steps:
acquiring an image of a concrete curing box to be detected, wherein the image of the concrete curing box to be detected comprises an inner wall part and a support frame surface part of the concrete curing box to be detected;
passing the image of the concrete curing box to be detected through a convolutional neural network to obtain a feature map;
respectively passing the feature maps through a first attention mechanism module and a second attention mechanism module to obtain a first attention feature map and a second attention feature map;
passing the first attention feature map through a Softmax function to calculate a classification weight for each location in the first attention feature map to obtain a first classification weight score map;
passing the second attention feature map through a Softmax function to calculate a classification weight for each location in the second attention feature map to obtain a second classification weight score map;
performing dot multiplication on the first classification weight score map and the first attention feature map according to pixel positions to obtain a first classification weight feature map;
performing dot multiplication on the second classification weight score map and the second attention feature map according to pixel positions to obtain a second classification weight feature map;
fusing the first classification weight feature map and the second classification weight feature map to obtain a classification feature map; and
passing the classification feature map through a classifier to obtain a classification result, wherein the classification result indicates whether the cleaning effect on the inner wall and the support frame surface of the concrete curing box to be detected is qualified.
2. The method for detecting the cleaning effect of the concrete curing box based on the classification weight as claimed in claim 1, wherein the step of passing the feature map through a first attention mechanism module and a second attention mechanism module respectively to obtain a first attention feature map and a second attention feature map comprises the steps of:
passing the first feature map through a plurality of first convolution layers to obtain a first convolution feature map;
performing dot-by-pixel multiplication on the first convolution feature map and the first feature map to obtain the first attention feature map;
passing the second feature map through a plurality of second convolutional layers to obtain a second convolutional feature map; and
performing dot-by-pixel multiplication on the second convolution feature map and the second feature map to obtain the second attention feature map.
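For illustration only, claim 2's attention branch (convolution layers whose output is multiplied pixel by pixel with the input feature map) might be sketched in numpy as follows. The 1×1 convolution, the (channels, height, width) layout, and the random weights are assumptions made for the sketch, not details taken from the patent:

```python
import numpy as np

def conv1x1(feat, weights):
    """1x1 convolution over a (C, H, W) feature map: a per-pixel channel mix."""
    c, h, w = feat.shape
    # (C_out, C_in) @ (C_in, H*W) -> (C_out, H*W), then restore spatial shape
    return (weights @ feat.reshape(c, h * w)).reshape(weights.shape[0], h, w)

def attention_module(feat, conv_weights):
    """Claim 2: pass the feature map through convolution layers, then
    multiply the convolution output with the input feature map pixel by pixel."""
    conv_feat = conv1x1(feat, conv_weights)
    return conv_feat * feat  # element-wise (pixel-position) product

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))          # (channels, height, width)
w = rng.standard_normal((8, 8)) / np.sqrt(8)   # assumed 1x1 kernel, C_out == C_in
att = attention_module(feat, w)
print(att.shape)  # (8, 4, 4)
```

A real implementation would stack several learned convolution layers; the single 1×1 layer here only illustrates the shape bookkeeping and the pixel-wise product.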
3. The method for detecting the cleaning effect of the concrete curing box based on the classification weight according to claim 1,
wherein passing the first attention feature map through a Softmax function to calculate a classification weight for each location in the first attention feature map to obtain a first classification weight score map, comprising:
calculating the classification weight of each position in the first attention feature map according to the formula w_i = exp(A_i) / Σ_i exp(A_i), where A_i is the feature value at pixel position i in the first attention feature map, to obtain the first classification weight score map;
wherein passing the second attention feature map through a Softmax function to calculate a classification weight for each location in the second attention feature map to obtain a second classification weight score map, comprising:
calculating the classification weight of each position in the second attention feature map according to the formula w_i = exp(B_i) / Σ_i exp(B_i), where B_i is the feature value at pixel position i in the second attention feature map, to obtain the second classification weight score map.
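The formula in claim 3 is an ordinary softmax taken over pixel positions. A minimal numpy sketch, assuming a single-channel attention map and adding the usual max-shift for numerical stability (which does not change the result):

```python
import numpy as np

def classification_weight_map(att_map):
    """Claim 3: w_i = exp(A_i) / sum_i exp(A_i), where A_i is the feature
    value at pixel position i of the attention feature map."""
    shifted = att_map - att_map.max()  # max-shift: avoids overflow in exp
    e = np.exp(shifted)
    return e / e.sum()

a = np.array([[1.0, 2.0],
              [3.0, 4.0]])
w = classification_weight_map(a)
print(w.sum())  # sums to 1 (up to float rounding): a distribution over positions
```

Positions with larger attention values receive larger classification weights, which is what lets the subsequent dot multiplication emphasize the discriminative regions of the box image.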
4. The method for detecting the cleaning effect of the concrete curing box based on the classification weight according to claim 1, wherein the step of fusing the first classification weight feature map and the second classification weight feature map to obtain a classification feature map comprises the steps of:
calculating a pixel-wise weighted sum between the first and second classification weight feature maps to obtain the classification feature map.
5. The method for detecting the cleaning effect of the concrete curing box based on the classification weight as claimed in claim 1, wherein the step of passing the classification feature map through a classifier to obtain a classification result comprises the steps of:
passing the classification feature map through an encoder to obtain a classification feature vector, the encoder comprising one or more fully-connected layers; and
passing the classification feature vector through a Softmax classification function to obtain the classification result.
6. The method for detecting the cleaning effect of the concrete curing box based on the classification weight according to claim 1, wherein the convolutional neural network is a deep residual network.
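Read together, claims 1 to 6 describe a pipeline: backbone feature map, two attention branches, per-position softmax weights, pixel-wise re-weighting, weighted-sum fusion, then an encoder and classifier. A minimal end-to-end numpy sketch under assumed shapes, with random stand-ins for the CNN backbone output, the attention convolutions, and the classifier weights (the equal 0.5/0.5 fusion weights are likewise an assumption):

```python
import numpy as np

rng = np.random.default_rng(42)

def spatial_softmax(m):
    """Per-position softmax over an entire feature map (claim 3's formula)."""
    e = np.exp(m - m.max())
    return e / e.sum()

def attention_branch(feat, w_conv):
    """One branch: conv (1x1 here), pixel-wise product with the input,
    softmax classification weights, then re-weighting (claims 1-3)."""
    c, h, wd = feat.shape
    att = (w_conv @ feat.reshape(c, -1)).reshape(c, h, wd) * feat
    weights = spatial_softmax(att)   # classification weight score map
    return weights * att             # classification weight feature map

feat = rng.standard_normal((8, 4, 4))  # stand-in for the CNN backbone output
branch1 = attention_branch(feat, rng.standard_normal((8, 8)) / 8)
branch2 = attention_branch(feat, rng.standard_normal((8, 8)) / 8)

# Claim 4: fuse the two branches with a pixel-wise weighted sum.
fused = 0.5 * branch1 + 0.5 * branch2

# Claim 5: encode to a vector (global average pooling plus one dense layer
# here), then a softmax over the two classes "qualified" / "not qualified".
vec = fused.mean(axis=(1, 2))
logits = rng.standard_normal((2, 8)) @ vec
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print("qualified" if probs[0] > probs[1] else "not qualified")
```

With trained weights in place of the random stand-ins, `probs` would give the qualified/unqualified decision for the curing-box image of claim 1.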
7. A system for detecting the cleaning effect of a concrete curing box based on classification weight, comprising:
the device comprises an image acquisition unit to be detected, a processing unit and a processing unit, wherein the image acquisition unit is used for acquiring an image of the concrete curing box to be detected, and the image of the concrete curing box to be detected comprises an inner wall part and a support frame surface part of the concrete curing box to be detected;
the characteristic diagram generating unit is used for enabling the image of the concrete curing box to be detected, which is obtained by the image obtaining unit to be detected, to pass through a convolutional neural network so as to obtain a characteristic diagram;
an attention feature map generating unit, configured to pass the first feature map obtained by the feature map generating unit through a first attention mechanism module and a second attention mechanism module, respectively, to obtain a first attention feature map and a second attention feature map;
a first classification weight score map generation unit, configured to pass the first attention feature map obtained by the attention feature map generation unit through a Softmax function to calculate a classification weight for each position in the first attention feature map to obtain a first classification weight score map;
a second classification weight score map generation unit configured to pass the second attention feature map obtained by the attention feature map generation unit through a Softmax function to calculate a classification weight for each position in the second attention feature map to obtain a second classification weight score map;
a first classification weight feature map generation unit, configured to perform dot-by-pixel multiplication on the first classification weight score map obtained by the first classification weight score map generation unit and the first attention feature map obtained by the attention feature map generation unit to obtain a first classification weight feature map;
a second classification weight feature map generation unit, configured to perform dot-by-pixel multiplication on the second classification weight score map obtained by the second classification weight score map generation unit and the second attention feature map obtained by the attention feature map generation unit to obtain a second classification weight feature map;
a feature map fusion unit, configured to fuse the first classification weight feature map obtained by the first classification weight feature map generation unit and the second classification weight feature map obtained by the second classification weight feature map generation unit to obtain a classification feature map; and
a classification result generation unit, configured to pass the classification feature map obtained by the feature map fusion unit through a classifier to obtain a classification result, wherein the classification result indicates whether the cleaning effect on the inner wall and the support frame surface of the concrete curing box to be detected is qualified.
8. The system for detecting cleaning effect of concrete curing box according to claim 7, wherein said attention feature map generating unit includes:
a first convolution feature map generation subunit, configured to pass the first feature map through a plurality of first convolution layers to obtain a first convolution feature map;
a first dot multiplication subunit, configured to perform dot-by-pixel multiplication on the first convolution feature map and the first feature map to obtain the first attention feature map;
a second convolution feature map generation subunit, configured to pass the second feature map through a plurality of second convolution layers to obtain a second convolution feature map; and
a second dot multiplication subunit, configured to perform dot-by-pixel multiplication on the second convolution feature map and the second feature map to obtain the second attention feature map.
9. The system for detecting the cleaning effect of the concrete curing box according to claim 7, wherein the first classification weight score map generation unit is further configured to: calculate the classification weight of each position in the first attention feature map according to the formula w_i = exp(A_i) / Σ_i exp(A_i), where A_i is the feature value at pixel position i in the first attention feature map, to obtain the first classification weight score map;
wherein the second classification weight score map generation unit is further configured to: calculate the classification weight of each position in the second attention feature map according to the formula w_i = exp(B_i) / Σ_i exp(B_i), where B_i is the feature value at pixel position i in the second attention feature map, to obtain the second classification weight score map.
10. An electronic device, comprising:
a processor; and
a memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the method for detecting the cleaning effect of a concrete curing box based on classification weight according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011465952.2A CN112489033A (en) | 2020-12-13 | 2020-12-13 | Method for detecting cleaning effect of concrete curing box based on classification weight |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011465952.2A CN112489033A (en) | 2020-12-13 | 2020-12-13 | Method for detecting cleaning effect of concrete curing box based on classification weight |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112489033A true CN112489033A (en) | 2021-03-12 |
Family
ID=74917483
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011465952.2A Pending CN112489033A (en) | 2020-12-13 | 2020-12-13 | Method for detecting cleaning effect of concrete curing box based on classification weight |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112489033A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118061345A (en) * | 2024-04-19 | 2024-05-24 | 中国电建集团昆明勘测设计研究院有限公司 | Water spraying maintenance method, device and equipment for precast beam and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109271878A (en) * | 2018-08-24 | 2019-01-25 | 北京地平线机器人技术研发有限公司 | Image-recognizing method, pattern recognition device and electronic equipment |
WO2019153908A1 (en) * | 2018-02-11 | 2019-08-15 | 北京达佳互联信息技术有限公司 | Image recognition method and system based on attention model |
US20190311223A1 (en) * | 2017-03-13 | 2019-10-10 | Beijing Sensetime Technology Development Co., Ltd. | Image processing methods and apparatus, and electronic devices |
CN111046962A (en) * | 2019-12-16 | 2020-04-21 | 中国人民解放军战略支援部队信息工程大学 | Sparse attention-based feature visualization method and system for convolutional neural network model |
CN111738113A (en) * | 2020-06-10 | 2020-10-02 | 杭州电子科技大学 | Road extraction method of high-resolution remote sensing image based on double-attention machine system and semantic constraint |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115203380B (en) | Text processing system and method based on multi-mode data fusion | |
JP2019008778A (en) | Captioning region of image | |
CN108960189B (en) | Image re-identification method and device and electronic equipment | |
CN115375691B (en) | Image-based semiconductor diffusion paper source defect detection system and method thereof | |
CN114782882B (en) | Video target behavior anomaly detection method and system based on multi-modal feature fusion | |
CN108665484B (en) | Danger source identification method and system based on deep learning | |
CN115620303B (en) | Personnel file intelligent management system | |
CN115471216B (en) | Data management method of intelligent laboratory management platform | |
Zhang et al. | Multiple adverse weather conditions adaptation for object detection via causal intervention | |
CN112508041A (en) | Training method of neural network for spray control based on classification result label | |
CN116992396A (en) | Redundancy self-adaptive multi-mode robust fusion learning method and system | |
CN117036778A (en) | Potential safety hazard identification labeling method based on image-text conversion model | |
CN112489033A (en) | Method for detecting cleaning effect of concrete curing box based on classification weight | |
Wang et al. | Transferring CLIP's Knowledge into Zero-Shot Point Cloud Semantic Segmentation | |
CN112960213A (en) | Intelligent package quality detection method using characteristic probability distribution representation | |
CN116247824B (en) | Control method and system for power equipment | |
CN112767342A (en) | Intelligent gas detection method based on double-branch inference mechanism | |
CN112465805A (en) | Neural network training method for quality detection of steel bar stamping and bending | |
CN112819044A (en) | Method for training neural network for target operation task compensation of target object | |
CN112380948A (en) | Training method and system for object re-recognition neural network and electronic equipment | |
CN117274689A (en) | Detection method and system for detecting defects of packaging box | |
CN112418353A (en) | Neural network training method for battery diaphragm abnormity detection | |
CN111127502B (en) | Method and device for generating instance mask and electronic equipment | |
CN112633141A (en) | Method for detecting concrete impact resistance based on double attention mechanism | |
CN112529093A (en) | Method for testing mold cleaning effect based on sample dimension weighting of pre-detection weight |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
TA01 | Transfer of patent application right | Effective date of registration: 20240103. Address after: Room 408-8, 4th Floor, Building 2, Haichuang Technology Center, Cangqian Street, Yuhang District, Hangzhou City, Zhejiang Province, 313000. Applicant after: HANGZHOU ZHUILIE TECHNOLOGY Co.,Ltd. Address before: 226200 4th floor, 399 Nanhai Road, Binhai Park, Qidong City, Nantong City, Jiangsu Province. Applicant before: Nantong Yunda Information Technology Co.,Ltd. |