CN115546223A - Method and system for detecting loss of fastening bolt of equipment under train - Google Patents
- Publication number: CN115546223A
- Application number: CN202211546124.0A
- Authority
- CN
- China
- Prior art keywords
- image
- bolt
- detected
- preset reference
- reference image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0008—Industrial image inspection checking presence/absence
- G06T5/80—Geometric correction
- G06T7/10—Segmentation; Edge detection
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T2207/10004—Still image; Photographic image
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30248—Vehicle exterior or interior
Abstract
According to the method and system for detecting missing fastening bolts on under-train equipment, a seeded graph neural network performs feature point extraction and feature matching directly on the preset reference image and the image to be detected, so that the detection reference points are extracted accurately and the corrected image obtained by perspective transformation is guaranteed to be correct. At the same time, the problem of detecting missing bolts is converted into detecting normal bolts, which solves the difficulty of obtaining samples of missing bolts. The effective foreground features of the region of interest are highlighted and background interference is eliminated, improving the precision of train-bottom image segmentation and bolt identification, and ultimately improving the accuracy of the bolt-missing detection result. In the train-bottom bolt-missing recognition environment, where contamination, illumination and similar factors cause large differences between the image to be detected and the original image, the inspection robot can extract more feature points, can recognize any region at any point position, outputs the bolt-missing information in structured form, and has strong practicability and wide applicability.
Description
Technical Field
The invention relates to methods for detecting missing fastening bolts, and in particular to a method and system for detecting missing fastening bolts on under-train equipment.
Background
As a common mechanical part, the bolt is widely applied to daily life and industrial production and manufacturing, such as automobile manufacturing, rail transit, aerospace and the like, and the health state of the bolt has very important significance for normal and safe operation of equipment.
As a lifeline of national and social economic development and of daily travel, the rail transit industry cannot afford to neglect safety. To ensure safe subway operation, workers must inspect the train state regularly, and one of these checks is whether any bolt is missing. However, owing to the complexity of a train, the fastening bolts on a single train can number several hundred; combined with the industry's strict train maintenance requirements, bolt-missing detection is very heavy work.
Traditional manual bolt-missing inspection carries high time and labor costs; the repetitive work and the complex under-train environment easily cause visual fatigue in maintainers, leading to false detections and missed detections. For inspecting trains that contain large numbers of bolts, this approach falls short in both economy and reliability.
In recent years, with the continuous development of artificial intelligence, structural health detection methods based on computer vision have received wide attention from academia and industry. Taking rail transit as an example, bolt defects at the bottom of a train are a common equipment defect. Existing steel-structure bolt-missing detection methods mainly use traditional machine learning or deep learning to train a classification model or object detection model on damaged and normal images, and then apply the model to detect whether bolts are missing in an acquired image region.
Although inspection robots have been used for defect detection in rail transit lines, there are difficulties in identifying defects in bolts at the bottom of trains in rail transit scenarios.
Training current deep-learning object detection models requires a large number of bolt-missing defect samples. However, images of missing bolts are hard to collect in rail transit, and the missing states are varied, so it is difficult to obtain sufficient bolt-missing image samples; as a result, the trained object detection model has low precision. Moreover, the captured bolt-connection images contain background information, which degrades the accuracy of the detection results.
Second, existing methods generally apply deep-learning object detection to the bolt-missing region. An object detection dataset is difficult to collect and produce, and robustness against on-site environmental interference is low. Such methods can also only recognize specific regions: for newly added equipment, data must be collected again for further training, which limits functional deployment.
Meanwhile, images of under-train equipment change easily under environmental interference such as illumination, oil stains and dust, producing different shooting effects, and robot navigation and the gimbal carry certain positioning errors. Matching algorithms based on features such as SIFT, SURF and ORB therefore cannot obtain a correct transformation matrix between the inspection robot's reference image and inspection image at every point under the train.
Therefore, there is a need for a new method and system for detecting the absence of fastening bolts for equipment under a train.
Disclosure of Invention
In order to solve the defects of the prior art, the invention aims to provide a method and a system for detecting the absence of fastening bolts of equipment under a train.
In order to achieve the above object, the present invention adopts the following technical solutions:
a method for detecting the loss of fastening bolts of equipment under a train comprises the following steps:
s1, acquiring a to-be-detected image and a preset reference image corresponding to a bolt at the same part of equipment to be detected;
s2, carrying out image registration on the image to be detected and a preset reference image, and correcting the image to be detected by utilizing perspective transformation to obtain a corrected image;
s3, identifying and correcting the position of the bolt in the image by adopting an image segmentation model;
and S4, confirming the missing result of the bolt at the bolt point position to be detected based on the bolt position information in the segmentation mask image of the preset reference image and the bolt position information in the corrected image.
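Steps S1 to S4 can be sketched as a minimal, self-contained pipeline. Everything below is a hypothetical illustration: the registration and segmentation stand-ins (identity homography, brightness threshold) replace the SGMNet registration and SegNeXt segmentation that the method actually uses.

```python
import numpy as np

def register(image, reference):
    """S2 stand-in: estimate a homography between image and reference.
    Identity placeholder; the patent uses SGMNet feature matching."""
    return np.eye(3)

def warp(image, H):
    """S2 stand-in: apply the perspective correction. With the identity
    homography this is a no-op (a real implementation would warp pixels)."""
    return image

def segment_bolts(image):
    """S3 stand-in for the segmentation model: threshold bright pixels as 'bolt'."""
    return (image > 0.5).astype(np.uint8)

def bolt_missing(ref_mask, det_mask, iou_thresh=0.5):
    """S4: a bolt is judged missing when the mask IoU falls below the threshold."""
    inter = np.logical_and(ref_mask, det_mask).sum()
    union = np.logical_or(ref_mask, det_mask).sum()
    iou = inter / union if union else 0.0
    return iou < iou_thresh

def detect(image, reference):
    H = register(image, reference)           # S2: registration
    corrected = warp(image, H)               # S2: perspective correction
    ref_mask = segment_bolts(reference)      # S3: segment reference image
    det_mask = segment_bolts(corrected)      # S3: segment corrected image
    return bolt_missing(ref_mask, det_mask)  # S4: IoU comparison
```

With identical images the bolt is reported present; with the bolt region blanked out it is reported missing.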
And acquiring the to-be-detected image and the preset reference image corresponding to the bolts at different parts of the to-be-detected equipment by adopting a fixed-point inspection shooting mode.
The image registration in step S2 adopts an SGMNet algorithm.
The image segmentation model in step S3 includes a SegNeXt model.
The bolt missing confirmation in the step S4 includes the following steps:
a1, calculating a circumscribed rectangle of each bolt position in a preset reference image segmentation mask image;
a2, traversing all circumscribed rectangular areas, and presetting an IoU score of each circumscribed rectangle of the reference image and the image to be detected;
and A3, judging that the bolt at the position is missing when the IoU is smaller than a specified threshold value.
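A minimal sketch of steps A1 to A3, assuming the reference mask labels each bolt region with a distinct positive integer; function names are illustrative, not from the patent:

```python
import numpy as np

def bounding_boxes(mask):
    """A1: circumscribed rectangle (x0, y0, x1, y1) of each labeled bolt region."""
    boxes = []
    for label in np.unique(mask):
        if label == 0:          # 0 = background
            continue
        ys, xs = np.nonzero(mask == label)
        boxes.append((xs.min(), ys.min(), xs.max() + 1, ys.max() + 1))
    return boxes

def box_iou(box, ref_mask, det_mask):
    """A2: IoU of the two segmentation masks restricted to one reference box."""
    x0, y0, x1, y1 = box
    r = ref_mask[y0:y1, x0:x1] > 0
    d = det_mask[y0:y1, x0:x1] > 0
    union = np.logical_or(r, d).sum()
    return np.logical_and(r, d).sum() / union if union else 0.0

def missing_bolts(ref_mask, det_mask, thresh=0.5):
    """A3: indices of reference bolts whose IoU falls below the threshold."""
    boxes = bounding_boxes(ref_mask)
    return [i for i, b in enumerate(boxes)
            if box_iou(b, ref_mask, det_mask) < thresh]
```

Given a reference mask with two bolts and a detected mask where the second bolt is absent, `missing_bolts` flags index 1.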
The preset reference image in the step S1 is a front view of the bolt.
A train under-train equipment fastening bolt missing detection system comprises an inspection robot unit, an image correction unit, an image segmentation unit and a comparison unit;
the inspection robot is used for acquiring images to be detected and preset reference images corresponding to bolts at different parts of equipment to be detected by adopting a fixed-point inspection shooting mode and transmitting the images to the image correction unit;
the image correction unit is used for carrying out image registration on the image to be detected and a preset reference image and correcting the image to be detected to obtain a corrected image;
the image segmentation unit is used for identifying bolt position information in a segmentation mask image of an image to be detected and a preset reference image;
the comparison unit is used for confirming the bolt missing result of the bolt point position to be detected through the bolt position information in the segmentation mask image of the image to be detected and the preset reference image.
The inspection robot unit is also provided with a bolt information base, which includes the equipment information corresponding to the preset reference image;
and based on the bolt missing result, the inspection robot unit outputs the equipment information of the missing bolt.
Further, the inspection robot unit outputs missing information of the bolt in the image to be detected.
The invention has the advantages that:
according to the method and the system for detecting the loss of the fastening bolt of the equipment under the train, the characteristic point extraction and the characteristic matching are directly carried out on the preset reference image and the image to be detected by using the graph neural network of the seed pattern, the accurate extraction of the detection reference point is realized, the correctness of the corrected image obtained by perspective transformation is ensured, meanwhile, the problem of detecting the loss of the bolt is converted into the problem of detecting the normal bolt, and the problem that a sample of the lost bolt is difficult to obtain is solved; the effective foreground characteristics of the interested region are highlighted, background interference is eliminated, the train bottom image segmentation and bolt identification precision is improved, and the aim of improving the accuracy of a bolt loss detection result is finally achieved.
In the train-bottom bolt-missing recognition environment, where contamination, illumination and similar factors cause large differences between the image to be detected and the original image, the SGMNet registration algorithm of the detection method and system can extract more feature points in such scenes; at the same time, the seeded graph neural network improves feature-matching efficiency, gives stronger matching capability, and is more robust than traditional matching algorithms. Combined with the inspection robot's configuration tool, recognition of any point position and any region can be realized; equipment information can be bound flexibly, and the missing-bolt information is output in structured form, giving the system strong practicability and wide applicability.
Drawings
FIG. 1 is a flow chart of an embodiment of the method of the present invention;
FIG. 2 is a preset reference image (FIG. a) and a patrol inspection image-image to be inspected (FIG. b) acquired by a robot;
FIG. 3 is a segmentation mask diagram of a predetermined reference picture;
FIG. 4 is a diagram of the feature extraction and matching effect of a preset reference image and a patrol inspection image;
FIG. 5 shows the inspection chart correction (chart c) and the segmentation effect (chart d);
fig. 6 is a diagram of a calculation result of the bolt segmentation mask IoU of the preset reference map and the patrol map.
Detailed Description
The invention is described in detail below with reference to the figures and the embodiments.
A loss detection system of fastening bolts of equipment under a train consists of an inspection robot unit, an image correction unit, an image segmentation unit and a comparison unit.
The detection method is shown in fig. 1 and comprises the following steps:
s1, respectively acquiring an image to be detected and a preset reference image corresponding to a bolt at the same part of equipment to be detected by an inspection robot in a fixed-point inspection shooting mode, wherein the image to be detected and the preset reference image are shown in figure 2; the preset reference image is the front visual angle of the bolt, and the details are clear and can reflect the bolt characteristics.
And S2, based on the matching relation between the preset reference image and the image to be detected, correcting the image to be detected by utilizing perspective transformation to obtain a corrected image.
Because navigation and gimbal positioning may be inaccurate during the inspection robot's fixed-point shooting, uncertain deviations in scale, angle and position exist between the image to be detected and the preset reference image. Therefore, before bolt segmentation is performed on the image to be detected, the image to be detected of the part to be inspected must first be registered with the preset reference image; the image to be detected is then corrected by perspective transformation to obtain a corrected image, further aligning the pixels at corresponding positions in the two images;
and after registration, the region corresponding to the bolt area to be checked is extracted from the image to be detected and the preset reference image and used as the bolt-missing detection area of the image to be detected. The perspective transformation resolves the viewing-angle problem, also mitigates occlusion, blur and exposure problems in a small number of areas, and makes it easier to identify missing bolts effectively in the area to be detected later on.
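As an illustration of how a perspective (projective) transformation maps pixels, the following pure-NumPy sketch applies a 3x3 homography to point coordinates. The patent does not specify tooling; in practice the homography would typically be estimated from the SGMNet matches (for example with OpenCV's `cv2.findHomography`) and the full image warped with `cv2.warpPerspective`.

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 pixel coordinates through a 3x3 homography (perspective transform)."""
    pts = np.asarray(pts, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coordinates
    mapped = homo @ H.T
    return mapped[:, :2] / mapped[:, 2:3]            # back to Cartesian

# Example: a pure-translation homography shifts every point by (tx, ty) = (5, -2).
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
```

The general 3x3 case additionally encodes rotation, scale and perspective foreshortening, which is what corrects the deviations listed above.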
The image registration algorithm is SGMNet, which introduces a "Seeding Module" on the basis of SuperGlue, replaces SuperGlue's fully connected graph with a sparse connection graph, and designs a seeded graph neural network with an attention mechanism.
The method comprises the following steps: first, attention pooling gathers the feature information of all seed matching points; then a "Seeding Filter" exchanges information between matching points within and across the images and suppresses the influence of mismatched points; finally, attention unpooling maps the feature information, after sufficient context interaction, back to each matching point.
SuperGlue regards every keypoint in the preset reference image and the image to be detected as a node of a graph and connects all nodes with edges to form a fully connected graph. This fully connected scheme not only brings huge computational cost; experimental results also show that the connection weights of many edges are almost 0, making those connections redundant.
Therefore, SGMNet introduces a "seed mechanism" on the basis of SuperGlue: instead of processing all matching points, some points are selected from among them as seeds (candidate points), and the graph is then built over these seeds.
First, feature points are extracted and described using methods such as SIFT, ORB or SuperPoint, and position information is then fused in to obtain the initial feature vectors.
In formula (1), A and B respectively denote the preset reference image and the image to be detected; f_i^I is the feature description vector corresponding to the i-th matching point in image I, and p_i^I is the coordinate position of the i-th matching point in image I.
Then the seeding module is constructed: first, seed matching points are generated from the initial features by nearest-neighbour matching; at the same time, following the ratio test of the SIFT algorithm, the reciprocal of the ratio between the nearest-neighbour and second-nearest-neighbour distances for the same point is used as the reliability score of the match; finally, non-maximum suppression is applied to obtain better seed points.
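A toy sketch of the seed selection just described, under one reading of the reliability score: the second-nearest distance over the nearest distance (larger means more distinctive). Non-maximum suppression is omitted, and all names are illustrative:

```python
import numpy as np

def seed_matches(desc_a, desc_b, max_seeds=2):
    """Nearest-neighbour matching with a SIFT-style ratio reliability score.

    Reliability is taken here as d2/d1 (second-nearest over nearest distance);
    a higher value means the nearest match is more distinctive.
    """
    # pairwise Euclidean distances between descriptor sets
    dist = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    order = np.argsort(dist, axis=1)
    nn, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_a))
    d1, d2 = dist[rows, nn], dist[rows, second]
    score = d2 / np.maximum(d1, 1e-12)       # reliability of each match
    keep = np.argsort(-score)[:max_seeds]    # keep the most reliable seeds
    return [(int(i), int(nn[i])) for i in keep]
```

With two distinctive descriptors, both nearest-neighbour pairs survive as seeds.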
The seeding module outputs a matching sequence S = (S_A, S_B), where S_I is an index list of the matching points in image I. The seed matching point sequence S and the corresponding feature vectors F_A and F_B are input to the seeded graph neural network SGMNet.
The core methods of SGMNet are the weighted attention aggregation method, attention pooling, and the seeding filter.
In the weighted attention aggregation method, let X be the m feature vectors to be updated in a d-dimensional feature space and Y the n feature vectors providing information. The weights and the update are calculated as formula (2):

Att(X, Y) = w V,  w = softmax(Q K^T / sqrt(d))   (2)

where softmax normalizes the weights, Q is a linear projection of X, and K and V are linear projections of Y. After attention aggregation, the updated X aggregates the information from Y, while each weight in w reflects the degree of influence of each vector in Y on X.
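Interpreting formula (2) as standard scaled dot-product cross-attention (Q from X; K and V from Y), a NumPy sketch of the weighted attention aggregation might look like this; the identity projection matrices in the test are hypothetical stand-ins for the learned linear projections:

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def weighted_attention(X, Y, Wq, Wk, Wv):
    """Update the m vectors in X with information aggregated from the n vectors in Y.

    Q = X Wq, K = Y Wk, V = Y Wv; row i of w says how much each vector of Y
    influences vector i of X.
    """
    Q, K, V = X @ Wq, Y @ Wk, Y @ Wv
    d = Q.shape[-1]
    w = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # m x n attention weights
    return w @ V, w                             # updated X, and the weights
```

Each row of `w` sums to 1, so the update is a convex combination of the (projected) vectors of Y.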
Attention pooling means that firstly, according to the seed matching sequence S, the feature vectors corresponding to the seed matching points are retrieved from the initial feature vectors, and the following formula (3) is adopted:
then, feature information of the matching points is aggregated in each graph respectively, and the following formula (4) is calculated:
The seed matching point feature vectors from the preset reference image and the image to be detected are then concatenated and input to a multilayer perceptron, calculated as formula (5):
The output of formula (5) encodes the visual and positional context information of the preset reference image and the image to be detected together with the feature information of each seed matching point, and enters the seeding filter as input.
The seeding filter first performs information interaction between seed matching points within and across the graphs. Intra-graph interaction refers to the information exchange among matching points within the preset reference image and within the image to be detected respectively, calculated as formula (6):
Inter-graph interaction refers to the information exchange between matching points of the reference image and those of the image to be detected, calculated as formula (7):
The seeding filter uses a single context-normalization branch to calculate the correct-match score γ for each seed matching point, as in formula (8):
the final output of the seed filter is the feature vector corresponding to the seed matching pointAnd a score y for each pair of matching points.
The last step is an attention unpooling operation: the feature vectors of the seed matching points, weighted by the score γ, update the initial input feature vector F_I^t to obtain the input F_I^(t+1) of the next-layer graph neural network, as calculated in formula (9):
After N layers of the seeded graph neural network, the final output feature vectors F_A^N and F_B^N are obtained; the matching matrix M is then computed with the Sinkhorn algorithm, as in formulas (10)-(11):
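A minimal Sinkhorn sketch in the spirit of formulas (10)-(11): alternating row and column normalization of exp(scores) drives the matching matrix M toward a doubly stochastic matrix. SuperGlue-style matchers additionally append a "dustbin" row and column for unmatched points, which this sketch omits:

```python
import numpy as np

def sinkhorn(scores, n_iters=200):
    """Approximate a doubly stochastic matching matrix M from a score matrix."""
    M = np.exp(scores)
    for _ in range(n_iters):
        M /= M.sum(axis=1, keepdims=True)  # row normalization
        M /= M.sum(axis=0, keepdims=True)  # column normalization
    return M
```

For a square positive matrix the iteration converges, so both the row sums and the column sums approach 1.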
Although the seeded graph neural network already has strong potential matching identification capability based on the initial seeds, a more accurate matching result can be obtained by reselecting seed matching points on the basis of the previous matching and matching again.
Specifically, from the matching matrix M obtained above, points that attain the maximum value in both their row and their column are selected as candidates, the first k points with the largest matching values are taken as seed matching points, and the matching operation is performed again. In the concrete network implementation, the seeded graph neural network used in the initialization stage consists of 6 layers, and the one used in the refinement stage consists of 3 layers. The loss function of the network contains two parts, as in formula (12):
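The reselection rule just described (mutual row/column maxima, then the top k by matching value) can be sketched as:

```python
import numpy as np

def reselect_seeds(M, k=2):
    """Pick entries that are both the row-argmax and the column-argmax of the
    matching matrix M (mutual best matches), then keep the k largest scores."""
    rows = np.arange(M.shape[0])
    col_best = M.argmax(axis=1)                   # best column for each row
    mutual = rows == M.argmax(axis=0)[col_best]   # row is also best for that column
    cand = [(i, col_best[i], M[i, col_best[i]]) for i in rows[mutual]]
    cand.sort(key=lambda t: -t[2])                # rank candidates by matching value
    return [(int(i), int(j)) for i, j, _ in cand[:k]]
```

In the example below, (2, 1) is the row maximum of row 2 but not the column maximum of column 1, so it is dropped.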
the calculation process of the lasign is as the following formula (13):
l_m represents the set of matching points, and l_uA and l_uB represent the sets of unmatched points in the two graphs respectively, where "unmatched" means points that have no corresponding match in the other graph owing to occlusion, angle change and the like. L_weight represents the classification cross-entropy loss between correct and incorrect matching points in the t-th layer of the network. A pair of matching points is classified as correct if its epipolar distance is below a threshold.
And S3, identifying the bolt positions in the corrected image by adopting an image segmentation model.
With the deep-learning-based bolt image segmentation method, the bolt foreground can still be extracted stably from the to-be-detected area under adverse conditions such as rust, poor image quality, oil stains and shadows, while the influence of a complex background on bolt-missing detection is eliminated.
First, images containing the bolt area are acquired and the bolts in them are annotated interactively to form a training set and a validation set; annotation can be done with the EISeg tool, a visual interactive image segmentation annotation tool.
Second, before model training an offline copy-paste style data enhancement with random occlusion of images is adopted to improve the generalization ability of the segmentation model; during training, online data enhancement is also applied, including but not limited to image contrast enhancement, HSV colour-space transformation, scale change, perspective transformation and random rotation. Multi-scale encoding and multi-layer MLP decoding are adopted so that the model fits the small bolt targets better and segmentation precision improves.
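The offline random-occlusion enhancement might be sketched as follows; the patch count, patch size, and zero fill value are assumptions, not from the patent:

```python
import numpy as np

def random_occlude(image, n_patches=3, patch=8, rng=None):
    """Black out a few random rectangles so the segmentation model learns to
    tolerate partially hidden bolts (cutout-style occlusion sketch)."""
    if rng is None:
        rng = np.random.default_rng()
    out = image.copy()
    h, w = out.shape[:2]
    for _ in range(n_patches):
        y = rng.integers(0, max(h - patch, 1))
        x = rng.integers(0, max(w - patch, 1))
        out[y:y + patch, x:x + patch] = 0  # assumed fill value
    return out
```

The input image is left untouched; only the returned copy is occluded.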
And finally, finishing training when the verification result of the verification set meets the finishing condition, and storing the segmentation model parameters.
The image segmentation model is SegNeXt, which proposes that convolutional attention is a more efficient way to encode context information than the self-attention mechanism in a Transformer. Instead of applying a large-kernel convolution directly, a standard large-kernel convolution is decomposed into a depth-wise convolution, a depth-wise dilated convolution and a point-wise convolution, replacing the attention mechanism of the Transformer model. The segmentation network comprises encoding and decoding stages. In the encoding stage, the large-kernel convolution is replaced by simple element-wise multiplication combined with three parallel multi-scale strip convolutions, providing multi-scale features and thereby realizing spatial attention.
And a decoding stage: global information is further extracted using multi-level features from different stages and using the Hamburger method. The segmentation model can acquire multi-scale context information from local to global, realizes adaptability on space and channels, and aggregates information of various levels from low to high.
The core of SegNeXt is a convolutional encoder based on a conventional pyramid structure. Each building block uses a structure similar to ViT but replaces self-attention with multi-scale convolutional attention (MSCA). Thanks to large-kernel convolution decomposition, a multi-branch parallel architecture, and a VAN-like attention mechanism, MSCA has a large receptive field, multi-scale information, and adaptivity. Concretely, element-wise matrix multiplication takes the place of positional encoding; the calculation is as formulas (14)-(15):
Here F is the input feature; Att and Out are the attention map and the output respectively, and Out is obtained by multiplying Att and F element by element. DW-Conv is a depth-wise convolution; Scale_0 is the shortcut branch, and Scale_i (i = 1, 2, 3) are strip-convolution branches with different kernel scales of 7, 11 and 21. Multiple MSCA blocks are stacked to form the convolutional encoder (MSCAN). MSCAN's design is similar to previous ViT backbones: it is divided into four stages, each downsampling by a factor of two, from 1/4 to 1/32.
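A heavily simplified MSCA sketch in the shape of formulas (14)-(15): multi-scale depth-wise strip convolutions produce an attention map that multiplies the input element-wise. Uniform box kernels stand in here for the learned 1xk and kx1 kernels, and the 5x5 pre-convolution and 1x1 channel-mixing convolution of the real block are omitted:

```python
import numpy as np

def dw_strip_conv(x, k):
    """Depth-wise strip convolution sketch: a 1xk then kx1 uniform (box) filter
    per channel, with edge padding; x has shape (C, H, W)."""
    C, H, W = x.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad)), mode='edge')  # 1xk pass
    h = np.zeros_like(x)
    for i in range(k):
        h += xp[:, :, i:i + W]
    h /= k
    hp = np.pad(h, ((0, 0), (pad, pad), (0, 0)), mode='edge')  # kx1 pass
    v = np.zeros_like(x)
    for i in range(k):
        v += hp[:, i:i + H, :]
    return v / k

def msca(x, scales=(7, 11, 21)):
    """Simplified MSCA: Att = shortcut + multi-scale strips; Out = Att * F."""
    att = x.copy()                      # Scale_0 shortcut branch
    for k in scales:
        att = att + dw_strip_conv(x, k)
    return att * x                      # element-wise multiplication, eq. (14)-(15)
```

On a constant input, each box filter reproduces the input, so the attention map is 4x the input and the output is 4x the input squared.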
Because a small convolution kernel has a small receptive field while a large one has an explosive number of parameters, large-kernel convolutions with few parameters are used here. In downstream tasks the benefit of a large kernel is more obvious than that of a small one: limited by its receptive field, a small kernel usually struggles to capture global information, which hurts semantic segmentation, with typically poor recognition at object-background boundaries and on large targets. By combining kernels of different sizes, a CNN can obtain feature information at different scales, converges faster than a Transformer, and the computation can be further reduced with depth-wise and asymmetric convolutions.
A decoder: the outputs of the last three stages are concat first and then fed into a lightweight Hamburger to further model the global context. The Hamburger structure adopts a matrix decomposition mode to carry out global spatial domain information modeling. The lightweight Hamburger model shows excellent performance in attention-based semantic segmentation and large-scale image generation, wherein the modeling global information of the mechanism has a determining function. The use of a lightweight decoder in conjunction with a powerful convolutional encoder can improve computational efficiency.
The network is trained with the MMSegmentation toolbox using the AdamW optimizer, with a reduced learning rate applied to the parameters of the large-kernel asymmetric convolutions.
Fig. 3 shows a segmentation mask map of the preset reference image.
Fig. 4 shows the feature extraction and matching results between the preset reference image and the inspection image.
And S4, confirming the missing-bolt result for each bolt point to be detected, based on the bolt position information in the segmentation mask of the preset reference image and the bolt position information in the corrected image.
Fig. 5 shows the corrected inspection image (image c) and its segmentation result (image d).
First, the circumscribed rectangle of each bolt in the bolt segmentation mask of the preset reference image is calculated. Then all circumscribed rectangle areas are traversed: for each rectangle, an IoU score between the preset reference image and the image to be detected is computed, and when the IoU is smaller than a specified threshold, the bolt at that position is judged to be missing.
Fig. 6 shows the IoU calculation results for the bolt segmentation masks of the preset reference image and the inspection image.
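The missing-bolt decision of step S4 can be sketched as follows. This is a simplified illustration, not the patent's implementation: the rectangles and the 0.5 threshold are assumptions (the patent only specifies "a specified threshold"), and the IoU is computed between the two binary segmentation masks restricted to each circumscribed rectangle.

```python
import numpy as np

def mask_iou_in_box(ref_mask, det_mask, box):
    """IoU of two binary masks restricted to a box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    a = ref_mask[y1:y2, x1:x2].astype(bool)
    b = det_mask[y1:y2, x1:x2].astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def missing_bolts(ref_mask, det_mask, boxes, thr=0.5):
    """Indices of reference bolt rectangles whose mask IoU falls below the threshold."""
    return [i for i, box in enumerate(boxes)
            if mask_iou_in_box(ref_mask, det_mask, box) < thr]
```

A bolt present in the reference mask but absent from the inspection mask yields an IoU near zero inside its rectangle, so it is flagged as missing; an unchanged bolt yields an IoU near one.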
S5, a bolt information base configured on the inspection robot unit stores the equipment information corresponding to each preset reference image; based on the missing-bolt result, the inspection robot unit outputs the equipment information of the missing bolt and outputs the missing-bolt information for the image to be detected.
The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It should be understood by those skilled in the art that the above embodiments do not limit the present invention in any way, and all technical solutions obtained by using equivalent alternatives or equivalent variations fall within the scope of the present invention.
Claims (9)
1. A method for detecting the loss of fastening bolts of equipment under a train, characterized by comprising the following steps:
s1, acquiring an image to be detected and a preset reference image corresponding to a bolt at the same part of equipment to be detected;
s2, carrying out image registration on the image to be detected and a preset reference image, and correcting the image to be detected by utilizing perspective transformation to obtain a corrected image;
s3, identifying and correcting the position of the bolt in the image by adopting an image segmentation model;
and S4, confirming the missing-bolt result for the bolt point to be detected based on the bolt position information in the segmentation mask of the preset reference image and the bolt position information in the corrected image.
2. The detection method according to claim 1, characterized in that fixed-point inspection shooting is adopted to obtain the images to be detected and the preset reference images corresponding to bolts at different parts of the equipment to be detected.
3. The detection method according to claim 1, wherein the image registration in step S2 employs an SGMNet algorithm.
4. The detection method according to claim 1, wherein the image segmentation model in step S3 comprises a SegNeXt model.
5. The detection method according to claim 1, wherein the missing-bolt confirmation in step S4 comprises the following steps:
a1, calculating a circumscribed rectangle of each bolt position in a preset reference image segmentation mask image;
a2, traversing all the circumscribed rectangular areas, and presetting an IoU score of each circumscribed rectangle of the reference image and the image to be detected;
and A3, judging that the bolt at the position is missing when the IoU is smaller than a specified threshold value.
6. The detection method according to claim 1, wherein the preset reference image in step S1 is a front view of a bolt.
7. A detection system suitable for the detection method according to any one of claims 1 to 6, characterized by comprising an inspection robot unit, an image correction unit, an image segmentation unit, and a comparison unit;
the inspection robot is used for acquiring images to be detected and preset reference images corresponding to bolts at different parts of equipment to be detected by adopting a fixed-point inspection shooting mode and transmitting the images to the image correction unit;
the image correction unit is used for carrying out image registration on the image to be detected and a preset reference image and correcting the image to be detected to obtain a corrected image;
the image segmentation unit is used for identifying bolt position information in a segmentation mask image of an image to be detected and a preset reference image;
the comparison unit is used for confirming the bolt missing result of the bolt point position to be detected through the bolt position information in the segmentation mask image of the image to be detected and the preset reference image.
8. The detection system of claim 7, wherein the inspection robot unit is further provided with a bolt information base comprising equipment information corresponding to the preset reference images;
and based on the bolt missing result, the inspection robot unit outputs the equipment information of the missing bolt.
9. The detection system according to claim 8, wherein the inspection robot unit outputs the missing-bolt information for the image to be detected.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211546124.0A CN115546223A (en) | 2022-12-05 | 2022-12-05 | Method and system for detecting loss of fastening bolt of equipment under train |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115546223A true CN115546223A (en) | 2022-12-30 |
Family
ID=84722653
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211546124.0A Pending CN115546223A (en) | 2022-12-05 | 2022-12-05 | Method and system for detecting loss of fastening bolt of equipment under train |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115546223A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6830707B1 (en) * | 2020-01-23 | 2021-02-17 | Tongji University | Person re-identification method that combines random batch mask and multi-scale expression learning |
CN112419299A (en) * | 2020-12-04 | 2021-02-26 | 中冶建筑研究总院(深圳)有限公司 | Bolt loss detection method, device, equipment and storage medium |
CN112419298A (en) * | 2020-12-04 | 2021-02-26 | 中冶建筑研究总院(深圳)有限公司 | Bolt node plate corrosion detection method, device, equipment and storage medium |
CN112419297A (en) * | 2020-12-04 | 2021-02-26 | 中冶建筑研究总院(深圳)有限公司 | Bolt looseness detection method, device, equipment and storage medium |
2022-12-05: Application CN202211546124.0A filed; publication CN115546223A (en), status: Pending
Non-Patent Citations (1)
Title |
---|
HONGKAI CHEN: "Learning to Match Features with Seeded Graph Matching Network" * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117095158A (en) * | 2023-08-23 | 2023-11-21 | 广东工业大学 | Terahertz image dangerous article detection method based on multi-scale decomposition convolution |
CN117095158B (en) * | 2023-08-23 | 2024-04-26 | 广东工业大学 | Terahertz image dangerous article detection method based on multi-scale decomposition convolution |
CN117333491A (en) * | 2023-12-01 | 2024-01-02 | 北京航空航天大学杭州创新研究院 | Steel surface defect detection method and system |
CN117333491B (en) * | 2023-12-01 | 2024-03-15 | 北京航空航天大学杭州创新研究院 | Steel surface defect detection method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111340797B (en) | Laser radar and binocular camera data fusion detection method and system | |
CN110992349A (en) | Underground pipeline abnormity automatic positioning and identification method based on deep learning | |
CN115546223A (en) | Method and system for detecting loss of fastening bolt of equipment under train | |
CN113205466A (en) | Incomplete point cloud completion method based on hidden space topological structure constraint | |
CN111832484A (en) | Loop detection method based on convolution perception hash algorithm | |
CN115482195B (en) | Train part deformation detection method based on three-dimensional point cloud | |
CN114049356B (en) | Method, device and system for detecting structure apparent crack | |
CN113344852A (en) | Target detection method and device for power scene general-purpose article and storage medium | |
CN116229052B (en) | Method for detecting state change of substation equipment based on twin network | |
CN115147418B (en) | Compression training method and device for defect detection model | |
CN108133235A (en) | A kind of pedestrian detection method based on neural network Analysis On Multi-scale Features figure | |
CN113111875A (en) | Seamless steel rail weld defect identification device and method based on deep learning | |
CN111563896A (en) | Image processing method for catenary anomaly detection | |
CN112288700A (en) | Rail defect detection method | |
CN114092478B (en) | Anomaly detection method | |
CN114648669A (en) | Motor train unit fault detection method and system based on domain-adaptive binocular parallax calculation | |
Le et al. | Surface defect detection of industrial parts based on YOLOv5 | |
CN114549414A (en) | Abnormal change detection method and system for track data | |
CN116778346B (en) | Pipeline identification method and system based on improved self-attention mechanism | |
CN117422695A (en) | CR-deep-based anomaly detection method | |
CN114596244A (en) | Infrared image identification method and system based on visual processing and multi-feature fusion | |
CN112800934A (en) | Behavior identification method and device for multi-class engineering vehicle | |
CN112215301A (en) | Image straight line detection method based on convolutional neural network | |
CN116206222A (en) | Power transmission line fault detection method and system based on lightweight target detection model | |
CN115880660A (en) | Track line detection method and system based on structural characterization and global attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2022-12-30