CN116757990A - Railway fastener defect online detection and identification method based on machine vision - Google Patents


Info

Publication number
CN116757990A
CN116757990A
Authority
CN
China
Prior art keywords
image
fastener
pixel
edge
formula
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310029435.8A
Other languages
Chinese (zh)
Inventor
Liang Nan (梁楠)
Chen Xiaolei (陈晓雷)
Song Xiaohui (宋晓辉)
Zhang Pei (张培)
Sun Chaowei (孙超伟)
Zhang Wei (张伟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Academy Of Sciences Institute Of Applied Physics Co ltd
Original Assignee
Henan Academy Of Sciences Institute Of Applied Physics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Academy Of Sciences Institute Of Applied Physics Co ltd filed Critical Henan Academy Of Sciences Institute Of Applied Physics Co ltd
Priority to CN202310029435.8A priority Critical patent/CN116757990A/en
Publication of CN116757990A publication Critical patent/CN116757990A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20028Bilateral filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a machine-vision-based method for online detection and identification of railway fastener defects, belonging to the technical field of railways. The method comprises the following steps. System initialization and camera trigger setup: acquire a two-dimensional image of the high-speed railway fastener area, set the trigger condition for the next camera shot from a prior value, perform edge detection on the next acquired frame, match it after translation against a standard image to find the optimal pixel offset, and correct the preset shooting trigger condition; this mechanism ensures that the fastener to be detected lies in the optimal imaging area. Image preprocessing: apply median filtering to the captured two-dimensional image, then bilateral filtering to the median-filtered image to improve imaging quality. Fastener defect detection: perform binarized image segmentation on the preprocessed image, then template matching to judge whether the fastener to be detected has defects. Fastener defect identification: identify the defective fastener image with a lightweight neural network. The invention substantially reduces the cost and computational load of existing fastener detection and identification methods, makes up for the shortcomings of existing detection methods, and has low cost and strong practicability.

Description

Railway fastener defect online detection and identification method based on machine vision
Technical Field
The invention relates to the technical field of railway equipment overhaul, in particular to a railway fastener defect online detection and identification method based on machine vision.
Background
Railway fasteners are an important component of railway systems; their primary function is to fix the rails to the sleepers and prevent rail migration. Whether the fasteners are in normal condition directly affects railway operation safety. Although fastener technology has made great progress in recent years, as railways develop toward higher speeds and heavier loads, fastener fracture, dislocation and loss still occur under the action of various factors, as may visible surface damage such as cracks and partial defects. Fastener defects can affect normal vehicle operation and, in extreme cases, cause serious safety accidents such as train derailment, resulting in huge loss of life and property. Regularly inspecting and effectively evaluating the health of railway fasteners, and promptly repairing or replacing defective ones, is therefore an important task in daily railway maintenance. Because existing railways have high utilization rates, the maintenance "skylight" windows in which repairs can be carried out are increasingly short, and accurately judging fastener condition within limited time has become a practical problem for track maintenance departments.
To maintain the safety of railway transportation, China invests a great deal of manpower and material resources in railway inspection. The traditional manual patrol method is still used to check the working condition of fasteners; although it maintains railway safety to the greatest extent, patrol efficiency is low and labor costs are high.
With the development of machine vision technology, some departments have adopted three-dimensional-image-based machine vision inspection to detect fastener surface defects. The drawbacks of this approach are obvious, and the following problems exist in its application to fastener detection:
the operation cost of existing track inspection vehicles is extremely high; their on-board computers can hardly run high-complexity image processing algorithms, making real-time, efficient detection of fastener faults difficult;
existing visual-image-based detection algorithms have difficulty locating fasteners effectively and accurately; most infer the fastener area indirectly from the positional relationship between sleeper and rail. Such methods make fastener positioning insufficiently accurate and make it hard to distinguish whether the fastener itself has a small displacement;
high-speed railways generally have a high bridge-tunnel ratio; in tunnels in particular, traditional positioning and signal transmission methods are difficult to apply, so faults are hard to locate accurately;
existing track inspection vehicles generally adopt a three-dimensional visual imaging scheme, forming three-dimensional images with light sources and a camera and identifying defects from the depth information of the inspected fastener; such a scheme easily loses texture features.
Disclosure of Invention
Therefore, the invention aims to provide an online machine-vision-based method for detecting and identifying railway fastener defects, which can run independently on a low-cost fastener detection vehicle on the one hand, and can be integrated into an existing track inspection vehicle as a functional module on the other.
In order to achieve the above purpose, the invention adopts the following technical scheme: a railway fastener defect online detection and identification method based on machine vision comprises the following steps:
s1) initializing a system, and setting a camera trigger mechanism, wherein the system comprises the following specific steps:
s11) calibrating the initial position of the fastener detection vehicle image:
the camera is subjected to nodding test to align the fastener, and the position for acquiring an ideal fastener image is taken as a reference position; coaxially installing an encoder on a track detection wheel shaft for detecting the actual running angle of the wheels of the track detection vehicle; the center distance between adjacent fasteners of a single track is set as a fixed value, and the effective radius of wheels of the track detection vehicle is set as follows; if the wheel runs for a circle, the actually measured pulse number of the encoder is that, the triggering condition of the camera shooting is that the pulse number of the encoder is detected, and the following conditions are satisfied:
/>
in the formula, the detection principle is that the turning radius of the rail can be regarded as infinity relative to the size of the rail detection vehicle, namely, the consistency of the central positions of adjacent rail fasteners is assumed to be good. In the actual program design, the track turning radius, that is, the actual difference between the distances between the adjacent fasteners on the left side and the right side of the rail, is considered, and the following formula can be adopted for calculation:
/>
the correction quantity when the trigger condition of shooting next time is represented can be preset according to the known road condition information, or on-line self-adaptive adjustment is carried out according to the offset quantity detected by the image edge, the left and right paths are sampled simultaneously, the fastener to be detected is guaranteed to be shot completely, and the consistency of shooting results is good;
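The trigger-condition arithmetic above can be sketched in a few lines. The fastener spacing (0.6 m), wheel radius (0.42 m) and encoder resolution (2000 pulses/rev) used below are illustrative values, not figures from the patent:

```python
import math

def trigger_pulse_count(spacing_m: float, wheel_radius_m: float,
                        pulses_per_rev: int, correction: int = 0) -> int:
    """Pulse count between camera triggers.

    The encoder emits pulses_per_rev pulses per wheel revolution, i.e.
    per 2*pi*r metres of travel, so one fastener spacing d corresponds
    to d * N0 / (2*pi*r) pulses. `correction` is the per-shot delta
    described in the text (preset or adapted from the image-edge offset).
    """
    base = spacing_m * pulses_per_rev / (2 * math.pi * wheel_radius_m)
    return round(base) + correction

# illustrative values: 0.6 m spacing, 0.42 m wheel radius, 2000 ppr
n = trigger_pulse_count(0.6, 0.42, 2000)
```

Each time a frame is evaluated, the offset found in step S12-5 would be fed back through the `correction` argument for the next trigger.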
s12) continuously correcting the triggering condition of the camera after the fastener detects the running of the vehicle:
in order to detect the offset degree of the acquired image in real time, the contour features of the whole fastener are guaranteed to be contained in the subsequent acquired image, the Canny edge detection algorithm is adopted to carry out edge extraction on the image acquired by the fastener detection vehicle, and the correction value is determined according to the pixel offset. The correction algorithm needs four steps of Gaussian denoising, image gradient and direction calculation, non-maximum value suppression and edge determination:
s12-1) carrying out Gaussian denoising on an original fastener image to be detected;
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))   (3)
for a two-dimensional image, it is necessary to discretize a continuous gaussian function represented by equation (3) for calculating gaussian kernels of different sizes. For one (2)k+1)*(2kThe calculation method of the element of the Gaussian kernel with the size of +1) is shown as a formula (4):
H(i, j) = (1 / (2πσ²)) · exp(−((i − k − 1)² + (j − k − 1)²) / (2σ²))   (4)
where i and j denote the coordinate position of an element within the Gaussian kernel; the value of σ varies with the size of the Gaussian kernel and is calculated by formula (5):
σ = 0.3 × (k − 1) + 0.8   (5)
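As a quick illustration of formulas (3) and (4), a discrete Gaussian kernel can be built with NumPy; normalising the weights to sum to 1 is a common convention assumed here, not stated in the text:

```python
import numpy as np

def gaussian_kernel(k: int, sigma: float) -> np.ndarray:
    """Discretise the 2-D Gaussian of formula (3) into a (2k+1)x(2k+1)
    kernel of formula (4), then normalise so the weights sum to 1."""
    ax = np.arange(-k, k + 1)
    xx, yy = np.meshgrid(ax, ax)
    kern = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return kern / kern.sum()

kern = gaussian_kernel(2, 1.0)  # 5x5 kernel, sigma = 1
```

Denoising is then an ordinary convolution of the image with this kernel.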
s12-2) carrying out image gradient and direction calculation on the image after Gaussian denoising;
s12-3) after the amplitude and the direction of the image gradient are obtained, carrying out non-maximum suppression on the image;
traversing each pixel point in the image, judging whether the current pixel point is the maximum value with the same gradient direction in the local area, determining whether to inhibit the point according to the judging result, and sequentially removing all non-edge points to achieve the refining effect on the edge;
determining pixel pointsCThe gray value of (2) is within the 8-value neighborhoodg1、g2、g3 sum ofg4 is four pixels therein) is the maximum value, and the straight line in fig. 2 represents a pixelCThe local maximum is located on this line, i.e. the determinationCWhether the gray value of the dot is greater thanTmp1 pointTmpGray value of 2 points. If it isCThe point gray value is smaller than any one of the two points, which is describedCThe points being other than local maxima and being excludedCThe points are edge points. Wherein the method comprises the steps ofTmp1 andTmpthe pixel values of the two points 2 are required to be obtained by interpolation calculation according to the pixels of the two adjacent points, and the interpolation calculation is shown as a formula (11) and a formula (12) respectively:
(11)
(12)
s12-4) carrying out edge extraction on the image subjected to non-maximum suppression;
After non-maximum suppression is completed, an image formed by the gradient local maxima is obtained, on which many discrete pixel points appear. A double-threshold method is therefore used to remove isolated pixels and connect the real edge points. Two thresholds are given: a high threshold TH and a low threshold TL. If the gradient value of the current edge pixel is greater than the high threshold, it is marked as a strong edge; if it lies between the low and high thresholds, it is marked as a weak edge to be provisionally retained; if it is less than the low threshold, it is suppressed. Each weak edge pixel's 8-neighborhood is further searched for strong edge pixels: if any exists, the weak pixel is kept, otherwise it is suppressed. The pixels finally retained constitute the edge information of the image;
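The double-threshold edge linking can be sketched as follows. This single-pass version checks only the immediate 8-neighbourhood of each weak pixel; a full implementation would propagate strong-edge connectivity iteratively:

```python
import numpy as np

def hysteresis_threshold(grad: np.ndarray, t_low: float,
                         t_high: float) -> np.ndarray:
    """Double-threshold edge linking after non-maximum suppression.

    Pixels with gradient >= t_high are strong edges; pixels between
    t_low and t_high (weak edges) survive only if at least one of
    their 8 neighbours is strong. Returns a boolean edge map.
    """
    strong = grad >= t_high
    weak = (grad >= t_low) & ~strong
    # OR together the 8 shifted copies of the strong map
    padded = np.pad(strong, 1)
    has_strong_nb = np.zeros_like(strong)
    h, w = grad.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            has_strong_nb |= padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return strong | (weak & has_strong_nb)
```

A weak pixel adjacent to a strong edge is kept; an isolated weak pixel is suppressed, which is exactly the behaviour the paragraph above describes.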
s12-5) judging an edge extraction result, and solving a trigger condition offset;
taking edge information extracted from the fastener image shot at the 1 st time as a reference value, and carrying out up-and-down translation on the image shot at the 2 nd time by taking 10 pixels as a unit to obtain an image sequence, wherein the image sequence represents image edge information obtained by up-shifting 10 pixels; representing image edge information acquired by downshifting 10 pixels; and by analogy, finding out the image with the highest matching degree through the method in the step 4, recording the offset of the image, and marking the offset as a correction basis, thereby ensuring the consistency of the fastener image acquired by the camera at the triggering moment;
s2) image preprocessing:
based on the consistency of high-speed railway fastener imaging pretreatment under the full consideration different backgrounds, the positioning and matching effects of the high-speed railway fasteners in the images are guaranteed, the images are required to be pretreated, and the method specifically comprises the following steps:
s21) median filtering: aiming at the irregular texture formed by the main frosting of the iron backing plate, and the high-frequency noise points distributed irregularly exist in the texture, the effect of binary segmentation can be greatly influenced, the step adopts median filtering to the original image, and the high-frequency signals represented by the salt-and-pepper noise are removed;
s22) bilateral filtering: the image after middle finger filtering is subjected to bilateral filtering, and after median filtering, high-frequency noise points in the original image are effectively restrained, but the image is blurred to a certain extent. In order to achieve the segmentation effect, the background needs to be blurred, and edge information in the image needs to be kept at the same time of blurring, so that after median filtering, bilateral filtering is adopted on the image. The bilateral filter simultaneously considers the spatial information and the pixel difference information of the pixels, and the larger the pixel difference is, the smaller the filtering weight of the position in the filtering core is, so that the edge information of the pixel change in the image is effectively protected. The output pixel value of the bilateral filter depends on the weighted combination of the neighborhood pixel values, and the calculation of the output pixel value is shown as a formula (13);
(13)
in the middle of (a)k,l) The representation is positioned at @i,j) Neighborhood of pixel points.w(i,j,k,l) Representing a weight coefficient whose value depends on the product of the domain kernel and the value domain kernel, the calculation method is as shown in the formula (14) -formula (16):
(14)
(15)
(16)
the bilateral filter consists of two functions, wherein one function is to determine the filter coefficient by the geometric space distance, the other function is to determine the filter coefficient by the pixel difference value, and the image after median filtering is processed by the bilateral filter; through the processing, the background is further blurred, the high-frequency noise point in the image is almost eliminated, and the edge of the image is effectively reserved;
s3) fault detection:
the step is fault detection, namely judging the preprocessed image, and judging whether the shot fastener contains fault information or not. Together comprising the following sequence: (1) Performing self-adaptive binarization segmentation on the preprocessed original image to remove background areas except high-speed rail fasteners; (2) Template matching is carried out around the segmented image, so that the positioning of the high-speed rail fastener is realized; (3) Returning to the original gray level image for similarity calculation so as to judge whether defects exist or not:
s31) adaptive binarized image segmentation: if the light source in the scene is stable and the target in the image is single, the standard OSTU method is adopted for image segmentation, but the actual scene can not fully meet the precondition, the interference of reflection angles of different positions of the surface of the target fastener, the shaking or movement offset of the light source and other factors can easily cause that even illumination is uniform, pixels of different areas of the same target in the image are divided into different categories, so that the target can not be completely segmented; therefore, the OTSU Ojin method is improved by combining a morphological method during binary segmentation, and the binary segmentation effect of the OTSU is effectively improved;
calculating the optimal threshold of the current image according to the Ojin methodTThen give threshold valueTAdding and subtracting a bias valuebGenerates a larger threshold valueT high And a smaller thresholdT low As shown in the formula (17) and the formula (18):
(17)
(18)
Using T_high and T_low respectively, perform binarization segmentation on the input image to generate two binarized maps f_high and f_low; apply a morphological dilation to f_high and then AND it with f_low to obtain the final binarized segmentation image. In the low-threshold map the fastener can be segmented completely, but under uneven illumination it is difficult to distinguish from the background; in the high-threshold map the segmented fastener is clearly separated from the background but may lose larger areas of the fastener region to the background. Combining the two with a morphological dilation yields a complete fastener segmentation that is clearly distinguishable from the background;
the essence of the binarization segmentation method is that pixel values in an image are divided intoL 1L 2L 3 Three stages of the method, namely three stages of the method,L 1L 2L 3 is defined as formula (19):
(19)
the binarization is performedL 1 Is set to 0, will belong toL 3 255, belonging toL 2 The pixels of the pixel (B) need to be judged by combining with the spatial information when the pixel belongs to the group consisting ofL 3 And (2) to the pixel of the pixel patternL 3 The class pixels are equally set to 255 and divided into the foreground. The algorithm can realize a better segmentation effect on the image of a single target;
s32) template matching: the template matching adopts a sliding window method, a window with the same size as the template image is slid in the image to be matched, the comparison between the image to be matched and the content in the window is taken out, the similarity between each window and the template image is calculated, and finally the position of the sub-image which is most similar to the content of the template image on the image to be matched is obtained through global comparison. The function used to calculate the similarity of two captured images is called a distance function, which is a major factor affecting the performance of the template matching algorithm. The algorithm adopts variance as a main basis for matching, and the calculation is shown as a formula (20):
R(x, y) = Σ₍x',y'₎ [T(x', y') − I(x + x', y + y')]²   (20)
where R(x, y) is the similarity of the template window starting at point (x, y) of the image to be matched; (x', y') are coordinates within the template window; T(x', y') is the pixel value of the template image at (x', y'); and I(x + x', y + y') is the pixel value of the image to be matched at (x + x', y + y'). The smaller the value of R(x, y), the higher the similarity; R(x, y) = 0 is a perfect match. Similarity calculation in normalized squared-difference form is also supported, as shown in formula (21):
R(x, y) = Σ [T(x', y') − I(x + x', y + y')]² / √( Σ T(x', y')² · Σ I(x + x', y + y')² )   (21)
the calculation formula using the correlation coefficient is shown in formula (22):
R(x, y) = Σ T'(x', y') · I'(x + x', y + y') / √( Σ T'(x', y')² · Σ I'(x + x', y + y')² )   (22)

where T' and I' denote the template and the window with their respective mean values subtracted.
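A minimal sliding-window matcher using the squared-difference score of formula (20) — a brute-force sketch; production code would typically use an optimised routine such as OpenCV's `matchTemplate`:

```python
import numpy as np

def match_template_ssd(image: np.ndarray, template: np.ndarray):
    """Slide a template-sized window over the image and return the
    (row, col) of the window with the smallest squared-difference
    score R of formula (20), together with that score."""
    ih, iw = image.shape
    th, tw = template.shape
    tpl = template.astype(float)
    best, best_pos = None, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            window = image[y : y + th, x : x + tw].astype(float)
            r = np.sum((tpl - window) ** 2)  # formula (20)
            if best is None or r < best:
                best, best_pos = r, (y, x)
    return best_pos, best
```

A score of 0 corresponds to an exact match, matching the text's statement that R(x, y) = 0 is the best match.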
s33) calculating the difference degree between the area where the fastener is positioned and the template, and judging. After the binarized segmentation image is obtained, template matching is carried out on the image, so that the specific position of the high-speed rail fastener can be positioned. After the specific position of the fastener is obtained, the area where the fastener is positioned can be calculated on the gray level image, and finally the difference degree between the area where the fastener is positioned and the template is calculated by utilizing the normalized variance on the gray level image, if the difference is larger than a fixed threshold value, the fastener is determined to be defective, the fault classification of the next step is needed, and otherwise, the fastener is a normal fastener;
s4) fastener defect identification: after the steps are completed, the detected defective fastener is classified and judged. Considering the relative position fixation between the fastener detection vehicle and the high-speed railway fastener, adopting a neural network model including but not limited to a YOLOv5s lightweight neural network model; performing fault classification on the defect fastener image processed in the step S3) through a YOLOv5 model, wherein the specific position and the fault type of the fault in the fastener can be detected in the step;
thus, the on-line defect detection of the fastener is completed, and when the fastener detection vehicle continues to move, photographing and detection are repeatedly triggered.
Further, in step S12-2), the image gradient and direction of the Gaussian-denoised image are calculated as follows: the gradient of the filtered image is computed with the Sobel operator, a discrete differential operator for edge detection comprising two different convolution factors, shown in formula (6), which convolve the input image in the horizontal and vertical directions respectively:
Sx = [ −1 0 1 ; −2 0 2 ; −1 0 1 ],  Sy = [ −1 −2 −1 ; 0 0 0 ; 1 2 1 ]   (6)
the process of calculating the gradient by the Sobel operator: the convolution factor shown in the formula (6) is adopted to carry out convolution operation on the image, and the calculation of the transverse gradient and the longitudinal gradient is respectively shown in the formula (7) and the formula (8):
Gx = Sx ∗ f(x, y)   (7)

Gy = Sy ∗ f(x, y)   (8)
where ∗ denotes the image convolution operation, and f(x, y) denotes the pixel value of the input two-dimensional image at coordinates (x, y). From formulas (7) and (8), the gradient magnitude G and gradient direction θ of the image are obtained, calculated by formulas (9) and (10):
G = √(Gx² + Gy²)   (9)
θ = arctan(Gy / Gx)   (10)
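Formulas (6)-(10) can be checked with a small NumPy sketch. It applies the Sobel factors by cross-correlation, which for these antisymmetric kernels differs from convolution only in sign and leaves the gradient magnitude unchanged:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def sobel_gradient(img: np.ndarray):
    """Apply the two Sobel factors of formula (6) and combine them into
    the gradient magnitude and direction of formulas (9) and (10)."""
    f = img.astype(float)
    h, w = f.shape
    p = np.pad(f, 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            sub = p[dy : dy + h, dx : dx + w]
            # cross-correlation; true convolution would flip each
            # kernel, only negating gx/gy for these antisymmetric ones
            gx += SOBEL_X[dy, dx] * sub
            gy += SOBEL_Y[dy, dx] * sub
    mag = np.sqrt(gx**2 + gy**2)
    theta = np.arctan2(gy, gx)
    return mag, theta
```

On a vertical step edge, gy vanishes and the direction θ is 0, i.e. the gradient points horizontally across the edge, as expected.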
in step S4), the YOLOv5 model is composed of a backhaul network, a Neck network and a Head network, the backhaul network uses a CSPDarknet53, a mich activation function and a Dropblock method, the convolution operation is used to downsample the input 3-channel image, then the image is subjected to the maximum pooling operation of 1*1, 5*5, 9*9 and 13 x 13 to extract features, and finally the feature images are combined by a Concat and input into the Neck network; to obtain higher image fusion characteristics, a Neck network is generally inserted between a backhaul network and an output network, and an SPP module and an FPN+PAN method are adopted in the Neck network; the SPP module performs pooling operation through convolution kernels with different sizes, and then combines feature result graphs with different sizes, so that the most important context features are obviously separated; the FPN+PAN mode is to transfer and fuse high-level characteristic information from top to bottom in an up-sampling mode to obtain a characteristic diagram capable of being predicted, and accordingly fault classification is carried out on the defect fastener image processed in the step S3) through a YOLOv5 model, and therefore the specific position and the fault type of faults in the fastener are detected.
The beneficial effects of the invention are as follows: only two-dimensional images are used to discriminate fastener defects such as fracture, dislocation and loss, while the positioning accuracy of the fastener is significantly improved and the detection rate and recognition rate of fastener defects are increased. The method has the following advantages:
the detection cost is low, the detection instantaneity is good, and the detection efficiency is high; the position of the fastener is effectively and accurately positioned, and the defect that whether the fastener has tiny displacement or not is difficult to distinguish by the existing algorithm is overcome; the fault can be accurately positioned; the defect that texture features are easy to lose by adopting three-dimensional visual imaging detection is avoided.
Aiming at the above problems, the invention studies an online detection and identification method capable of accurately detecting macroscopic fastener defects such as deformation, defect and crack, and of locating faulty fasteners, making up for the shortcomings of existing rail inspection vehicles in fastener detection function and performance; this addresses a technical problem in urgent need of solution and has important economic and social value.
Drawings
The technical features of the present invention will be further described with reference to the accompanying drawings and examples.
FIG. 1 is a schematic diagram of the magnitude and direction of an image gradient of the present invention.
Fig. 2 is a schematic view of image non-maximum suppression according to the present invention.
Fig. 3 is a flow chart of an adaptive thresholding algorithm according to the present invention.
Fig. 4 is a flow chart of the present invention.
Detailed Description
Referring to figs. 1-4, in one embodiment of the invention, a machine-vision-based method for online detection and identification of railway fastener defects is disclosed; as shown in fig. 4, it comprises the following steps:
s1: initializing a system, and setting a camera trigger mechanism:
in this step, two sequences are included: the method comprises the steps of (1) detecting the initial position calibration of a vehicle image by a fastener; (2) After the fastener detects the running of the vehicle, the triggering condition of the camera is continuously corrected.
S11, calibrating the initial image position of the fastener inspection vehicle
The camera is mounted facing downward and aligned with the fasteners, and the position at which an ideal fastener image can be acquired is taken as the reference position. An encoder is coaxially mounted on a wheel axle of the rail inspection vehicle to detect the actual rotation angle of the vehicle's wheels. Given the measured encoder pulse count for one full wheel revolution, the camera is triggered when the detected encoder pulse count satisfies:
(1)
where the multiplier is a natural number. In the program design, the track turning radius, slight wheel slip, camera vibration and small changes in fastener position, that is, the actual difference between the spacings of adjacent fasteners on the left and right sides of the rail, are taken into account and calculated by the following formula:
(2)
where the correction applied to the next photographing trigger condition can be preset according to known track information, or adaptively adjusted online according to the offset detected from the image edges. The left and right rails are sampled simultaneously, ensuring that the fastener to be inspected is photographed completely and that photographing results are consistent.
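The patent's trigger formulas (1) and (2) are not reproduced in this text. As a purely illustrative sketch (the symbols d, r, n_rev, m and delta below are assumptions, not the patent's notation), a spacing-based trigger rule can be modeled as: with fastener center distance d, wheel effective radius r and n_rev encoder pulses per wheel revolution, the m-th photograph fires after roughly m·d·n_rev/(2πr) pulses plus an online correction delta:

```python
import math

def trigger_pulses(d, r, n_rev, m, delta=0.0):
    """Hypothetical trigger rule (illustrative only, not the patent's
    formula): the encoder pulse count at which the m-th photograph is
    taken, assuming uniform fastener spacing d, wheel effective radius r,
    n_rev encoder pulses per wheel revolution, and an online correction
    delta derived from the detected image-edge offset."""
    pulses_per_fastener = d * n_rev / (2.0 * math.pi * r)
    return m * pulses_per_fastener + delta

# e.g. 0.6 m fastener spacing, 0.4 m wheel radius, 2000 pulses/rev
p1 = trigger_pulses(0.6, 0.4, 2000, 1)
```

Under these assumed symbols, the online correction of step S12 amounts to updating delta before each shot.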
And S12, continuously correcting the camera trigger condition after the fastener inspection vehicle starts running.
Further, the implementation method of step S12 includes the following steps:
The correction algorithm comprises four steps: Gaussian denoising, image gradient and direction calculation, non-maximum suppression, and edge determination.
S12-1, gaussian denoising is carried out on an original fastener image to be detected;
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))   (3)
For the original two-dimensional image, the continuous Gaussian function shown in formula (3) needs to be discretized to compute Gaussian kernels of different sizes. For a Gaussian kernel of size (2k+1)×(2k+1), each element is calculated as shown in formula (4):
H(i, j) = (1 / (2πσ²)) · exp(−((i − k − 1)² + (j − k − 1)²) / (2σ²)),  i, j = 1, 2, …, 2k + 1   (4)
where i and j represent the coordinate positions of the elements in the Gaussian kernel. The value of σ varies with the size of the Gaussian kernel and is calculated as shown in formula (5):
σ = 0.3 · ((ksize − 1) · 0.5 − 1) + 0.8,  ksize = 2k + 1   (5)
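A minimal NumPy sketch of the discretized Gaussian kernel of S12-1 (the fallback rule σ = 0.3·(k − 1) + 0.8 is an assumed size-dependent convention, not taken from the patent; the kernel is normalized so its weights sum to 1):

```python
import numpy as np

def gaussian_kernel(k, sigma=None):
    """Build a (2k+1) x (2k+1) Gaussian kernel by sampling the continuous
    2-D Gaussian on a grid centered at the kernel middle, then normalizing."""
    if sigma is None:
        sigma = 0.3 * (k - 1) + 0.8  # assumed size-dependent convention
    ax = np.arange(-k, k + 1)        # offsets, equivalent to (i - k - 1) in 1-based indexing
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return kernel / kernel.sum()     # normalize so the weights sum to 1

kernel = gaussian_kernel(2)          # a 5x5 kernel
```

Denoising then amounts to convolving the fastener image with this kernel.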
S12-2, performing image gradient and direction calculation on the Gaussian denoised image;
Then, the gradient of the filtered image is calculated using the Sobel operator. The Sobel operator is a discrete differential operator for edge detection; it comprises two groups of different convolution kernels, shown in formula (6), which perform horizontal and vertical convolution of the input image respectively:
S_x = [−1 0 1; −2 0 2; −1 0 1],  S_y = [−1 −2 −1; 0 0 0; 1 2 1]   (6)
the process of calculating the gradient by the Sobel operator: the convolution factor shown in the formula (6) is adopted to carry out convolution operation on the image, and the calculation of the transverse gradient and the longitudinal gradient is respectively shown in the formula (7) and the formula (8):
G_x(x, y) = S_x ∗ f(x, y)   (7)
G_y(x, y) = S_y ∗ f(x, y)   (8)
where ∗ denotes the image convolution operation and f(x, y) denotes the pixel value at coordinates (x, y) of the input two-dimensional image. From formulas (7) and (8), the gradient magnitude G and the gradient direction θ of the image are obtained, as calculated in formulas (9) and (10):
G = √(G_x² + G_y²)   (9)
θ = arctan(G_y / G_x)   (10)
The gradient magnitude and gradient direction of the image are shown in fig. 1.
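The Sobel gradient computation of S12-2 can be sketched as a NumPy toy example on a synthetic step edge (an illustrative sketch, not the patent's implementation):

```python
import numpy as np

# Sobel convolution kernels, as in formula (6)
SX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def filter3(img, kernel):
    """Apply a 3x3 kernel with 'valid' borders (no padding)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

img = np.zeros((5, 5)); img[:, 3:] = 255.0   # vertical step edge
gx = filter3(img, SX)                        # horizontal gradient, formula (7)
gy = filter3(img, SY)                        # vertical gradient, formula (8)
mag = np.hypot(gx, gy)                       # gradient magnitude, formula (9)
theta = np.arctan2(gy, gx)                   # gradient direction, cf. formula (10)
```

For this purely vertical edge the vertical gradient is zero everywhere and the direction is 0 along the edge, as expected.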
S12-3, after the amplitude and the direction of the image gradient are obtained, carrying out non-maximum suppression on the image;
After the magnitude and direction of the image gradient are obtained, non-maximum suppression is performed on the image: each pixel in the image is traversed, it is judged whether the current pixel is the maximum along its gradient direction within the local neighborhood, and the pixel is suppressed or kept according to the result. All non-edge points are removed in turn, achieving edge refinement; image non-maximum suppression is illustrated in fig. 2.
It is determined whether the gray value of pixel C is the maximum within its 8-neighborhood (g1, g2, g3 and g4 are four of the neighborhood pixels). The straight line in fig. 2 represents the gradient direction of pixel C; the local maximum must lie on this line, so it is determined whether the gray value of point C is greater than the gray values of points Tmp1 and Tmp2. If the gray value of C is smaller than either of these two points, C is not a local maximum and is excluded as an edge point. The pixel values of Tmp1 and Tmp2 are obtained by linear interpolation from the two adjacent pixels on each side, as shown in formulas (11) and (12):
Tmp1 = w · g1 + (1 − w) · g2   (11)
Tmp2 = w · g3 + (1 − w) · g4,  where the interpolation weight w is determined by the gradient direction   (12)
s12-4, carrying out edge extraction on the image subjected to non-maximum suppression;
After non-maximum suppression is completed, an image formed by the gradient local maxima is obtained, on which many discrete pixel points remain. A double-threshold method is therefore used to remove isolated pixel points and connect the real edge points. Two thresholds are given, a high threshold TH and a low threshold TL. If the gradient value of the current edge pixel is greater than the high threshold, it is marked as a strong edge; if the gradient value lies between the low and high thresholds, it is marked as a weak edge to be retained provisionally; if the gradient value is less than the low threshold, the pixel is suppressed. Each weak edge pixel is further checked for strong edge pixels in its 8-neighborhood: if one exists it is retained, otherwise it is suppressed. The pixels finally retained constitute the edge information of the image.
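The double-threshold edge determination of S12-4 can be sketched as follows (a NumPy sketch; the iterative 8-neighborhood promotion is one common way to realize the weak-edge check, not necessarily the patent's exact procedure):

```python
import numpy as np

def hysteresis(grad, tl, th):
    """Double-threshold edge determination: strong pixels (> th) are kept;
    weak pixels (tl..th) are kept only if connected to a kept pixel through
    their 8-neighborhood; everything else is suppressed."""
    strong = grad > th
    weak = (grad >= tl) & ~strong
    keep = strong.copy()
    changed = True
    while changed:                       # propagate until no weak pixel is promoted
        changed = False
        padded = np.pad(keep, 1)
        neigh = np.zeros_like(keep)
        h, w = keep.shape
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di or dj:             # OR of the eight shifted masks
                    neigh |= padded[1 + di:1 + di + h, 1 + dj:1 + dj + w]
        promote = weak & neigh & ~keep
        if promote.any():
            keep |= promote
            changed = True
    return keep

grad = np.array([[0., 5., 9.],
                 [0., 5., 0.],
                 [0., 0., 0.]])
edges = hysteresis(grad, tl=4.0, th=8.0)  # both weak 5s survive via the strong 9
```

An isolated weak pixel with no strong neighbor, by contrast, would be suppressed.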
S12-5, judging an edge extraction result, and solving a trigger condition offset;
Taking the edge information extracted from the fastener image captured the 1st time as the reference, the image captured the 2nd time is translated up and down in units of 10 pixels to obtain an image sequence: one element represents the edge information obtained by shifting up 10 pixels, another represents the edge information obtained by shifting down 10 pixels, and so on. The image with the highest matching degree is found by the method of step S12-4, its offset is recorded, and this offset is used as the correction basis. The purpose of this step is to ensure the consistency of the fastener images acquired by the camera at the trigger moments.
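The offset search of S12-5 can be sketched as follows (the overlap count used as the matching score is an assumption; the patent does not spell out the matching measure here):

```python
import numpy as np

def shift_rows(img, dy):
    """Shift image rows by dy (positive = down), filling vacated rows with zeros."""
    out = np.zeros_like(img)
    if dy > 0:
        out[dy:] = img[:-dy]
    elif dy < 0:
        out[:dy] = img[-dy:]
    else:
        out[:] = img
    return out

def best_offset(ref_edges, cur_edges, step=10, max_shift=50):
    """Return the vertical shift (a multiple of `step` pixels) that best aligns
    cur_edges with ref_edges, scored by the number of overlapping edge pixels."""
    offsets = range(-max_shift, max_shift + 1, step)
    scores = {dy: np.logical_and(ref_edges, shift_rows(cur_edges, dy)).sum()
              for dy in offsets}
    return max(scores, key=scores.get)

ref = np.zeros((100, 20), dtype=bool); ref[40] = True   # reference edge row
cur = np.zeros((100, 20), dtype=bool); cur[60] = True   # same edge, 20 px lower
dy = best_offset(ref, cur)                              # shifting up 20 px aligns them
```

The recorded dy then serves as the correction basis for the next trigger condition.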
It should be noted that this method is a mechanism for triggering the camera shutter reasonably; considering the actual computational load, engineering experience and prior knowledge can be incorporated to further reduce the computational workload.
S2: image preprocessing
The consistency of high-speed railway fastener image preprocessing under different backgrounds is fully considered to guarantee the positioning and matching effects of the fasteners in the images. Preprocessing comprises the following steps: (1) median filtering; (2) bilateral filtering.
S21: the method aims at the problem that the iron backing plate is mainly an irregular texture formed by frosting, and high-frequency noise points irregularly distributed in the texture can greatly influence the effect of binary segmentation. The method comprises the steps of firstly adopting median filtering to an original image to remove high-frequency signals such as salt and pepper noise.
S22: and carrying out bilateral filtering on the image subjected to the middle finger filtering. The calculation of the output pixel value is shown in formula (13):
g(i, j) = Σ_(k,l) f(k, l) · w(i, j, k, l) / Σ_(k,l) w(i, j, k, l)   (13)
where (k, l) denotes a pixel in the neighborhood of pixel (i, j), and w(i, j, k, l) is the weight coefficient, whose value is the product of the spatial-domain kernel and the range kernel; the calculation is shown in formulas (14)–(16):
d(i, j, k, l) = exp(−((i − k)² + (j − l)²) / (2σ_d²))   (14)
r(i, j, k, l) = exp(−(f(i, j) − f(k, l))² / (2σ_r²))   (15)
w(i, j, k, l) = d(i, j, k, l) · r(i, j, k, l)   (16)
The bilateral filter is composed of two functions, one determining the filter coefficients from the geometric spatial distance and the other from the pixel-value difference. The median-filtered image is then processed by the bilateral filter.
Through the processing, the background is further blurred, high-frequency noise points in the image are almost eliminated, and the edges of the image are effectively reserved.
S3: and judging whether the shot fastener contains fault information or not by judging the preprocessed image.
Further, the implementation method of the step S3 includes the following steps:
s31, self-adaptive binarization image segmentation:
The invention improves the traditional OTSU (Otsu) method in combination with a morphological method, improving the binarization segmentation effect of OTSU. The algorithm flow chart is shown in fig. 3. The optimal threshold T of the current image is calculated by the Otsu method, and a bias value b is then added to and subtracted from the threshold T to generate a larger threshold T_high and a smaller threshold T_low, as shown in formulas (17) and (18):
T_high = T + b   (17)
T_low = T − b   (18)
The input image is binarized with T_high and T_low respectively, generating two binary segmentation maps f_high and f_low. A morphological dilation operation is applied to f_high, which is then ANDed with the image f_low to obtain the final binary segmentation image. The essence of this binarization segmentation method is to divide the pixel values in the image into three classes L1, L2 and L3, defined as in formula (19):
L1 = {f(x, y) < T_low},  L2 = {T_low ≤ f(x, y) < T_high},  L3 = {f(x, y) ≥ T_high}   (19)
During binarization, the pixels belonging to L1 are set to 0 and the pixels belonging to L3 are set to 255; the pixels belonging to L2 are judged in combination with spatial information, and those connected to L3 pixels are set to 255 and divided into the foreground. This algorithm achieves a better segmentation effect on single-target images.
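The improved Otsu segmentation of S31 can be sketched as follows (NumPy only; the bias value b = 15 and the 3×3 dilation structuring element are illustrative assumptions):

```python
import numpy as np

def otsu_threshold(img):
    """Classic Otsu: pick the threshold maximizing between-class variance."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / (total - w0)
        var = w0 * (total - w0) * (m0 - m1) ** 2   # proportional to between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def dilate3(mask):
    """3x3 binary morphological dilation."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out |= p[1 + di:1 + di + mask.shape[0], 1 + dj:1 + dj + mask.shape[1]]
    return out

def improved_otsu_segment(img, b=15):
    t = otsu_threshold(img)
    f_high = img > (t + b)          # strong foreground, class L3
    f_low = img > (t - b)           # candidate foreground, L2 and L3
    return dilate3(f_high) & f_low  # AND after dilating the high-threshold map

# three-level toy image: background 40, dim fastener part 100, bright part 200
img = np.zeros((10, 10))
img[:, :5] = 40.0; img[:, 5] = 100.0; img[:, 6:] = 200.0
seg = improved_otsu_segment(img, b=15)
```

The dim column (an L2 region) is recovered into the foreground because it touches L3 pixels, which is exactly the spatial judgment described above.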
S32, template matching adopts a sliding-window method: a window of the same size as the template image is slid over the image to be matched, the content inside the window is compared with the template, the similarity between each window and the template image is calculated, and the position of the sub-image most similar to the template content on the image to be matched is finally found by global comparison. The algorithm adopts the squared difference as the main matching criterion, calculated as shown in formula (20):
R(x, y) = Σ_(x′,y′) (T(x′, y′) − I(x + x′, y + y′))²   (20)
where R(x, y) is the similarity of the template window whose origin is placed at point (x, y) of the image to be matched, (x′, y′) are the coordinates within the template window, T(x′, y′) is the pixel value at point (x′, y′) of the template image, and I(x + x′, y + y′) is the pixel value at point (x + x′, y + y′) of the image to be matched.
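The sliding-window squared-difference matching of S32 can be sketched as a brute-force NumPy version (rows/cols here play the roles of y and x in formula (20)):

```python
import numpy as np

def match_template_ssd(image, templ):
    """Slide templ over image; R(row, col) = sum of squared differences,
    as in formula (20). Returns the R map and the (row, col) of the best
    (minimum) match."""
    ih, iw = image.shape
    th, tw = templ.shape
    R = np.empty((ih - th + 1, iw - tw + 1))
    for y in range(R.shape[0]):
        for x in range(R.shape[1]):
            diff = image[y:y + th, x:x + tw] - templ
            R[y, x] = np.sum(diff * diff)
    best = np.unravel_index(np.argmin(R), R.shape)
    return R, best

img = np.zeros((8, 8)); img[3:5, 4:6] = 255.0   # fastener-like bright blob
tpl = np.full((2, 2), 255.0)
R, best = match_template_ssd(img, tpl)          # exact match gives R = 0
```

Smaller R means higher similarity; R = 0 marks a perfect match, as stated for formula (20) in the claims.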
S33, calculating the degree of difference between the region where the fastener is located and the template, and judging. After the binary segmentation image is obtained, template matching is performed on it, so that the specific position of the high-speed rail fastener can be located. The region where the fastener is located is then computed on the gray-scale image, and finally the degree of difference between this region and the template is calculated on the gray-scale image using the normalized variance; if the difference is greater than a fixed threshold, the fastener is judged to be defective and proceeds to the next fault-classification step, otherwise it is a normal fastener.
S4: fastener defect identification
After the above steps are finished, a YOLOv5s lightweight neural network model is adopted to classify and judge the detected defective fasteners.
The foregoing disclosure is merely illustrative of the present invention, but the embodiments of the present invention are not limited thereto, and any variations that can be considered by one skilled in the art should fall within the scope of the present invention.

Claims (3)

1. The railway fastener defect on-line detection and identification method based on machine vision is characterized by comprising the following steps of:
s1) initializing a system, and setting a camera trigger mechanism, wherein the system comprises the following specific steps:
s11) calibrating the initial position of the fastener detection vehicle image:
the camera is mounted facing downward and aligned with the fastener, and the position at which an ideal fastener image can be acquired is taken as the reference position; an encoder is coaxially mounted on a wheel axle of the rail inspection vehicle to detect the actual rotation angle of the vehicle's wheels; the center distance between adjacent fasteners of a single rail is a fixed value, and the effective radius of the inspection vehicle's wheels is known; given the measured encoder pulse count for one full wheel revolution, the camera is triggered when the detected encoder pulse count satisfies:
where the multiplier is a natural number; the detection principle is that the track turning radius can be regarded as infinite relative to the size of the rail inspection vehicle, that is, the center positions of adjacent rail fasteners are assumed to be highly consistent; in the actual program design, the track turning radius, that is, the actual difference between the spacings of adjacent fasteners on the left and right sides of the rail, is considered and calculated by the following formula:
where the correction applied to the next photographing trigger condition can be preset according to known track information, or adaptively adjusted online according to the offset detected from the image edges; the left and right rails are sampled simultaneously, ensuring that the fastener to be inspected is photographed completely and that photographing results are consistent;
S12) continuously correcting the camera trigger condition after the fastener inspection vehicle starts running:
in order to detect the offset degree of the acquired images in real time and ensure that subsequently acquired images contain the complete fastener contour features, edge extraction is performed on an image acquired by the fastener inspection vehicle using the Canny edge detection algorithm, and the correction value is determined from the pixel offset; the correction algorithm comprises four steps: Gaussian denoising, image gradient and direction calculation, non-maximum suppression and edge determination:
s12-1) carrying out Gaussian denoising on an original fastener image to be detected;
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))   (3)
for a two-dimensional image, the continuous Gaussian function shown in formula (3) needs to be discretized to compute Gaussian kernels of different sizes; for a Gaussian kernel of size (2k+1)×(2k+1), each element is calculated as shown in formula (4):
H(i, j) = (1 / (2πσ²)) · exp(−((i − k − 1)² + (j − k − 1)²) / (2σ²)),  i, j = 1, 2, …, 2k + 1   (4)
wherein i and j respectively represent the coordinate positions of the elements in the Gaussian kernel; the value of σ varies with the size of the Gaussian kernel and is calculated as shown in formula (5):
σ = 0.3 · ((ksize − 1) · 0.5 − 1) + 0.8,  ksize = 2k + 1   (5)
s12-2) carrying out image gradient and direction calculation on the image after Gaussian denoising;
s12-3) after the amplitude and the direction of the image gradient are obtained, carrying out non-maximum suppression on the image;
traversing each pixel in the image, judging whether the current pixel is the maximum along its gradient direction within the local neighborhood, suppressing or keeping the pixel according to the result, and removing all non-edge points in turn to achieve edge refinement;
determining whether the gray value of pixel C is the maximum within its 8-neighborhood (g1, g2, g3 and g4 are four of the neighborhood pixels); the straight line in fig. 2 represents the gradient direction of pixel C, and the local maximum must lie on this line, so it is determined whether the gray value of point C is greater than the gray values of points Tmp1 and Tmp2; if the gray value of C is smaller than either of these two points, C is not a local maximum and is excluded as an edge point; the pixel values of Tmp1 and Tmp2 are obtained by linear interpolation from the two adjacent pixels on each side, as shown in formulas (11) and (12);
Tmp1 = w · g1 + (1 − w) · g2   (11)
Tmp2 = w · g3 + (1 − w) · g4,  where the interpolation weight w is determined by the gradient direction   (12)
s12-4) carrying out edge extraction on the image subjected to non-maximum suppression;
after non-maximum suppression is completed, an image formed by the gradient local maxima is obtained, on which many discrete pixel points remain; a double-threshold method is therefore used to remove isolated pixel points and connect the real edge points; two thresholds are given, a high threshold TH and a low threshold TL; if the gradient value of the current edge pixel is greater than the high threshold, it is marked as a strong edge; if the gradient value lies between the low and high thresholds, it is marked as a weak edge to be retained provisionally; if the gradient value is less than the low threshold, the pixel is suppressed; each weak edge pixel is further checked for strong edge pixels in its 8-neighborhood, being retained if one exists and suppressed otherwise; the finally retained pixels constitute the edge information of the image;
s12-5) judging an edge extraction result, and solving a trigger condition offset;
taking the edge information extracted from the fastener image captured the 1st time as the reference, the image captured the 2nd time is translated up and down in units of 10 pixels to obtain an image sequence, in which one element represents the edge information obtained by shifting up 10 pixels, another the edge information obtained by shifting down 10 pixels, and so on; the image with the highest matching degree is found by the method of step S12-4), its offset is recorded and used as the correction basis, thereby ensuring the consistency of the fastener images acquired by the camera at the trigger moment;
s2) image preprocessing:
in order to fully account for the consistency of high-speed railway fastener image preprocessing under different backgrounds and to guarantee the positioning and matching effects of the high-speed railway fasteners in the images, the images need to be preprocessed, specifically comprising the following steps:
S21) median filtering: the surface of the iron tie plate is mainly an irregular frosted texture, and the irregularly distributed high-frequency noise points in this texture strongly affect the binary segmentation result; in this step, median filtering is applied to the original image to remove high-frequency signals typified by salt-and-pepper noise;
S22) bilateral filtering: bilateral filtering is applied to the median-filtered image; after median filtering, the high-frequency noise points in the original image are effectively suppressed, but the image is blurred to a certain extent; to achieve the segmentation effect, the background needs to be blurred while the edge information in the image is preserved, so bilateral filtering is adopted after median filtering; the bilateral filter considers both the spatial information and the pixel-value difference of the pixels, and the larger the pixel difference, the smaller the filtering weight at that position in the filter kernel, thereby effectively protecting the edge information where pixel values change in the image; the output pixel value of the bilateral filter depends on a weighted combination of neighborhood pixel values and is calculated as shown in formula (13);
g(i, j) = Σ_(k,l) f(k, l) · w(i, j, k, l) / Σ_(k,l) w(i, j, k, l)   (13)
where (k, l) denotes a pixel in the neighborhood of pixel (i, j), and w(i, j, k, l) is the weight coefficient, whose value is the product of the spatial-domain kernel and the range kernel; the calculation is shown in formulas (14)–(16):
d(i, j, k, l) = exp(−((i − k)² + (j − l)²) / (2σ_d²))   (14)
r(i, j, k, l) = exp(−(f(i, j) − f(k, l))² / (2σ_r²))   (15)
w(i, j, k, l) = d(i, j, k, l) · r(i, j, k, l)   (16)
the bilateral filter consists of two functions, one determining the filter coefficients from the geometric spatial distance and the other from the pixel-value difference; the median-filtered image is processed by the bilateral filter; through this processing, the background is further blurred, the high-frequency noise points in the image are almost eliminated, and the edges of the image are effectively preserved;
s3) fault detection:
this step is fault detection: the preprocessed image is judged to determine whether the photographed fastener contains fault information; it comprises the following sequence of operations: (1) performing adaptive binarization segmentation on the preprocessed original image to remove background areas other than the high-speed rail fastener; (2) performing template matching around the segmented image to locate the high-speed rail fastener; (3) returning to the original gray-scale image for similarity calculation to judge whether a defect exists:
S31) adaptive binarized image segmentation: if the light source in the scene is stable and the target in the image is single, the standard OTSU method can be used for image segmentation; in an actual scene, however, these preconditions cannot be fully met, and interference factors such as different reflection angles across the surface of the target fastener and shaking or movement of the light source easily cause the pixels of different regions of the same target to be divided into different classes even under uniform illumination, so that the target cannot be segmented completely; therefore, the OTSU (Otsu) method is improved in combination with a morphological method during binary segmentation, effectively improving the binary segmentation effect of OTSU;
the optimal threshold T of the current image is calculated by the Otsu method, and a bias value b is then added to and subtracted from the threshold T to generate a larger threshold T_high and a smaller threshold T_low, as shown in formulas (17) and (18):
T_high = T + b   (17)
T_low = T − b   (18)
the input image is binarized with T_high and T_low respectively, generating two binary segmentation maps f_high and f_low; a morphological dilation operation is applied to f_high, which is then ANDed with the image f_low to obtain the final binary segmentation image; in the low-threshold segmentation map the fastener is segmented completely but is difficult to distinguish from the background, while in the high-threshold segmentation map the fastener is clearly separated from the background but, under uneven illumination, larger areas within the fastener region are segmented as background; after the dilation operation and combination, a complete segmentation of the fastener that is clearly distinguishable from the background is achieved;
the essence of this binarization segmentation method is to divide the pixel values in the image into three classes L1, L2 and L3, defined as in formula (19):
L1 = {f(x, y) < T_low},  L2 = {T_low ≤ f(x, y) < T_high},  L3 = {f(x, y) ≥ T_high}   (19)
during binarization, the pixels belonging to L1 are set to 0 and the pixels belonging to L3 are set to 255; the pixels belonging to L2 are judged in combination with spatial information, and those connected to L3 pixels are set to 255 and divided into the foreground; this algorithm achieves a better segmentation effect on single-target images;
S32) template matching: template matching adopts a sliding-window method; a window of the same size as the template image is slid over the image to be matched, the content inside the window is compared with the template, the similarity between each window and the template image is calculated, and the position of the sub-image most similar to the template content on the image to be matched is finally found by global comparison; the function for calculating the similarity of two images is called the distance function and is the main factor affecting the performance of the template matching algorithm; the algorithm adopts the squared difference as the main matching criterion, calculated as shown in formula (20):
R(x, y) = Σ_(x′,y′) (T(x′, y′) − I(x + x′, y + y′))²   (20)
where R(x, y) is the similarity of the template window whose origin is placed at point (x, y) of the image to be matched, (x′, y′) are the coordinates within the template window, T(x′, y′) is the pixel value at point (x′, y′) of the template image, and I(x + x′, y + y′) is the pixel value at point (x + x′, y + y′) of the image to be matched; the smaller the value of R(x, y), the higher the similarity, and R(x, y) = 0 indicates the best match; similarity calculation in normalized squared-difference form is also supported, as shown in formula (21):
R(x, y) = Σ_(x′,y′) (T(x′, y′) − I(x + x′, y + y′))² / √(Σ_(x′,y′) T(x′, y′)² · Σ_(x′,y′) I(x + x′, y + y′)²)   (21)
the calculation formula using the correlation coefficient is shown in formula (22):
R(x, y) = Σ_(x′,y′) T′(x′, y′) · I′(x + x′, y + y′) / √(Σ_(x′,y′) T′(x′, y′)² · Σ_(x′,y′) I′(x + x′, y + y′)²),  where T′ and I′ denote the template and the image patch with their means subtracted   (22)
S33) calculating the degree of difference between the region where the fastener is located and the template, and judging; after the binary segmentation image is obtained, template matching is performed on it, so that the specific position of the high-speed rail fastener can be located; after the specific position of the fastener is obtained, the region where the fastener is located is computed on the gray-scale image, and finally the degree of difference between this region and the template is calculated on the gray-scale image using the normalized variance; if the difference is greater than a fixed threshold, the fastener is judged to be defective and proceeds to the next fault-classification step, otherwise it is a normal fastener;
S4) fastener defect identification: after the above steps are completed, the detected defective fasteners are classified and judged; considering the fixed relative position between the fastener inspection vehicle and the high-speed railway fastener, a neural network model including but not limited to the YOLOv5s lightweight neural network model is adopted; fault classification is performed on the defective-fastener image processed in step S3) through the YOLOv5 model, and in this step the specific position and fault type of the fault in the fastener can be detected;
thus, the online defect detection of the fastener is completed; as the fastener inspection vehicle continues to move, photographing and detection are triggered repeatedly.
2. The machine vision-based railway fastener defect online detection and identification method according to claim 1, wherein the method comprises the following steps: in step S12-2), the image after gaussian denoising is subjected to image gradient and direction calculation by: the gradient of the filtered image is calculated by adopting a Sobel operator, wherein the Sobel operator is a discrete differential operator for edge detection and comprises two groups of different convolution factors, and as shown in a formula (6), the input image can be respectively subjected to horizontal and longitudinal plane convolution:
S_x = [−1 0 1; −2 0 2; −1 0 1],  S_y = [−1 −2 −1; 0 0 0; 1 2 1]   (6)
the process of calculating the gradient by the Sobel operator: the convolution factor shown in the formula (6) is adopted to carry out convolution operation on the image, and the calculation of the transverse gradient and the longitudinal gradient is respectively shown in the formula (7) and the formula (8):
G_x(x, y) = S_x ∗ f(x, y)   (7)
G_y(x, y) = S_y ∗ f(x, y)   (8)
where ∗ denotes the image convolution operation and f(x, y) denotes the pixel value at coordinates (x, y) in the input two-dimensional image; from formulas (7) and (8), the gradient magnitude G and the gradient direction θ of the image are obtained, as shown in formulas (9) and (10):
G = √(G_x² + G_y²)   (9)
θ = arctan(G_y / G_x)   (10).
3. the machine vision-based railway fastener defect online detection and identification method according to claim 1, wherein the method comprises the following steps: in step S4), the YOLOv5 model consists of a Backbone network, a Neck network and a Head network; the Backbone network uses CSPDarknet53, the Mish activation function and the DropBlock method; the input 3-channel image is downsampled by convolution operations, features are then extracted by maximum pooling operations of sizes 1×1, 5×5, 9×9 and 13×13, and finally the feature maps are combined by Concat and fed into the Neck network; to obtain higher image fusion features, a Neck network is generally inserted between the Backbone network and the output network, and the SPP module and the FPN+PAN method are adopted in the Neck network; the SPP module performs pooling with kernels of different sizes and then combines the feature result maps of different sizes, thereby clearly separating the most important context features; the FPN+PAN scheme transfers and fuses high-level feature information from top to bottom by up-sampling to obtain a feature map capable of being predicted, on the basis of which fault classification is performed on the defective-fastener image processed in step S3) through the YOLOv5 model, thereby detecting the specific position and fault type of the fault in the fastener.
CN202310029435.8A 2023-01-09 2023-01-09 Railway fastener defect online detection and identification method based on machine vision Pending CN116757990A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310029435.8A CN116757990A (en) 2023-01-09 2023-01-09 Railway fastener defect online detection and identification method based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310029435.8A CN116757990A (en) 2023-01-09 2023-01-09 Railway fastener defect online detection and identification method based on machine vision

Publications (1)

Publication Number Publication Date
CN116757990A true CN116757990A (en) 2023-09-15

Family

ID=87959614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310029435.8A Pending CN116757990A (en) 2023-01-09 2023-01-09 Railway fastener defect online detection and identification method based on machine vision

Country Status (1)

Country Link
CN (1) CN116757990A (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116993731B (en) * 2023-09-27 2023-12-19 山东济矿鲁能煤电股份有限公司阳城煤矿 Shield tunneling machine tool bit defect detection method based on image
CN116993731A (en) * 2023-09-27 2023-11-03 山东济矿鲁能煤电股份有限公司阳城煤矿 Shield tunneling machine tool bit defect detection method based on image
CN117197700A (en) * 2023-11-07 2023-12-08 成都中轨轨道设备有限公司 Intelligent unmanned inspection contact net defect identification system
CN117197700B (en) * 2023-11-07 2024-01-26 成都中轨轨道设备有限公司 Intelligent unmanned inspection contact net defect identification system
CN117437279B (en) * 2023-12-12 2024-03-22 山东艺达环保科技有限公司 Packing box surface flatness detection method and system
CN117437279A (en) * 2023-12-12 2024-01-23 山东艺达环保科技有限公司 Packing box surface flatness detection method and system
CN117409003A (en) * 2023-12-14 2024-01-16 四川宏亿复合材料工程技术有限公司 Detection method for backing plate of rail damping fastener
CN117409003B (en) * 2023-12-14 2024-02-20 四川宏亿复合材料工程技术有限公司 Detection method for backing plate of rail damping fastener
CN117541585B (en) * 2024-01-09 2024-04-16 浙江双元科技股份有限公司 Method and device for detecting exposed foil defect of lithium battery pole piece
CN117541585A (en) * 2024-01-09 2024-02-09 浙江双元科技股份有限公司 Method and device for detecting exposed foil defect of lithium battery pole piece
CN117576416A (en) * 2024-01-15 2024-02-20 北京阿丘机器人科技有限公司 Workpiece edge area detection method, device and storage medium
CN117576416B (en) * 2024-01-15 2024-05-14 北京阿丘机器人科技有限公司 Workpiece edge area detection method, device and storage medium
CN117671545A (en) * 2024-01-31 2024-03-08 武汉华测卫星技术有限公司 Unmanned aerial vehicle-based reservoir inspection method and system
CN117671545B (en) * 2024-01-31 2024-04-19 武汉华测卫星技术有限公司 Unmanned aerial vehicle-based reservoir inspection method and system
CN117745723A (en) * 2024-02-20 2024-03-22 常熟理工学院 Chip wire bonding quality detection method, system and storage medium
CN117745723B (en) * 2024-02-20 2024-05-10 常熟理工学院 Chip wire bonding quality detection method, system and storage medium
CN117893553A (en) * 2024-03-15 2024-04-16 宝鸡鼎钛金属有限责任公司 Image processing titanium metal segmentation method and system
CN117893553B (en) * 2024-03-15 2024-05-31 宝鸡鼎钛金属有限责任公司 Image processing titanium metal segmentation method and system
CN118015000A (en) * 2024-04-09 2024-05-10 陕西思诺特精密科技有限公司 Surface defect detection method for guide rail based on image processing

Similar Documents

Publication Publication Date Title
CN116757990A (en) Railway fastener defect online detection and identification method based on machine vision
CN111145161B (en) Pavement crack digital image processing and identifying method
CN111310558B (en) Intelligent pavement disease extraction method based on deep learning and image processing method
CN112434695B (en) Upper pull rod fault detection method based on deep learning
CN106780486B (en) Steel plate surface defect image extraction method
Liang et al. Defect detection of rail surface with deep convolutional neural networks
CN110827235B (en) Steel plate surface defect detection method
CN112734761B (en) Industrial product image boundary contour extraction method
CN103295225B (en) Train bogie edge detection method under the conditions of low-light
CN109489724A (en) A kind of tunnel safe train operation environment comprehensive detection device and detection method
CN110288571B (en) High-speed rail contact net insulator abnormity detection method based on image processing
CN108647593A (en) Unmanned plane road surface breakage classification and Detection method based on image procossing and SVM
CN103077387A (en) Method for automatically detecting carriage of freight train in video
CN115035000B (en) Road dust image identification method and system
CN109558877B (en) KCF-based offshore target tracking algorithm
CN110705553A (en) Scratch detection method suitable for vehicle distant view image
CN112053385B (en) Remote sensing video shielding target tracking method based on deep reinforcement learning
CN115984806B (en) Dynamic detection system for road marking damage
CN109993741B (en) Steel rail welding seam contour automatic positioning method based on K-means clustering
CN111178111A (en) Two-dimensional code detection method, electronic device, storage medium and system
CN116524269A (en) Visual recognition detection system
Saha et al. Analysis of Railroad Track Crack Detection using Computer Vision
CN112508908B (en) Method for detecting disconnection fault of sanding pipe joint of motor train unit based on image processing
CN115541591A (en) Method and system for detecting abrasion edge of carbon pantograph slider of train
CN113592853A (en) Method for detecting surface defects of protofilament fibers based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination