CN118134915A - Intelligent detection method for contact wire clip slotter

Intelligent detection method for contact wire clip slotter

Info

Publication number
CN118134915A
CN118134915A (application CN202410545567.0A)
Authority
CN
China
Prior art keywords
image
representing
sparse
wire clamp
matrix
Prior art date
Legal status: Granted
Application number
CN202410545567.0A
Other languages
Chinese (zh)
Other versions
CN118134915B (en)
Inventor
苏茂才
林仁辉
廖峪
李林宽
张威
Current Assignee
Nobicam Artificial Intelligence Technology Chengdu Co ltd
Original Assignee
Nobicam Artificial Intelligence Technology Chengdu Co ltd
Priority date
Filing date
Publication date
Application filed by Nobicam Artificial Intelligence Technology Chengdu Co ltd
Priority to CN202410545567.0A (granted as CN118134915B)
Publication of CN118134915A
Application granted
Publication of CN118134915B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of overhead contact systems (catenary) and specifically relates to a method for intelligently detecting the de-grooving of contact network wire clamps. The method comprises the following steps. Step 1: acquire an original image of the overhead contact system with a catenary suspension-state monitoring device mounted on a catenary maintenance vehicle, and preprocess the original image to remove noise and obtain a preprocessed image. Step 2: detect the contact network wire clamps in the preprocessed image to obtain rectangular boxes of the wire clamps; then apply a preset target detection model to the image regions corresponding to these rectangular boxes to detect wire clamp de-grooving and obtain the detection results. Step 3: visualize the detection results in the original image. The invention combines deep learning, image processing and feature extraction to achieve efficient and accurate detection of contact network wire clamp de-grooving, and offers high accuracy, high efficiency and strong adaptability.

Description

Intelligent detection method for contact wire clip slotter
Technical Field
The invention belongs to the technical field of overhead contact systems (catenary) and specifically relates to a method for intelligently detecting the de-grooving of contact network wire clamps.
Background
The railway overhead contact system is an important component of the railway electrification system and carries the key task of supplying power to electric locomotives. The contact network wire clamp is the key component that keeps the contact wire attached to its supporting cross member, and its condition directly affects the stable operation of the catenary system and the safe passage of trains. However, during operation, factors such as the external environment and equipment ageing can cause the wire clamp to come out of its groove (de-grooving) and, in severe cases, lead to catenary failures that disrupt normal railway transport and even threaten the safety of trains and passengers.
In the past, the inspection of catenary wire clamps has generally relied on manual inspection or periodic maintenance, which has the following problems. Traditional inspection depends mainly on manual checks that require personnel to work at height, which carries safety risks, is inefficient, and cannot keep up with the growing demands of railway transport. Manual inspection consumes large amounts of manpower and material resources, has a long cycle, covers only limited areas, and cannot achieve full coverage or timely discovery of problems. Because manual inspection relies on the experience and subjective judgement of inspectors, it is inherently uncertain and limited, and subtle wire clamp defects are easily overlooked, creating potential safety hazards. Finally, intelligent monitoring cannot be achieved: traditional wire clamp inspection methods provide no intelligent monitoring or data analysis, struggle to cope with complex and changing clamp states, and limit the level of intelligence of the railway catenary system.
To address the shortcomings of traditional wire clamp inspection, intelligent detection methods based on image processing, machine learning and deep learning have gradually been proposed and applied in recent years. These methods acquire catenary images with camera equipment and analyse the image data algorithmically to automatically detect and diagnose the state of the wire clamps, and they have achieved some success. However, the prior art still has several disadvantages. Existing intelligent detection methods suffer from false alarms and missed detections when processing catenary images and identifying clamp states, which affects the reliability of the results. Some existing methods adapt poorly to complex and changing catenary scenes, such as changes in illumination or occlusion, and are easily disturbed by the external environment, making the detection results unstable. Some methods are too slow to meet real-time monitoring requirements, hampering timely assessment and handling of the wire clamp state. Some prior-art algorithms have complex image-processing pipelines that require large amounts of computing resources and time and are not efficient enough.
Disclosure of Invention
The main purpose of the invention is to provide an intelligent detection method for contact network wire clamp de-grooving that combines deep learning, image processing and feature extraction to achieve efficient and accurate detection of wire clamp de-grooving. The method offers high accuracy, high efficiency and strong adaptability, can effectively improve the safety and reliability of the railway overhead contact system, and positively supports the normal operation of railway transport.
To solve the above technical problems, the invention provides an intelligent detection method for the contact network wire clamp de-grooving phenomenon, comprising the following steps:
Step 1: acquiring an original image of the overhead line system through an overhead line system suspension state monitoring device arranged on an overhead line system operation vehicle, and preprocessing the original image to remove noise and obtain a preprocessed image;
Step 2: aiming at the preprocessed image, detecting the contact net wire clamp to obtain a rectangular frame of the contact net wire clamp; then, detecting wire clip removal grooves of images corresponding to the rectangular frames of the contact wire clips by using a preset target detection model to obtain detection results;
Step 3: and visualizing the detection result in the original image.
Further, the step 2: aiming at the preprocessing image, the method for detecting the contact network wire clamp to obtain the rectangular frame of the contact network wire clamp comprises the following steps:
Step 2.1: through sparse coding dictionary learning, the preprocessed image is expressed as a linear combination of sparse coefficients;
Step 2.2: reconstructing the dictionary by using the sparse coefficient matrix to obtain a sparse reconstructed image block;
Step 2.3: performing block matching on the sparse reconstructed image block and the preprocessed image to find out an image sub-block related to the wire clamp characteristic; extracting wire clamp features from the matched image sub-blocks;
Step 2.4: performing sparse coding on the extracted wire clamp characteristics to obtain sparse coefficients of the wire clamp characteristics; reconstructing the wire clamp characteristics by using the sparse coefficient to obtain a reconstructed wire clamp characteristic vector;
step 2.5: performing wire clamp detection on the reconstructed wire clamp feature vector by using a classifier; and according to the detection result, positioning the position of the wire clamp in the preprocessing image to obtain the rectangular frame of the contact wire clamp.
Further, in step 2.1, the preprocessed image is represented as a linear combination of sparse coefficients through sparse coding dictionary learning by solving the following minimization problem:

\min_{D, X, E} \; \| Y - D X - E \|_F^2 + \lambda_1 \| X \|_1 + \lambda_2 \| E \|_1 + \lambda_3 \| D \|_*

where Y denotes the image block matrix of the preprocessed image, D the dictionary matrix, X the sparse coefficient matrix, and E the sparse noise matrix; \lambda_1, \lambda_2 and \lambda_3 are the first, second and third sparsity weights, each a set value; \| \cdot \|_F denotes the Frobenius norm, \| \cdot \|_1 the L1 norm, and \| \cdot \|_* the nuclear norm, i.e. the sum of a matrix's singular values. Solving this minimization problem yields the sparse coefficient matrix X, which serves as the linear combination of sparse coefficients.
Further, in step 2.2, the dictionary and the sparse coefficient matrix are used to reconstruct the image blocks, giving the sparsely reconstructed image blocks:

\hat{Y} = D X

In step 2.3, the sparsely reconstructed image blocks are block-matched against the preprocessed image by solving the following minimization problem, which finds the image sub-blocks related to the wire clamp features:

\min_{W} \; \sum_{i=1}^{m} \sum_{j=1}^{m} \big\| W_{ij} \, ( P_{ij} - Q_{ij} ) \big\|_2^2 + \mu_1 \| W \|_F^2 + \mu_2 \| Q \|_2^2

where P denotes an image sub-block of the preprocessed image, an m \times m matrix with m the side length of the block; Q denotes the corresponding sub-block of the sparsely reconstructed image block, with the same dimensions as P; W denotes the matching weight matrix, also of size m \times m; \| \cdot \|_2 denotes the Euclidean norm; \mu_1 and \mu_2 denote the first and second regularization parameters; and i and j are subscript indices taking integer values from 1 to m.
Further, the method for extracting the wire clamp features from the matched image sub-blocks in step 2.3 comprises: converting the matched image sub-blocks to greyscale; at each pixel position of an image sub-block, computing the differences between the centre pixel and its P neighbourhood pixels to obtain a sequence of P difference values; converting the difference sequence into binary digits by means of a step function to obtain a binary code; converting the resulting binary code string into a decimal number, which is the LBP feature value of that pixel; and applying this calculation to every pixel of the image sub-block to obtain the LBP feature map, which is used as the wire clamp feature f.
Further, the following minimization problem is solved to obtain the sparse coefficient vector of the wire clamp feature:

\min_{\alpha, r} \; \| f - D_f \alpha - r \|_2^2 + \tau_1 \| \alpha \|_1 + \tau_2 \| r \|_1

where f denotes the wire clamp feature, a column vector of size d \times 1 with d the feature dimension; D_f denotes the dictionary matrix of the wire clamp features, a matrix of size d \times k with k the number of basis vectors in the dictionary; \alpha denotes the sparse coefficient vector, a column vector of size k \times 1; r denotes the residual vector, of size d \times 1, used to capture errors in the sparse coding process; \| \cdot \|_1 denotes the L1 norm; and \tau_1 and \tau_2 are sparsity penalty weights. Based on the obtained sparse coefficient vector, the sparse coefficient of the wire clamp feature is obtained by solving the following minimization problem:

\min_{\alpha} \; \| f - D_f \alpha \|_2^2 + \gamma \| \alpha \|_1

where D_f is the dictionary matrix of the wire clamp features and \gamma is the sparsity penalty coefficient. The wire clamp feature is then reconstructed from the sparse coefficient, giving the reconstructed wire clamp feature vector:

\hat{f} = D_f \alpha
Further, in step 2.5, wire clamp detection is performed on the reconstructed wire clamp feature vectors with a classifier based on a multi-kernel support vector machine. The multi-kernel support vector machine is trained as follows: given training samples \{(x_i, y_i)\}_{i=1}^{n}, where x_i is a feature vector and y_i \in \{+1, -1\} is the corresponding label, the objective function of the multi-kernel support vector machine is defined as:

\min_{w, b, \xi} \; \frac{1}{2} \sum_{m=1}^{d} \beta_m \| w_m \|^2 + C \sum_{i=1}^{n} \xi_i

The constraint conditions are:

y_i \Big( \sum_{m=1}^{d} \beta_m \, w_m^{\top} x_{i,m} + b \Big) \ge 1 - \xi_i, \qquad \xi_i \ge 0, \qquad i = 1, \dots, n

where d is the dimension of the feature vector, w_m is the weight vector for the elements of the m-th dimension, \beta_m is the weight of the elements of the m-th dimension, C is the regularization parameter, \xi_i is a slack variable, n is the number of training samples, and b is the bias of the multi-kernel support vector machine; the label y_i indicates whether the sample is a wire clamp ("yes") or not ("no").
Further, the target detection model in step 2 is obtained by training as follows: a convolutional neural network in deep learning is used as the target detection model, and the image corresponding to the rectangular box of the contact network wire clamp is predicted to obtain the probability distribution of wire clamp de-grooving; to account for multi-scale, multi-level feature fusion, a deep network structure with multiple convolution layers and pooling layers is used. The target detection model is expressed by the following formula:

\hat{y} = P(y = 1 \mid x) = \sigma\big( W_2 \, \mathrm{ReLU}( W_1 x + b_1 ) + b_2 \big)

where x denotes the input image, y denotes the wire clamp de-grooving label, W_1, W_2 and b_1, b_2 denote the weight matrices and biases of the model's hidden and output layers, \sigma denotes the sigmoid function, and \mathrm{ReLU} denotes the rectified linear unit.
Further, the target detection model uses a cross-entropy loss function to measure the discrepancy between the prediction and the true label, and introduces a regularization term to prevent overfitting. The loss function is expressed as:

L = -\frac{1}{N} \sum_{i=1}^{N} \Big[ y_i \log \hat{y}_i + (1 - y_i) \log \big( 1 - \hat{y}_i \big) \Big] + \lambda \sum_{l} \| W_l \|_F^2

where N denotes the number of training samples, x_i the input image of the i-th sample, y_i its true label, \hat{y}_i = f(x_i) the model's predicted probability, \lambda the regularization parameter, and l the layer index in the network.
Further, a stochastic gradient descent optimization algorithm is used to minimize the loss function of the target detection model and update the model parameters:

W' = W - \eta \frac{\partial L}{\partial W}, \qquad b' = b - \eta \frac{\partial L}{\partial b}

where \eta is the learning rate; \frac{\partial L}{\partial W} and \frac{\partial L}{\partial b} are the gradients of the loss function with respect to the weight matrix and the bias; W and b are the weight matrix and bias before the update; and W' and b' are the updated weight matrix and bias.
The intelligent detection method for contact network wire clamp de-grooving has the following beneficial effects. First, the method acquires an original image of the overhead contact system with a catenary suspension-state monitoring device mounted on a catenary maintenance vehicle and preprocesses it to remove noise and obtain a clear preprocessed image. Compared with traditional manual inspection or simple image processing, this yields more accurate catenary image information and provides a reliable data basis for the subsequent de-grooving detection. Second, for wire clamp detection the invention adopts sparse coding dictionary learning, representing the preprocessed image as a linear combination of sparse coefficients; this effectively extracts the feature information in the image and, combined with sparse reconstruction, enables accurate detection of wire clamp de-grooving. Compared with traditional image processing, sparse coding dictionary learning adapts better to complex backgrounds and illumination changes and improves detection accuracy and robustness. The invention further introduces multi-scale, multi-level feature fusion and performs de-grooving detection with a deep learning model; the model automatically learns and extracts image features and, combined with multi-level feature representations, achieves accurate de-grooving detection in complex scenes. Compared with traditional machine learning, the deep learning model has stronger representational capacity and adaptability and better meets the detection requirements of different scenes. In addition, a cross-entropy loss function and a regularization term are introduced to train and optimize the target detection model, improving its generalization ability and robustness: the cross-entropy loss effectively measures the difference between the model's predictions and the true labels, while the regularization term controls model complexity, prevents overfitting, and improves generalization and stability. Finally, the invention uses a stochastic gradient descent optimization algorithm to minimize the loss function of the target detection model and update the model parameters, which improves training efficiency and convergence speed; by iteratively updating the parameters the loss gradually decreases, yielding a target detection model with high accuracy and strong generalization ability.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a method for intelligently detecting contact network wire clamp de-grooving according to an embodiment of the present invention.
Detailed Description
The method of the present invention will be described in further detail with reference to the accompanying drawings.
Example 1: referring to fig. 1, an intelligent detection method for the contact network wire clamp de-grooving phenomenon comprises:
Step 1: acquiring an original image of the overhead line system through an overhead line system suspension state monitoring device arranged on an overhead line system operation vehicle, and preprocessing the original image to remove noise and obtain a preprocessed image;
Step 2: aiming at the preprocessed image, detecting the contact net wire clamp to obtain a rectangular frame of the contact net wire clamp; then, detecting wire clip removal grooves of images corresponding to the rectangular frames of the contact wire clips by using a preset target detection model to obtain detection results;
Step 3: and visualizing the detection result in the original image.
Specifically, in step 1, a multipath wavelet transform is applied to the original catenary image I(x, y), decomposing it into wavelet coefficients at different scales j and positions k. The multipath wavelet transform is:

W(j, k) = \sum_{x} \sum_{y} I(x, y) \, \psi_{j,k}(x, y)

where W(j, k) is the wavelet coefficient at scale j and position k, I(x, y) is the pixel value of the original image, and \psi_{j,k} is the multipath wavelet function. Soft-threshold denoising is applied to the wavelet coefficients W(j, k):

\hat{W}_s(j, k) = \mathrm{sign}\big( W(j, k) \big) \cdot \max\big( |W(j, k)| - T_s, \, 0 \big)

where \hat{W}_s(j, k) is the denoised wavelet coefficient, T_s is the soft threshold, and \mathrm{sign}(\cdot) is the sign function. Hard-threshold denoising of the wavelet coefficients W(j, k) is:

\hat{W}_h(j, k) = \begin{cases} W(j, k), & |W(j, k)| > T_h \\ 0, & \text{otherwise} \end{cases}

where \hat{W}_h(j, k) is the denoised wavelet coefficient and T_h is the hard threshold. The denoised wavelet coefficients \hat{W}(j, k) are then passed through the inverse multipath wavelet transform to reconstruct the denoised image:

\hat{I}(x, y) = \sum_{j} \sum_{k} \hat{W}(j, k) \, \tilde{\psi}_{j,k}(x, y)

where \hat{I}(x, y) is the reconstructed denoised image and \tilde{\psi}_{j,k} is the inverse multipath wavelet function. The soft threshold T_s and the hard threshold T_h are optimized by cross-validation or another optimization algorithm to achieve the best denoising effect. Through these steps, multipath-wavelet-based denoising preprocessing is realized, noise in the original image is effectively removed, and the accuracy and stability of the subsequent detection algorithm are improved.
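For illustration, a minimal denoising sketch is given below. It is only a stand-in under stated assumptions: a standard 2-D discrete wavelet transform from PyWavelets replaces the multipath wavelet transform described above, and the wavelet name, decomposition level and threshold value are assumed rather than taken from the patent.

```python
# Illustrative sketch only: a standard 2-D DWT (PyWavelets) stands in for the
# "multipath wavelet transform"; wavelet, level and threshold are assumptions.
import numpy as np
import pywt

def wavelet_denoise(image: np.ndarray, wavelet: str = "db4", level: int = 2,
                    threshold: float = 10.0, mode: str = "soft") -> np.ndarray:
    """Decompose, threshold the detail coefficients, and reconstruct."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    new_coeffs = [coeffs[0]]  # keep approximation coefficients untouched
    for detail_level in coeffs[1:]:
        # Soft (or hard) thresholding of each detail sub-band.
        new_coeffs.append(tuple(pywt.threshold(d, threshold, mode=mode)
                                for d in detail_level))
    denoised = pywt.waverec2(new_coeffs, wavelet)
    return denoised[: image.shape[0], : image.shape[1]]  # crop any padding
```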
The catenary suspension-state monitoring device (4C) consists of several sets of high-speed, high-definition industrial cameras and a background operation control system mounted on a catenary maintenance vehicle. It performs continuous, fixed-point automatic snapshot imaging of the geometric parameters and key parts of the contact network, acquires catenary image data and static geometric parameters, and stores and manages them on a "one pole, one file" basis. On the basis of automatic identification and manual analysis, defects are compared and screened to generate classified statistical reports, providing comprehensive and reliable technical support for catenary inspection and maintenance. The device captures roughly 44 images per support pole, and double-cantilever poles require even more images than single-cantilever poles, which undoubtedly creates a heavy workload for manual analysis; visual fatigue easily sets in during manual analysis, and defects may not be analysed and judged accurately. The analysis workflow can be optimized, for example by modular analysis: the equipment images within the imaging range are divided by component area into the flat cantilever rod insulator, cantilever support, messenger cable base, steady arm (positioner), inclined cantilever rod insulator and so on, and then analysed manually, which makes it easier to analyse equipment defects accurately and quickly. In addition, defects found in earlier manual analyses can be imported into the analysis software's problem library through improved software functions, and intelligent analysis can be realized through deep learning, reducing the manual analysis workload and further freeing up human resources.
In step 3, the position information of the targets in the original image is first determined from the detection results obtained in step 2. This position information is typically given as rectangular boxes describing the position and size of each target in the image. The detected target positions are then marked or drawn on the original image using image processing or drawing tools; common annotation methods include drawing rectangular boxes around the targets and adding text descriptions, which intuitively shows the position and shape of the detected targets in the original image. The annotated original image is displayed together with the detected target positions, usually through image display software or an image processing library, and the visualized image can be saved or shown in real time in the monitoring system's user interface for operators to observe and analyse the detection results.
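As a concrete illustration of this step, the sketch below draws the detection results on the original image with OpenCV. It assumes the detections are available as (x, y, w, h, label, score) tuples in original-image coordinates; the colour scheme and text layout are illustrative choices, not requirements of the method.

```python
# Minimal visualization sketch; detection tuple format and colours are assumptions.
import cv2

def draw_detections(image, detections):
    vis = image.copy()
    for (x, y, w, h, label, score) in detections:
        colour = (0, 0, 255) if label == "de-grooved" else (0, 255, 0)
        cv2.rectangle(vis, (x, y), (x + w, y + h), colour, 2)          # bounding box
        cv2.putText(vis, f"{label} {score:.2f}", (x, max(y - 5, 0)),   # text label
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, colour, 2)
    return vis
```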
The benefits of visualizing the detection result in the original image are mainly represented by the following aspects: the visualized result is directly displayed on the original image, so that an operator can intuitively see the position and the shape of the detected target in the image, and the rapid understanding and analysis are facilitated. The visual result can be fed back to operators in real time, so that the operators can take corresponding measures in time, and the real-time performance and response speed of the system are improved. The visual result can be used as an intelligent auxiliary tool to help operators to judge the severity of the phenomenon of the contact net wire clamp ungrooving more accurately, and guide the repair work.
Example 2: in step 2, detecting the contact network wire clamps in the preprocessed image to obtain their rectangular boxes comprises the following steps:
Step 2.1: through sparse coding dictionary learning, the preprocessed image is expressed as a linear combination of sparse coefficients;
The core assumption of sparse coding dictionary learning is that a signal can be represented by a small number of basic elements (atoms in the dictionary). This means that although the signals may be high-dimensional in the original space, their representation can be achieved by a very small number of basic elements. This assumption holds in many natural signals, such as natural image and audio signals, which typically contain only a small number of important structures or features. The goal of sparse coding dictionary learning is to learn a dictionary (or set of atoms) so that the input signal can be represented by a linear combination of atoms in the dictionary. This dictionary is typically a matrix in which each column represents an atom and each row represents a signal sample. For a given signal, a compact representation of the signal can be obtained by finding the most appropriate combination of atoms in the dictionary. In dictionary learning, the goal of sparse coding is to find an optimal sparse coefficient vector for a given input signal so that the input signal can be represented by a linear combination of these coefficients. This means that most coefficients are zero or close to zero, with few non-zero coefficients contributing significant information. This sparse nature makes the representation of the signal more compact and efficient.
Step 2.2: reconstructing the dictionary by using the sparse coefficient matrix to obtain a sparse reconstructed image block;
In sparse coding dictionary learning, for a given input signal, a sparse coefficient matrix obtained after sparse coding is an important intermediate result. The matrix contains sparse coefficients corresponding to each input signal, describing sparse representation of the signals in the dictionary. The aim of the sparse coefficient reconstruction is to reconstruct the dictionary by using the sparse coefficient matrix, so as to obtain a sparse reconstructed image block. Specifically, for each input signal, the sparse coefficients in the sparse coefficient matrix are linearly combined with the atoms in the dictionary to reconstruct the original signal. For a given input signal, each column in its sparse coefficient matrix represents a sparse representation of the signal in the dictionary. Thus, by linearly combining each sparse coefficient with the corresponding dictionary atom, a corresponding reconstructed signal may be obtained. In step 2.2, after the dictionary is reconstructed by using the sparse coefficient matrix, a sparsely reconstructed image block can be obtained. These image blocks may be local parts of the reconstructed signal or the result of the reconstruction of the entire signal, depending on the method of signal analysis and processing selected.
Step 2.3: performing block matching on the sparse reconstructed image block and the preprocessed image to find out an image sub-block related to the wire clamp characteristic; extracting wire clamp features from the matched image sub-blocks;
In the sparsely reconstructed image blocks, a variety of image content and features may be present. Through image block matching, the image sub-blocks related to the wire clamp features can be found; the matching process generally uses a local similarity measure such as the mean squared error or the correlation coefficient. Once the image sub-blocks associated with the clamp features are found, the next task is to extract the clamp features from them. These features may describe the shape, texture or colour of the clamp; common feature extraction methods include the grey-level co-occurrence matrix (GLCM), the Histogram of Oriented Gradients (HOG) and Local Binary Patterns (LBP). Image block matching and feature extraction are widely used in computer vision and image processing for tasks such as target detection, image similarity matching and image feature extraction, achieving a deeper understanding of image content through local analysis. The novelty of the block matching and feature extraction in step 2.3 lies mainly in their tailoring to de-grooving detection: local similarity matching and feature extraction effectively extract the information related to the wire clamp features from the sparsely reconstructed image blocks, providing an effective basis for subsequent clamp detection and localization. The method makes full use of the local information and characteristics of the image and improves the accuracy and reliability of wire clamp detection.
Step 2.4: performing sparse coding on the extracted wire clamp characteristics to obtain sparse coefficients of the wire clamp characteristics; reconstructing the wire clamp characteristics by using the sparse coefficient to obtain a reconstructed wire clamp characteristic vector;
In this step, the extracted clip features are represented as linear combinations of sparse coefficients to obtain the sparse coefficients of the clip features. This means that the clip features can be represented with a smaller number of coefficients, mostly zero or close to zero, i.e. sparse. Sparse coding processes typically involve finding an optimal set of coefficients by minimizing reconstruction errors or maximizing sparsity so that the clip features can be represented by a linear combination of these coefficients. Common sparse coding methods include LASSO, OMP, OMP-2, and the like. Once the sparse coefficients of the wire clamp features are obtained, the next task is to reconstruct the wire clamp features using these sparse coefficients to obtain reconstructed wire clamp feature vectors. This process typically involves linear combination of atoms in the original clip feature by sparse coefficients to yield a reconstructed clip feature vector. Sparse coding and reconstruction processes of wire clamp features have wide application in the fields of image processing and pattern recognition. The method can be used for tasks such as feature extraction, signal compression, signal denoising and the like, and high-efficiency processing and analysis of signals and features are realized through sparse representation and reconstruction. The sparse coding and reconstruction process in the step 2.4 can fully utilize the sparse property of the wire clamp characteristics, and realize the efficient representation and reconstruction of the wire clamp characteristics. By selecting a proper sparse coding method and dictionary reconstruction algorithm, key information in the wire clamp characteristics can be extracted and represented in a compact mode, and effective characteristic representation is provided for subsequent wire clamp detection and classification. The method can improve the accuracy and reliability of wire clamp detection, and has higher innovation and practicability.
Step 2.5: performing wire clamp detection on the reconstructed wire clamp feature vector by using a classifier; and according to the detection result, positioning the position of the wire clamp in the preprocessing image to obtain the rectangular frame of the contact wire clamp.
In this step, the reconstructed wire clip feature vector is subjected to wire clip detection by using a classifier in machine learning. The classifier is used for judging whether the feature vector belongs to the wire clamp or not according to the input feature vector. Common classifiers include support vector machines, random forests, convolutional neural networks, and the like. The classifier needs to be trained before it can be applied. The training process typically involves training a classifier model using labeled and non-clipped samples, using feature vectors from feature extraction. The goal of the training is to enable the classifier to accurately distinguish between the feature vectors of the clamps and the non-clamps. Once the classifier is trained, the reconstructed wire clamp feature vector can be input into the classifier for wire clamp detection. The classifier gives out the detection result of the wire clamp according to the input feature vector, namely judges whether the feature vector represents the wire clamp or not. According to the wire clamp detection result given by the classifier, the position of the wire clamp in the preprocessed image can be further positioned. If the classifier judges that a certain feature vector represents a wire clip, the image area corresponding to the feature vector contains the wire clip. According to the information, the position of the wire clamp in the original image can be determined, and the rectangular frame of the wire clamp of the overhead line system can be obtained. The innovation in the step 2.5 is that the classifier in machine learning is applied to the wire clamp detection task, so that the automatic detection of the wire clamp is realized. Through the trained classifier, the characteristic vector of the wire clamp can be rapidly and accurately judged, so that the high-efficiency detection and positioning of the contact wire clamp are realized. The method not only improves the accuracy and reliability of wire clamp detection, but also reduces the burden of manual operation, and has higher innovation and practicability.
Example 3: in step 2.1, the preprocessed image is represented as a linear combination of sparse coefficients through sparse coding dictionary learning by solving the following minimization problem:

\min_{D, X, E} \; \| Y - D X - E \|_F^2 + \lambda_1 \| X \|_1 + \lambda_2 \| E \|_1 + \lambda_3 \| D \|_*

where Y denotes the image block matrix of the preprocessed image, D the dictionary matrix, X the sparse coefficient matrix, and E the sparse noise matrix; \lambda_1, \lambda_2 and \lambda_3 are the first, second and third sparsity weights, each a set value; \| \cdot \|_F denotes the Frobenius norm, \| \cdot \|_1 the L1 norm, and \| \cdot \|_* the nuclear norm, i.e. the sum of a matrix's singular values. Solving this minimization problem yields the sparse coefficient matrix X, which serves as the linear combination of sparse coefficients.

Specifically, sparse coding dictionary learning is an important signal processing technique widely used in image processing and pattern recognition. The formula above states the optimization objective of the dictionary-learning problem: by minimizing the reconstruction error and adding regularization terms, the preprocessed image is represented as a linear combination of sparse coefficients. The variables Y, D, X and E form the basic components of the problem. The objective comprises three parts: a reconstruction error term, sparsity constraints, and a low-rank constraint on the dictionary. The reconstruction error term \| Y - D X - E \|_F^2 measures the error of reconstructing the preprocessed image from the sparse coefficient matrix X and the dictionary matrix D. The sparsity constraint has two parts, the L1 norm of the sparse coefficients, \lambda_1 \| X \|_1, and the L1 norm of the sparse noise, \lambda_2 \| E \|_1; these make most entries of X and E zero, i.e. highly sparse. The low-rank constraint on the dictionary, \lambda_3 \| D \|_*, keeps the dictionary matrix D low-rank so that its basis vectors are strongly correlated and better capture the structural information of the image. Finally, by adjusting the regularization parameters \lambda_1, \lambda_2 and \lambda_3, the balance between the reconstruction error, the sparsity constraints and the low-rank constraint can be tuned, so that the optimization yields a suitable sparse coefficient matrix X and hence a sparse representation of the preprocessed image. In summary, the formula describes how sparse coding dictionary learning represents the preprocessed image as a linear combination of sparse coefficients; minimizing the reconstruction error while imposing sparsity and low-rank constraints yields a better sparse representation and dictionary, enabling effective representation and analysis of the preprocessed image.
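A simplified sketch of this step is given below, assuming scikit-learn is available. Its DictionaryLearning solves a reduced objective of the form ||Y - X D||^2 + alpha ||X||_1, i.e. it omits the explicit sparse-noise term E and the nuclear-norm penalty on D used above, so it is only a stand-in for the full formulation; the patch size, number of atoms and penalty value are assumptions.

```python
# Reduced stand-in for the dictionary-learning objective above; patch size,
# atom count and alpha are assumed values, and E / nuclear-norm terms are omitted.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import DictionaryLearning

def learn_sparse_codes(preprocessed: np.ndarray, patch_size=(8, 8),
                       n_atoms: int = 64, alpha: float = 1.0):
    patches = extract_patches_2d(preprocessed, patch_size, max_patches=2000,
                                 random_state=0)
    Y = patches.reshape(len(patches), -1).astype(float)
    Y -= Y.mean(axis=1, keepdims=True)            # remove per-patch DC component
    dl = DictionaryLearning(n_components=n_atoms, alpha=alpha, max_iter=200,
                            transform_algorithm="lasso_lars", random_state=0)
    X = dl.fit_transform(Y)                       # sparse coefficients, one row per patch
    D = dl.components_                            # learned dictionary atoms
    return D, X
```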
Example 4: in step 2.2, the dictionary and the sparse coefficient matrix are used to reconstruct the image blocks, giving the sparsely reconstructed image blocks:

\hat{Y} = D X

In step 2.3, the sparsely reconstructed image blocks are block-matched against the preprocessed image by solving the following minimization problem, which finds the image sub-blocks related to the wire clamp features:

\min_{W} \; \sum_{i=1}^{m} \sum_{j=1}^{m} \big\| W_{ij} \, ( P_{ij} - Q_{ij} ) \big\|_2^2 + \mu_1 \| W \|_F^2 + \mu_2 \| Q \|_2^2

where P denotes an image sub-block of the preprocessed image, an m \times m matrix of pixels with m the side length of the block; Q denotes the corresponding sub-block of the sparsely reconstructed image block, with the same dimensions as P; W denotes the matching weight matrix, also of size m \times m; \| \cdot \|_2 denotes the Euclidean norm; \mu_1 and \mu_2 denote the first and second regularization parameters; and i and j are subscript indices taking integer values from 1 to m.

Specifically, the minimization runs over all candidate image sub-blocks: sub-blocks are selected from the preprocessed image with a sliding window and matched against the corresponding sparsely reconstructed sub-blocks, and the match that minimizes the objective is kept. The first term is the weighted sum of squared matching errors, where W is the weighting matrix; it measures the difference between the preprocessed image sub-block P and the sparsely reconstructed sub-block Q under the Euclidean norm. The second term, \mu_1 \| W \|_F^2, is the first regularization term; it regularizes the weighting matrix with the Frobenius norm to prevent overfitting and keep the weights stable. The third term, \mu_2 \| Q \|_2^2, is the second regularization term; it penalizes the magnitude of the sparsely reconstructed sub-block, keeping its amplitude reasonable and preventing over-matching. The overall goal is to adjust the weighting matrix W so that the matching error between the preprocessed and sparsely reconstructed sub-blocks is minimized; the problem can be solved with various optimization algorithms, such as gradient descent or coordinate descent. The solved weighting matrix W reflects the importance of each pixel in the matching process, achieving the best match between the sparsely reconstructed and preprocessed sub-blocks and thereby finding the image sub-blocks related to the wire clamp features. By minimizing the matching error and introducing regularization terms, this formulation provides a new way of matching preprocessed and sparsely reconstructed image sub-blocks; it makes full use of the local information and sparse representation of the image and improves the accuracy and reliability of wire clamp detection.
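The sketch below illustrates this matching step under simplifying assumptions: the matching weight matrix W is taken as a fixed window rather than being optimised, and the best-matching position is chosen as the sub-block that minimises the weighted squared error plus the two regularisation terms described above; the parameter values are illustrative.

```python
# Block-matching sketch; W is assumed fixed (e.g. a Gaussian window) and the
# regularisation weights mu1, mu2 are illustrative values.
import numpy as np

def match_block(preprocessed: np.ndarray, recon_block: np.ndarray,
                W: np.ndarray, mu1: float = 0.01, mu2: float = 0.001):
    m = recon_block.shape[0]
    H, Wd = preprocessed.shape
    best_score, best_pos = np.inf, None
    # Both regularisation terms are constant over candidate positions here.
    reg = mu1 * np.sum(W ** 2) + mu2 * np.sum(recon_block ** 2)
    for r in range(H - m + 1):
        for c in range(Wd - m + 1):
            P = preprocessed[r:r + m, c:c + m]
            err = np.sum((W * (P - recon_block)) ** 2) + reg
            if err < best_score:
                best_score, best_pos = err, (r, c)
    return best_pos, best_score
```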
Example 5: the method for extracting the wire clamp features from the matched image sub-blocks in step 2.3 comprises: converting the matched image sub-blocks to greyscale; at each pixel position of an image sub-block, computing the differences between the centre pixel and its P neighbourhood pixels to obtain a sequence of P difference values; converting the difference sequence into binary digits by means of a step function to obtain a binary code; converting the resulting binary code string into a decimal number, which is the LBP feature value of that pixel; and applying this calculation to every pixel of the image sub-block to obtain the LBP feature map, which is used as the wire clamp feature f.

Specifically, the image sub-block to be processed, typically a local region of the preprocessed image containing a wire clamp, is first converted to greyscale; this turns a colour image into a grey-level image and simplifies subsequent processing. Each pixel position is then processed in turn. At each position a neighbourhood centred on the pixel is defined, typically P uniformly distributed neighbourhood pixels on a circle of radius R, and the grey-level differences between the centre pixel and its neighbours are computed, yielding a sequence of P difference values. The sign of each difference is then taken with a step function: if the neighbour's grey value is greater than or equal to that of the centre pixel, the corresponding position in the binary sequence is 1, otherwise it is 0. The resulting binary string is converted into a decimal number, which is the LBP feature value of that pixel. Repeating this for every pixel position in the image sub-block yields the LBP feature map of the whole sub-block; this map reflects the texture information at each pixel and describes the texture and detail characteristics of the sub-block. Finally, the obtained LBP feature map is taken as the wire clamp feature f, representing the texture information of the image sub-block; it reflects the texture structure and detail characteristics of the sub-block and provides an effective feature representation for subsequent wire clamp detection.
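As an illustration, the sketch below computes the LBP map of a matched sub-block with scikit-image, whose local_binary_pattern implements the same neighbour-comparison and binary-coding scheme described above; the neighbour count P and radius R are assumed parameter values.

```python
# LBP feature sketch; P and R are assumptions, not values from the patent.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern

def clip_lbp_feature(sub_block: np.ndarray, P: int = 8, R: float = 1.0) -> np.ndarray:
    """Return the LBP feature map of a matched image sub-block."""
    gray = rgb2gray(sub_block) if sub_block.ndim == 3 else sub_block
    lbp_map = local_binary_pattern(gray, P, R, method="default")  # per-pixel LBP codes
    return lbp_map  # flatten with lbp_map.ravel() to obtain the clamp feature vector f
```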
Example 6: the following minimization problem is solved to obtain the sparse coefficient vector of the wire clamp feature:

\min_{\alpha, r} \; \| f - D_f \alpha - r \|_2^2 + \tau_1 \| \alpha \|_1 + \tau_2 \| r \|_1

The objective consists of two parts: a data term and a sparsity constraint term. The data term \| f - D_f \alpha - r \|_2^2 is the squared error between the wire clamp feature f and the linear combination D_f \alpha plus the residual r, i.e. the sum of squared reconstruction errors; minimizing it lets the clamp feature be linearly reconstructed from the dictionary matrix D_f and the sparse coefficient vector \alpha while accounting for reconstruction error. The sparsity constraint term comprises the L1 penalties \tau_1 \| \alpha \|_1 and \tau_2 \| r \|_1 on the sparse coefficient vector \alpha and the residual vector r; introducing them makes \alpha and r contain few non-zero elements, giving a sparser representation, which improves the generalization ability and stability of the model while extracting the important information in the clamp feature. Solving this minimization problem, for example by gradient descent or coordinate descent, yields the optimal sparse coefficient vector \alpha and residual vector r and thereby the sparse coefficients and reconstruction of the wire clamp feature. Combining the data term with the sparsity constraint, i.e. considering linear reconstruction while imposing sparsity, extracts the sparse coefficients of the clamp feature with a degree of robustness and generalization, describes the characteristics and structure of the clamp more accurately, and supports the subsequent clamp detection and classification tasks.

Here f denotes the wire clamp feature, a column vector of size d \times 1 with d the feature dimension; D_f denotes the dictionary matrix of the wire clamp features, a matrix of size d \times k with k the number of basis vectors in the dictionary; \alpha denotes the sparse coefficient vector, a column vector of size k \times 1; r denotes the residual vector, of size d \times 1, used to capture errors in the sparse coding process; \| \cdot \|_1 denotes the L1 norm; and \tau_1 and \tau_2 are sparsity penalty weights. Based on the obtained sparse coefficient vector, the sparse coefficient of the wire clamp feature is obtained by solving the following minimization problem:

\min_{\alpha} \; \| f - D_f \alpha \|_2^2 + \gamma \| \alpha \|_1

where D_f is the dictionary matrix of the wire clamp features and \gamma is the penalty coefficient. The wire clamp feature is then reconstructed from the sparse coefficient, giving the reconstructed wire clamp feature vector:

\hat{f} = D_f \alpha

The objective of this second problem again consists of a data term and a sparsity constraint term. The data term \| f - D_f \alpha \|_2^2 is the squared Euclidean distance between the clamp feature f and the reconstruction D_f \alpha; minimizing it lets the clamp feature be linearly reconstructed from D_f and \alpha while accounting for reconstruction error. The sparsity term \gamma \| \alpha \|_1 penalizes the sparse coefficient vector so that it has few non-zero elements; adjusting the penalty coefficient \gamma controls the strength of the sparsity constraint and thus the sparsity of the representation, which improves the generalization ability and stability of the model while extracting the important information of the clamp feature. Solving this minimization problem, for example by gradient descent or coordinate descent, yields the optimal sparse coefficient vector \alpha and hence the sparse coefficient of the clamp feature. Starting from the previously obtained sparse coefficient vector, minimizing the reconstruction error under a sparsity constraint extracts the important information of the clamp feature with robustness and generalization, characterizes the clamp more accurately, and provides strong support for the wire clamp detection and classification tasks.
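The sketch below shows how such an l1-regularised coding step might be carried out in practice with scikit-learn's Lasso solver; the dictionary D_f, the penalty value gamma and the treatment of the residual are assumptions, and the Lasso objective differs from the formula above only by a constant scaling of the data term.

```python
# Sketch of sparse coding of the clamp feature f against an assumed dictionary D_f.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_code_clip_feature(f: np.ndarray, D_f: np.ndarray, gamma: float = 0.1):
    # Solve min_a ||f - D_f a||_2^2 + gamma * ||a||_1  (Lasso form of the problem above).
    lasso = Lasso(alpha=gamma, fit_intercept=False, max_iter=5000)
    lasso.fit(D_f, f)
    alpha = lasso.coef_              # sparse coefficient vector
    f_hat = D_f @ alpha              # reconstructed clamp feature vector
    residual = f - f_hat             # error absorbed by the residual term r
    return alpha, f_hat, residual
```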
Example 7: in step 2.5, wire clamp detection is performed on the reconstructed wire clamp feature vectors with a classifier based on a multi-kernel support vector machine. The multi-kernel support vector machine is trained as follows: given training samples \{(x_i, y_i)\}_{i=1}^{n}, where x_i is a feature vector and y_i \in \{+1, -1\} is the corresponding label, the objective function of the multi-kernel support vector machine is defined as:

\min_{w, b, \xi} \; \frac{1}{2} \sum_{m=1}^{d} \beta_m \| w_m \|^2 + C \sum_{i=1}^{n} \xi_i

The constraint conditions are:

y_i \Big( \sum_{m=1}^{d} \beta_m \, w_m^{\top} x_{i,m} + b \Big) \ge 1 - \xi_i, \qquad \xi_i \ge 0, \qquad i = 1, \dots, n

where d is the dimension of the feature vector, w_m is the weight vector for the elements of the m-th dimension, \beta_m is the weight of the elements of the m-th dimension, C is the regularization parameter, \xi_i is a slack variable, n is the number of training samples, and b is the bias of the multi-kernel support vector machine; the label y_i indicates whether the sample is a wire clamp ("yes") or not ("no").
Specifically, the objective function contains two parts: a regularization term and an error term. The regularization term controls the complexity of the model, while the error term measures classification errors; optimizing this objective yields a model that classifies the wire clamp features. The regularization term, \frac{1}{2} \sum_{m} \beta_m \| w_m \|^2, where w_m is the weight vector of the features, limits model complexity and avoids overfitting the training data by penalizing the size of the weight vectors: when the weight vectors grow, the regularization term grows, so the optimizer prefers smaller weight vectors and a simpler model. The error term, C \sum_{i} \xi_i, where C is the regularization parameter and \xi_i are slack variables, measures the degree to which training samples are misclassified; its value increases with the number of misclassified samples, and the regularization parameter C controls how strongly it influences the overall objective: a larger C makes the model focus on misclassified samples, while a smaller C makes it focus on model simplicity. The constraints ensure that the training samples are correctly classified and bound the model parameters: the first constraint requires the margin of each training sample to be at least 1 minus the slack variable \xi_i, so that samples are correctly classified up to the slack, and the second constraint restricts the slack variables to non-negative values so that they can relax the margin without becoming so large as to degrade the model. Overall, the multi-kernel support vector machine of this embodiment learns a model that classifies the wire clamp features by optimizing the objective under these constraints; it handles high-dimensional data and nonlinear relations effectively and improves the accuracy and robustness of wire clamp detection. With suitable parameters and kernel functions, the method adapts to different types of wire clamp detection tasks and copes with a variety of complex scenes and data distributions.
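A simplified sketch of such a classifier follows: several base kernels are combined as a fixed weighted sum and passed to scikit-learn's SVC with a precomputed kernel. Full multiple-kernel learning would also optimise the kernel weights beta jointly with the SVM; here the weights, kernel choices and parameter values are assumptions.

```python
# Simplified multi-kernel SVM sketch; kernel weights beta and kernel parameters
# are assumed fixed rather than learned.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel, polynomial_kernel

def combined_kernel(A, B, beta=(0.5, 0.3, 0.2)):
    return (beta[0] * rbf_kernel(A, B, gamma=0.1)
            + beta[1] * linear_kernel(A, B)
            + beta[2] * polynomial_kernel(A, B, degree=2))

def train_mkl_svm(X_train, y_train, C: float = 1.0):
    # y_train uses +1 (wire clamp) / -1 (not a clamp) labels, matching the text above.
    clf = SVC(C=C, kernel="precomputed")
    clf.fit(combined_kernel(X_train, X_train), y_train)
    return clf

# Prediction on new features: clf.predict(combined_kernel(X_test, X_train))
```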
Example 8: the target detection model in step 2 is obtained by training as follows: a convolutional neural network in deep learning is used as the target detection model, and the image corresponding to the rectangular box of the contact network wire clamp is predicted to obtain the probability distribution of wire clamp de-grooving; to account for multi-scale, multi-level feature fusion, a deep network structure with multiple convolution layers and pooling layers is used. The target detection model is expressed by the following formula:

\hat{y} = P(y = 1 \mid x) = \sigma\big( W_2 \, \mathrm{ReLU}( W_1 x + b_1 ) + b_2 \big)

where x denotes the input image, y denotes the wire clamp de-grooving label, W_1, W_2 and b_1, b_2 denote the weight matrices and biases of the model's hidden and output layers, \sigma denotes the sigmoid function, and \mathrm{ReLU} denotes the rectified linear unit.

Specifically, the formula gives the output probability of the target detection model, i.e. the probability distribution of wire clamp de-grooving; the principle of each part is explained step by step below.

First, x is the input image, namely the image corresponding to the rectangular box of the contact network wire clamp. It passes through a series of convolution and pooling layers that progressively extract the feature information in the image: the convolution layers extract local features, and the pooling layers reduce the dimensionality and amount of computation while retaining the main feature information. The features are then transformed linearly by the hidden layer and activated with the ReLU function; the linear transformation is realized through the weight matrix W_1 and the bias vector b_1, maps the input features to a higher-dimensional space, and introduces a nonlinearity that strengthens the expressive power of the features. The ReLU activation adds nonlinearity throughout the network and alleviates the vanishing-gradient problem of traditional neural networks, making the model more stable and effective. Finally, the probability of wire clamp de-grooving is obtained through the linear transformation of the output layer, realized through the weight matrix W_2 and the bias vector b_2, followed by the sigmoid function, which maps the result into the interval from 0 to 1 and represents the probability of de-grooving. The model's output therefore indicates directly whether the wire clamp has come out of its groove, which facilitates intelligent de-grooving detection. The whole process relies on convolutional neural network technology from deep learning, which has been highly successful in image recognition and target detection: with large amounts of image data and a sophisticated network structure, a CNN can learn complex image features and classify and recognize images efficiently. The target detection model of this embodiment uses this technology to realize automatic detection of wire clamp de-grooving and provides an effective guarantee for the safe operation of the railway overhead contact system. A threshold is set, the de-grooving probability is compared with it, and if the probability exceeds the threshold the wire clamp is judged to be de-grooved.
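A minimal PyTorch sketch of such a model is given below; the layer sizes, channel counts and pooling configuration are assumptions rather than values from the patent, and the final sigmoid output corresponds to the de-grooving probability in the formula above.

```python
# Minimal CNN sketch of the detection head; all layer sizes are assumed values.
import torch
import torch.nn as nn

class ClampDegroovingNet(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(          # convolution + pooling feature extractor
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.hidden = nn.Linear(32 * 8 * 8, 128)  # W1, b1
        self.out = nn.Linear(128, 1)              # W2, b2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        h = torch.relu(self.hidden(z))            # ReLU(W1 x + b1)
        return torch.sigmoid(self.out(h))         # sigma(W2 h + b2): de-grooving probability
```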
Example 9: the target detection model uses a cross-entropy loss function to compute the loss between the prediction result and the true label, and a regularization term is introduced to prevent overfitting; the loss function is expressed using the following formula:

$$L=-\frac{1}{N}\sum_{i=1}^{N}\Bigl[y_{i}\log f(x_{i})+(1-y_{i})\log\bigl(1-f(x_{i})\bigr)\Bigr]+\lambda\sum_{l}\bigl\|W^{(l)}\bigr\|_{F}^{2}$$

where $N$ denotes the number of training samples, $x_{i}$ denotes the input image of the $i$-th sample, $y_{i}$ denotes its true label, $f(x_{i})$ denotes the predicted ungrooving probability, $\lambda$ is a regularization parameter, and $l$ indexes the layers of the network.
Specifically, the cross-entropy loss function is a common loss function used to measure the difference between the predictions of a classification model and the true labels. For a classification problem such as wire clamp ungrooving detection, the cross-entropy loss describes well the gap between the probability distribution output by the model and the true labels: for each sample, the cross-entropy contributions of the positive and negative cases are computed and summed to give the overall loss. In the formula, $y_{i}$ denotes the true label of the $i$-th sample, $x_{i}$ the corresponding input image, and $f(x_{i})$ the model's predicted probability for that sample, i.e. the probability distribution of wire clamp ungrooving. The cross-entropy loss function measures the prediction accuracy of the model by comparing its prediction with the true label: the more the prediction agrees with the true label, the smaller the loss; the more it disagrees, the larger the loss. By minimizing the cross-entropy loss, a model with accurate predictions can be trained, thereby realizing intelligent detection of wire clamp ungrooving. In addition to the cross-entropy loss, a regularization term is introduced into the formula to prevent overfitting. Overfitting refers to the phenomenon that a model performs well on the training set but poorly on the test set, i.e. the model learns the features of the training set too specifically and ignores the generality of the data. To solve the overfitting problem, the regularization term limits the complexity of the model. In the formula, $\lambda$ is a regularization parameter that controls the weight of the regularization term in the overall loss function, and $\|W^{(l)}\|_{F}^{2}$ is the squared Frobenius norm of the weight matrix of the $l$-th layer of the network. By minimizing the regularization term, the size of the model parameters is limited and the complexity of the model is reduced, which improves its generalization ability. In summary, the loss function of embodiment 9 combines the cross-entropy loss and the regularization term; by minimizing it, a target detection model with strong generalization ability and accurate predictions can be trained for intelligent detection of wire clamp ungrooving. The model plays an important role in the safe operation of the railway contact network and improves the accuracy and robustness of wire clamp ungrooving detection.
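For illustration only, the following sketch computes the loss described above: mean binary cross-entropy plus an L2 (squared Frobenius norm) penalty over the weight matrices. The value of the regularization parameter lam is an illustrative assumption.

```python
# Cross-entropy + L2 regularization, matching the loss described in embodiment 9.
import torch

def detection_loss(probs, labels, model, lam=1e-4):
    eps = 1e-7
    probs = probs.clamp(eps, 1 - eps)
    # Cross-entropy term: penalizes disagreement between prediction and label.
    ce = -(labels * probs.log() + (1 - labels) * (1 - probs).log()).mean()
    # Regularization term: sum of squared Frobenius norms of the weight matrices.
    reg = sum((p ** 2).sum() for name, p in model.named_parameters() if "weight" in name)
    return ce + lam * reg
```

Here probs would be the sigmoid outputs of the model sketched in embodiment 8 and labels the 0/1 ungrooving annotations; lam plays the role of the regularization parameter in the formula above.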
Example 10: a stochastic gradient descent (SGD) optimization algorithm is used to minimize the loss function of the target detection model and to update the model parameters:

$$W_{\text{new}}=W_{\text{old}}-\eta\frac{\partial L}{\partial W},\qquad b_{\text{new}}=b_{\text{old}}-\eta\frac{\partial L}{\partial b}$$

where $\eta$ is the learning rate, $\frac{\partial L}{\partial W}$ and $\frac{\partial L}{\partial b}$ are the gradients of the loss function with respect to the weight matrix and the bias, $W_{\text{old}}$ is the weight matrix before the update, $W_{\text{new}}$ is the updated weight matrix, $b_{\text{old}}$ is the bias before the update, and $b_{\text{new}}$ is the updated bias.
Specifically, in deep learning the training process of a model usually revolves around a loss function that measures the difference between the model's predictions and the true labels. In embodiment 10 the loss function is the cross-entropy loss with an added regularization term: the cross-entropy part measures the difference between the model's prediction for each sample and the true label, while the regularization part controls the complexity of the model and prevents overfitting. In each iteration, SGD randomly selects a sample from the training set and computes the gradient of the loss function for that sample. The parameters of the model are then updated in the direction opposite to the gradient, so that the loss function gradually decreases along the direction of gradient descent. In this way, through repeated iterations, the parameters of the model are continually adjusted and gradually approach the optimal solution. The formula gives the specific procedure for the parameter update: the weight matrix $W$ and the bias vector $b$ are updated in every iteration according to the gradient of the loss function, with a step size determined by the learning rate $\eta$; the larger the learning rate, the larger the magnitude of the parameter update, and vice versa. By iterating these updates, the model gradually converges towards the minimum of the loss function, thereby achieving the goal of training. Notably, the learning rate in the SGD algorithm is an important hyperparameter that directly affects the convergence speed and performance of the model. Selecting a suitable learning rate is critical for training, since a learning rate that is too large or too small can lead to unstable training or excessively slow convergence.
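For illustration only, the following sketch applies the plain update rule described above, reusing the model and loss sketched in embodiments 8 and 9; the learning rate eta and the single toy mini-batch are assumptions made for the sketch.

```python
# Manual SGD step: each parameter moves against its gradient by a step of size eta.
import torch

eta = 0.01
loader = [(torch.randn(8, 3, 64, 64), torch.rand(8).round())]  # one toy mini-batch
for images, labels in loader:
    probs = model(images).squeeze(1)                 # model from the embodiment-8 sketch
    loss = detection_loss(probs, labels, model)      # loss from the embodiment-9 sketch
    model.zero_grad()
    loss.backward()                                  # gradients dL/dW and dL/db
    with torch.no_grad():
        for p in model.parameters():
            p -= eta * p.grad                        # W <- W - eta * dL/dW, b <- b - eta * dL/db
```

In practice the same update is usually delegated to torch.optim.SGD, which implements exactly this rule (optionally with momentum).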
While specific embodiments of the present invention have been described above, it will be understood by those skilled in the art that these specific embodiments are by way of example only, and that various omissions, substitutions, and changes in the form and details of the methods and systems described above may be made by those skilled in the art without departing from the spirit and scope of the invention. For example, it is within the scope of the present invention to combine the above-described method steps to perform substantially the same function in substantially the same way to achieve substantially the same result. Accordingly, the scope of the invention is limited only by the following claims.

Claims (10)

1. An intelligent detection method for the phenomenon of contact wire clip ungrooving is characterized by comprising the following steps:
Step 1: acquiring an original image of the overhead line system through an overhead line system suspension state monitoring device arranged on an overhead line system operation vehicle, and preprocessing the original image to remove noise and obtain a preprocessed image;
Step 2: detecting the contact network wire clamp in the preprocessed image to obtain a contact network wire clamp rectangular frame; then performing wire clamp ungrooving detection on the image corresponding to the contact network wire clamp rectangular frame by using a preset target detection model to obtain a detection result;
Step 3: and visualizing the detection result in the original image.
2. The intelligent detection method for the contact network cable clamp de-grooving phenomenon according to claim 1, wherein in step 2, the method for detecting the contact network wire clamp in the preprocessed image to obtain the rectangular frame of the contact network wire clamp comprises the following steps:
Step 2.1: through sparse coding dictionary learning, the preprocessed image is expressed as a linear combination of sparse coefficients;
Step 2.2: reconstructing the dictionary by using the sparse coefficient matrix to obtain a sparse reconstructed image block;
Step 2.3: performing block matching on the sparse reconstructed image block and the preprocessed image to find out an image sub-block related to the wire clamp characteristic; extracting wire clamp features from the matched image sub-blocks;
Step 2.4: performing sparse coding on the extracted wire clamp characteristics to obtain sparse coefficients of the wire clamp characteristics; reconstructing the wire clamp characteristics by using the sparse coefficient to obtain a reconstructed wire clamp characteristic vector;
Step 2.5: performing wire clamp detection on the reconstructed wire clamp feature vector by using a classifier; and, according to the detection result, locating the wire clamp in the preprocessed image to obtain the rectangular frame of the contact network wire clamp.
3. The intelligent detection method for the contact network cable clamp de-grooving phenomenon according to claim 2, wherein in step 2.1 the preprocessed image is represented as a linear combination of sparse coefficients through sparse coding dictionary learning by using the following formula:

$$\min_{D,A,E}\;\|X-DA-E\|_{F}^{2}+\lambda_{1}\|A\|_{1}+\lambda_{2}\|A\|_{*}+\lambda_{3}\|E\|_{1}$$

wherein $X$ denotes the image block matrix of the preprocessed image, $D$ denotes the dictionary matrix, $A$ denotes the sparse coefficient matrix, and $E$ denotes the sparse noise matrix; $\lambda_{1}$ is the first sparsity and is a set value; $\lambda_{2}$ is the second sparsity and is a set value; $\lambda_{3}$ is the third sparsity and is a set value; $\|\cdot\|_{F}$ denotes the Frobenius norm; $\|\cdot\|_{*}$ denotes the nuclear norm, the nuclear norm of a matrix being the sum of its singular values; by solving the minimization problem, the sparse coefficient matrix $A$ is obtained as the linear combination of sparse coefficients.
4. The intelligent detection method for the contact network cable clamp de-grooving phenomenon according to claim 3, wherein in step 2.2 the sparse reconstructed image block is obtained by reconstructing from the dictionary with the sparse coefficient matrix according to the following formula:

$$\hat{X}=DA$$
In step 2.3, the sparse reconstructed image block is block-matched with the preprocessed image by solving the following minimization problem, so as to find the image sub-blocks related to the wire clamp features:

$$\min_{M}\;\sum_{i=1}^{k}\sum_{j=1}^{k}\bigl(P_{ij}-M_{ij}Q_{ij}\bigr)^{2}+\mu_{1}\|M\|_{2}^{2}+\mu_{2}\|M\|_{1}$$

wherein $P$ denotes an image sub-block of the preprocessed image and is a matrix of size $k\times k$, with $k$ denoting the side length of the image block; $Q$ denotes a sub-block of the sparse reconstructed image block and has the same dimensions as $P$; $M$ denotes the matching weight matrix and is a matrix of size $k\times k$; $\|\cdot\|_{2}$ denotes the Euclidean norm; $\mu_{1}$ denotes the first regularization parameter; $\mu_{2}$ denotes the second regularization parameter; $i$ and $j$ are subscript indices taking integer values from 1 to $k$.
5. The intelligent detection method for the wire clip ungrooving phenomenon of the contact network cable according to claim 4, wherein the method for extracting the wire clamp feature from the matched image sub-blocks in step 2.3 comprises the following steps: carrying out graying treatment on the matched image sub-blocks; at each pixel location of an image sub-block, calculating the difference between the pixel and each of its neighbourhood pixels to obtain a sequence of difference values; converting the difference sequence into binary numbers according to a step function to obtain a binary code; converting the obtained binary code string into a decimal number, which is the LBP feature value of the pixel; performing this calculation for every pixel of the image sub-block to obtain an LBP feature map, which is used as the wire clamp feature $f$ (a minimal sketch of this LBP computation is given after the claims).
6. The intelligent detection method for the contact network cable clamp de-grooving phenomenon according to claim 5, wherein the following minimization problem is solved to obtain a sparse coefficient vector for the wire clamp feature:

$$\min_{\alpha,R}\;\|f-D_{f}\alpha-R\|_{2}^{2}+\|\alpha\|_{1}+\|R\|_{1}$$

wherein $f$ denotes the wire clamp feature and is a column vector of size $d\times 1$, with $d$ denoting the dimension of the feature; $D_{f}$ denotes the dictionary matrix of the wire clamp features and is a matrix of size $d\times K$, with $K$ denoting the number of basis vectors in the dictionary; $\alpha$ denotes the sparse coefficient vector and is a column vector of size $K\times 1$; $R$ denotes the residual matrix, of size $d\times 1$, used to capture errors in the sparse coding process; $\|\cdot\|_{1}$ denotes the $\ell_{1}$ norm of a matrix; based on the obtained sparse coefficient vector, the sparse coefficient $\hat{\alpha}$ of the wire clamp feature is obtained by solving the following minimization problem:

$$\hat{\alpha}=\arg\min_{\alpha}\;\|f-D_{f}\alpha\|_{2}^{2}+\|\alpha\|_{1}$$

wherein $D_{f}$ denotes the dictionary matrix of the wire clamp features; the wire clamp feature is then reconstructed from the sparse coefficient by using the following formula, giving the reconstructed wire clamp feature vector $\hat{f}$:

$$\hat{f}=D_{f}\hat{\alpha}$$
7. The intelligent detection method for the wire clip ungrooving phenomenon of the contact network according to claim 6, wherein in step 2.5 the wire clamp detection is performed on the reconstructed wire clamp feature vector by using a classifier based on a multi-core support vector machine; the multi-core support vector machine is trained by the following process: given training samples $\{(x_{i},y_{i})\}_{i=1}^{n}$, where $x_{i}$ is a feature vector and $y_{i}$ is the corresponding label with $y_{i}\in\{+1,-1\}$, the objective function of the multi-core support vector machine is defined as:

$$\min_{w,b,\xi}\;\frac{1}{2}\sum_{m=1}^{M}\beta_{m}\|w_{m}\|^{2}+C\sum_{i=1}^{n}\xi_{i}$$

The constraint conditions are as follows:

$$y_{i}\Bigl(\sum_{m=1}^{M}\beta_{m}w_{m}^{\top}x_{i}^{(m)}+b\Bigr)\ge 1-\xi_{i},\qquad \xi_{i}\ge 0,\qquad i=1,\dots,n$$

wherein $M$ is the dimension of the feature vector, $w_{m}$ is the weight vector for the elements of the $m$-th dimension, $\beta_{m}$ is the weight of the elements of the $m$-th dimension, $C$ is a regularization parameter, $\xi_{i}$ is a relaxation variable, and $n$ is the number of training samples; $b$ is the bias of the multi-core support vector machine; the label $y_{i}$ comprises: yes ($+1$, ungrooved) or no ($-1$, not ungrooved).
8. The intelligent detection method for the contact network cable clamp de-grooving phenomenon according to claim 7, wherein the target detection model in step 2 is obtained through training in the following process: a convolutional neural network from deep learning is used as the target detection model, and prediction is performed on the image corresponding to the contact network wire clamp rectangular frame to obtain the probability distribution of wire clamp ungrooving; to account for multi-scale and multi-level feature fusion, a deep network structure with multiple convolution layers and pooling layers is used; the target detection model is expressed using the following formula:

$$P(y=1\mid x)=\sigma\bigl(W_{2}\,\mathrm{ReLU}(W_{1}x+b_{1})+b_{2}\bigr)$$

wherein $x$ denotes the input image, $y$ denotes the wire clamp ungrooving label, $W_{1},W_{2}$ and $b_{1},b_{2}$ respectively denote the weight matrices and bias vectors of the model, $\sigma(\cdot)$ denotes the sigmoid function, and $\mathrm{ReLU}(\cdot)$ denotes the rectified linear unit.
9. The intelligent detection method for the contact network cable clamp de-grooving phenomenon according to claim 8, wherein the target detection model uses a cross-entropy loss function to calculate the loss between the prediction result and the true label, and a regularization term is introduced to prevent overfitting; the loss function is expressed using the following formula:

$$L=-\frac{1}{N}\sum_{i=1}^{N}\Bigl[y_{i}\log f(x_{i})+(1-y_{i})\log\bigl(1-f(x_{i})\bigr)\Bigr]+\lambda\sum_{l}\bigl\|W^{(l)}\bigr\|_{F}^{2}$$

wherein $N$ denotes the number of training samples, $x_{i}$ denotes the input image of the $i$-th sample, $y_{i}$ denotes its true label, $f(x_{i})$ denotes the predicted ungrooving probability, $\lambda$ is a regularization parameter, and $l$ indexes the layers of the network.
10. The intelligent detection method for the contact network cable clamp de-grooving phenomenon according to claim 9, wherein a stochastic gradient descent optimization algorithm is used to minimize the loss function of the target detection model and to update the model parameters:

$$W_{\text{new}}=W_{\text{old}}-\eta\frac{\partial L}{\partial W},\qquad b_{\text{new}}=b_{\text{old}}-\eta\frac{\partial L}{\partial b}$$

wherein $\eta$ is the learning rate, $\frac{\partial L}{\partial W}$ and $\frac{\partial L}{\partial b}$ are the gradients of the loss function with respect to the weight matrix and the bias, $W_{\text{old}}$ is the weight matrix before the update, $W_{\text{new}}$ is the updated weight matrix, $b_{\text{old}}$ is the bias before the update, and $b_{\text{new}}$ is the updated bias.
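For illustration only, the following minimal sketch (referenced in claim 5 above) computes an LBP feature map in the manner described there, using an 8-pixel neighbourhood as one common choice; the neighbourhood size and the toy image patch are assumptions, not values fixed by the claims.

```python
# Minimal LBP feature-map computation for a grayscale image patch.
import numpy as np

def lbp_map(gray):
    """Return the LBP feature map of a 2-D grayscale array."""
    h, w = gray.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # Offsets of the 8 neighbours, ordered clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            center = gray[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                # Step function: bit is 1 if the neighbour is >= the centre pixel.
                if gray[i + di, j + dj] >= center:
                    code |= 1 << bit
            out[i, j] = code          # decimal LBP value of this pixel
    return out

patch = np.random.default_rng(0).integers(0, 256, size=(16, 16)).astype(np.uint8)
print(lbp_map(patch).shape)           # LBP feature map used as the clamp feature
```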
CN202410545567.0A 2024-05-06 2024-05-06 Intelligent detection method for contact wire clip slotter Active CN118134915B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410545567.0A CN118134915B (en) 2024-05-06 2024-05-06 Intelligent detection method for contact wire clip slotter

Publications (2)

Publication Number Publication Date
CN118134915A true CN118134915A (en) 2024-06-04
CN118134915B CN118134915B (en) 2024-07-16

Family

ID=91243079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410545567.0A Active CN118134915B (en) 2024-05-06 2024-05-06 Intelligent detection method for contact wire clip slotter

Country Status (1)

Country Link
CN (1) CN118134915B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120251013A1 (en) * 2011-03-31 2012-10-04 Fatih Porikli Method for Compressing Textured Images
CN110503614A (en) * 2019-08-20 2019-11-26 东北大学 A kind of Magnetic Resonance Image Denoising based on sparse dictionary study
CN112907449A (en) * 2021-02-22 2021-06-04 西南大学 Image super-resolution reconstruction method based on deep convolution sparse coding
CN117698438A (en) * 2022-09-15 2024-03-15 Ip传输控股公司 Method for controlling operation of a vehicle system
CN115953584A (en) * 2023-01-30 2023-04-11 盐城工学院 End-to-end target detection method and system with learnable sparsity
CN116843686A (en) * 2023-08-31 2023-10-03 成都考拉悠然科技有限公司 Method and device for detecting defects of wire clamps and nuts of contact net locator
CN117314900A (en) * 2023-11-28 2023-12-29 诺比侃人工智能科技(成都)股份有限公司 Semi-self-supervision feature matching defect detection method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIANJIE WU et al.: "An automatic text generation algorithm of technical disclosure for catenary construction based on knowledge element model", Advanced Engineering Informatics, 30 April 2023 (2023-04-30), pages 1-22 *
王佳祺 (WANG Jiaqi): "Research on detection of key catenary components and foreign objects based on convolutional neural network and sparse coding", China Master's Theses Full-text Database, Information Science and Technology, 15 October 2018 (2018-10-15), pages 138-782 *

Also Published As

Publication number Publication date
CN118134915B (en) 2024-07-16

Similar Documents

Publication Publication Date Title
CN108765412B (en) Strip steel surface defect classification method
CN111832608B (en) Iron spectrum image multi-abrasive particle identification method based on single-stage detection model yolov3
CN116579616B (en) Risk identification method based on deep learning
Zheng et al. Tire defect classification using a deep convolutional sparse-coding network
CN117123131B (en) Petroleum aid production equipment and method thereof
CN109241870B (en) Coal mine underground personnel identity identification method based on gait identification
CN115294563A (en) 3D point cloud analysis method and device based on Transformer and capable of enhancing local semantic learning ability
CN114067286A (en) High-order camera vehicle weight recognition method based on serialized deformable attention mechanism
CN115527072A (en) Chip surface defect detection method based on sparse space perception and meta-learning
CN115861226A (en) Method for intelligently identifying surface defects by using deep neural network based on characteristic value gradient change
Zhang et al. Rethinking unsupervised texture defect detection using PCA
CN110321890B (en) Digital instrument identification method of power inspection robot
Araar et al. Traffic sign recognition using a synthetic data training approach
Zhang Application of artificial intelligence recognition technology in digital image processing
CN118134915B (en) Intelligent detection method for contact wire clip slotter
CN116580285A (en) Railway insulator night target identification and detection method
CN115112669B (en) Pavement nondestructive testing identification method based on small sample
CN114926702B (en) Small sample image classification method based on depth attention measurement
CN113537240B (en) Deformation zone intelligent extraction method and system based on radar sequence image
CN114943862A (en) Two-stage image classification method based on structural analysis dictionary learning
CN114842183A (en) Convolutional neural network-based switch state identification method and system
Lin et al. Fabric defect detection based on multi-input neural network
CN112418085A (en) Facial expression recognition method under partial shielding working condition
Shao et al. Generative Adversial Network Enhanced Bearing Roller Defect Detection and Segmentation
Rao et al. Markov random field classification technique for plant leaf disease detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant