CN115908400A - Intelligent detection and positioning method for track irregularity and diseases - Google Patents


Publication number
CN115908400A
CN115908400A (application CN202211726114.5A)
Authority
CN
China
Prior art keywords: track, irregularity, disease, image, point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211726114.5A
Other languages
Chinese (zh)
Inventor
赵凡
林日扬
刘茜
晁宇
张志伟
钟玉柱
王竞敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Province Hanjiang To Weihe River Valley Water Diversion Project Construction Co ltd
Xian University of Technology
Original Assignee
Shaanxi Province Hanjiang To Weihe River Valley Water Diversion Project Construction Co ltd
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi Province Hanjiang To Weihe River Valley Water Diversion Project Construction Co ltd, Xian University of Technology filed Critical Shaanxi Province Hanjiang To Weihe River Valley Water Diversion Project Construction Co ltd
Priority to CN202211726114.5A
Publication of CN115908400A
Legal status: Pending

Classifications
    • Y02T 10/40 — Engine management systems (Y: general tagging of new technological developments; Y02T: climate change mitigation technologies related to transportation; Y02T 10/00: road transport of goods or passengers; Y02T 10/10: internal combustion engine [ICE] based vehicles)


Abstract

The invention discloses an intelligent detection and positioning method for track irregularity and track defects ("diseases"), aimed at the problems of the rail monitoring systems currently on the market: they require manual cooperation, offer a low degree of intelligence and integration, perform poorly at dynamic monitoring and perception, and incur high labor costs. On the one hand, three-dimensional point clouds of the track scene are analysed to detect and locate irregularity; on the other hand, an image-target deep-learning detection model detects and locates rail-surface defects such as stripping, fish-scale damage, spalling and corrugation, so that track irregularity and defects can be detected and analysed accurately and quickly, solving the problem of rapid defect detection and positioning while the track inspection vehicle is running. Compared with manual inspection, the method is more reliable, efficient and safe.

Description

Intelligent detection and positioning method for track irregularity and diseases
Technical Field
The invention belongs to the technical field of the fusion of image processing, three-dimensional reconstruction and deep-network technologies, and particularly relates to an intelligent detection and positioning method for track irregularity and track defects.
Background
In railway operations, safety has always come first. Because a high-speed train travels at very high speed, even a minor safety incident can have immeasurably serious consequences. A highly smooth track is an important guarantee of high-speed operation, but as the frequency and amplitude of the impacts a high-speed train exerts on the track structure grow over long-term operation, the smoothness of the track inevitably degrades, creating potential safety hazards. In addition, rail-surface damage and fractures caused by natural disasters and ordinary use are also major hidden dangers to railway track safety. Detecting and locating rail defects and irregularity is therefore a core task for high-speed railway engineering departments.
In railway track inspection, the most commonly used approach, dynamic detection by large track inspection vehicles, runs only monthly or quarterly, so its cost is high, its efficiency is low, and it requires professional operators. Static detection with manual tools or a track geometry trolley is cheap and simple to operate but easily affected by environmental factors. A three-level control network is expensive to build and maintain, takes long to construct, involves a heavy workload, short maintenance-window ("skylight") time and low measurement efficiency, and is uneconomical for low-speed lines, so a new inspection scheme with high measurement accuracy and high operating speed is urgently needed.
The railway rail irregularity and defect monitoring systems currently on the market require manual cooperation, offer a low degree of intelligence and integration, perform poorly at dynamic monitoring and perception, and incur high labor costs.
Disclosure of Invention
The invention aims to provide an intelligent detection and positioning method for track irregularity and track defects, which can realize accurate and rapid detection and analysis of track defects.
The invention adopts the technical scheme that the intelligent detection and positioning method for the track irregularity and the diseases specifically comprises the following steps:
step 1, building a data positioning system and a data acquisition system on a T-shaped track inspection vehicle, and using the data positioning system to acquire the position of the inspection vehicle in the WGS84 coordinate system as {(P_t, B_t)}, where P_t and B_t are respectively the geodetic longitude and latitude of the inspection vehicle at time t, 1 ≤ t ≤ T, and T is the total acquisition time and also the number of acquired samples;
step 2, on the moving track inspection vehicle, using a trinocular camera to collect the track image sequence at time t from three viewpoints, {I_t^l, I_t^m, I_t^r}, i.e. the left-, middle- and right-viewpoint images of the track at time t, and storing them respectively to obtain the track image sequence;
step 3, taking the track image sequence as input and generating the 3D point cloud C_t of the track scene at time t with a three-dimensional point-cloud generation method; at the same time obtaining the projection relation matrices M_t^l, M_t^m and M_t^r from C_t to each viewpoint image, as well as the homography matrices H_t^ml and H_t^mr between the middle-viewpoint image and the left and right viewpoint images;
Step 4, taking the middle viewpoint track image of the t moment
Figure BDA0004030012720000028
As input, two side lines of the left track are respectively extracted by means of a straight line extraction method>
Figure BDA0004030012720000029
And two side lines of the right track>
Figure BDA00040300127200000210
Pick up>
Figure BDA00040300127200000211
And &>
Figure BDA00040300127200000212
Is greater than or equal to>
Figure BDA00040300127200000213
Is at>
Figure BDA00040300127200000214
Evenly sampling K points on the upper part to form a point set>
Figure BDA00040300127200000215
Multiplied by the homography matrix between the middle and left view images>
Figure BDA00040300127200000216
Get the left viewpoint image->
Figure BDA00040300127200000217
On projection point set->
Figure BDA00040300127200000218
Handle point set
Figure BDA00040300127200000219
Projection relation matrix->
Figure BDA00040300127200000220
And &>
Figure BDA00040300127200000221
As an input, an open-source opencv-python library function cv2.TriangulatePoints () is called, and an output is got ^ or>
Figure BDA00040300127200000222
Corresponding set of 3D coordinates->
Figure BDA00040300127200000223
Extraction of Lr t 1 And Lr t 2 Central line Lr of t m At Lr, at t m K points are uniformly sampled to form a point set { pr t m,1 ,…,pr t m,k ,…,pr t m,K Get the point set { pr ] by the same method t m,1 ,…,pr t m,k ,…,pr t m,K } corresponding 3D coordinate ^ or>
Figure BDA0004030012720000031
Step 5, marking the track width variable as W s Extracting to assemble
Figure BDA0004030012720000032
In the center of each point with W s Is a cube with a side length and belongs to a scene 3D point cloud>
Figure BDA0004030012720000033
All points in the interior constitute a point set>
Figure BDA0004030012720000034
Wherein->
Figure BDA0004030012720000035
Is/is>
Figure BDA0004030012720000036
A point set extracted for the center point of the cube>
Figure BDA0004030012720000037
Is composed of
Figure BDA0004030012720000038
N of (1) k Dot, N k Is->
Figure BDA0004030012720000039
The number of the inner points is greater or less>
Figure BDA00040300127200000310
Are respectively in>
Figure BDA00040300127200000311
X, Y, Z coordinate of (4), (4)>
Figure BDA00040300127200000312
An initial point cloud of a left track; in the same way, an initial point cloud @ of the right track is obtained>
Figure BDA00040300127200000313
Step 6, recording the variable of the track elevation as H s Performing height filtering on the initial point cloud of the left track, and performing height filtering on the point cloud of the right track by the same method; the updated left and right track point clouds obtained after filtering are respectively
Figure BDA00040300127200000314
And &>
Figure BDA00040300127200000315
Step 7, carrying out point cloud on the left and right tracks at the moment t
Figure BDA00040300127200000316
And { SPr t 1 ,…,SPr t k ,…,SPr t K Analyzing data, judging whether the track has irregularity, and if the track has irregularity, judging the irregularity type C t Parameter of irregularity PS t And the irregularity position P t ns Recording;
step 8, computing the GPS position corresponding to the irregularity position P_t^ns;
Step 9, utilizing the collected track image data to make a track disease data set, training a target detector based on stripping, fish scaling, spalling and corrugation of a track of a YOLOv7 network structure, and obtaining a track disease detection Model ill
Step 10, the orbit image at the time t
Figure BDA00040300127200000318
Model for disease detection ill Py under python conditions is invoked as input with the detection program train under file path Project/yolov7, outputting a disease @>
Figure BDA00040300127200000319
In (b) position>
Figure BDA00040300127200000320
And a category +>
Figure BDA00040300127200000321
Will be/are>
Figure BDA00040300127200000322
Multiply by a homography matrix>
Figure BDA00040300127200000323
Get->
Figure BDA00040300127200000324
On the left viewpoint imageProjection point->
Figure BDA00040300127200000325
To be->
Figure BDA00040300127200000326
And &>
Figure BDA00040300127200000327
And a 3D to 2D projection matrix +>
Figure BDA00040300127200000328
And &>
Figure BDA00040300127200000329
As an input, an open-source opencv-python library function cv2.TriangulatePoints () is called, and an output ^ is expressed>
Figure BDA00040300127200000330
Corresponding 3D coordinates P t ill
Step 11, calculating the defect position P obtained at the time t of the rail inspection vehicle by using the same method as the step 8 t ill In 3D coordinates of (a) and (b)
Figure BDA0004030012720000041
Step 12, outputting the irregularity type C obtained by the detection of the rail inspection vehicle at the moment t t Parameter of irregularity PS t Unsmooth GPS position
Figure BDA0004030012720000042
Disease type detected at t moment of output rail inspection vehicle>
Figure BDA0004030012720000043
And GPS position->
Figure BDA0004030012720000044
The invention is also characterized in that:
the specific steps of step 4 are as follows:
step 4.1, taking the track image I_t^m as input and calling the Python opencv-python library function cv2.imread() to read I_t^m as a grayscale image G_t^m;
step 4.2, taking the grayscale image G_t^m as input and calling the opencv-python library function cv2.GaussianBlur() to apply Gaussian filtering to G_t^m, obtaining the filtered image F_t^m;
step 4.3, taking the filtered image F_t^m as input and calling the opencv-python library function cv2.Canny() to perform edge detection on F_t^m, obtaining the edge image E_t^m;
step 4.4, taking the edge image E_t^m as input and calling the opencv-python library function cv2.HoughLinesP() to extract the rail lines from E_t^m, obtaining the left and right side lines Ll_t^1, Ll_t^2 of the left rail and the left and right side lines Lr_t^1, Lr_t^2 of the right rail; the two end points of these 4 lines are represented respectively as (pl_t^{1,s}, pl_t^{1,e}), (pl_t^{2,s}, pl_t^{2,e}), (pr_t^{1,s}, pr_t^{1,e}) and (pr_t^{2,s}, pr_t^{2,e}), where the superscript s denotes the end point closer to the camera and e the end point farther from the camera;
step 4.5, computing the centre line of the left rail's side lines: computing the midpoint pl_t^{m,s} of the two end points pl_t^{1,s} and pl_t^{2,s}, and the midpoint pl_t^{m,e} of the two end points pl_t^{1,e} and pl_t^{2,e}; taking pl_t^{m,s} and pl_t^{m,e} as input and calling a straight-line extraction algorithm getLineEquation() to obtain the centre line Ll_t^m of the two side lines of the left rail, i.e. the coefficients (a, b, c) of the general equation of Ll_t^m, expressed as ax + by + c = 0; obtaining the centre line Lr_t^m of the two side lines of the right rail in the same way;
Step 4.6 at
Figure BDA00040300127200000510
Evenly sampling K points on the upper part to form a point set>
Figure BDA00040300127200000511
Step 4.7, set points
Figure BDA00040300127200000512
The inner points are each multiplied by a homography matrix->
Figure BDA00040300127200000513
Get its picture at left viewpoint->
Figure BDA00040300127200000514
The projection point set in (4)>
Figure BDA00040300127200000515
Step 4.8, the point set on the middle viewpoint image
Figure BDA00040300127200000516
Projection point set on left viewpoint image>
Figure BDA00040300127200000517
Projection relation matrix->
Figure BDA00040300127200000518
As an input, calling the opencv-python library function cv2.Triangulatepoints () of an open source, and outputting a 3D coordinate set ^ based on the left track>
Figure BDA00040300127200000519
Step 4.9, at Lr t m Uniformly sampling K points to form a point set { pr t m,1 ,…,pr t m,k ,…,pr t m,K And (6) repeating the operation of the steps 4.7-4.8, and outputting a point set { pr } t m,1 ,…,pr t m,k ,…,pr t m,K The corresponding set of 3D coordinates { Pr } t 1 ,…,Pr t k ,…,Pr t K }。
The height filtering process of the initial point cloud of the left rail in step 6 is: for every point of SPl_t^k, if its Z coordinate belongs to [H_s − Δ_h, H_s + Δ_h] it is kept, otherwise it is deleted from the set SPl_t^k, where Δ_h is the allowed variation of the rail point-cloud elevation.
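The height filter of step 6 is a one-line mask in numpy. A minimal sketch; the H_s and Δ_h values in the test are arbitrary assumptions:

```python
import numpy as np

def height_filter(points, H_s, delta_h):
    """Step 6: keep only points whose Z coordinate (column 2) lies in
    [H_s - delta_h, H_s + delta_h]; everything else is discarded."""
    pts = np.asarray(points, dtype=float)
    return pts[np.abs(pts[:, 2] - H_s) <= delta_h]
```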
The specific steps of step 7 are as follows:
step 7.1, computing the mean coordinate of each sub point set SPl_t^k of the left rail point cloud {SPl_t^1, …, SPl_t^K}, where k indicates the position of the sub point set, to obtain the equalized left rail point-cloud coordinate set {pSPl_t^1, …, pSPl_t^k, …, pSPl_t^K}; the elevation information of the point cloud inside each SPl_t^k, i.e. the mean Z coordinate Hl_t^k, gives the elevation set of the left rail point cloud {Hl_t^1, …, Hl_t^k, …, Hl_t^K}; obtaining the equalized right rail point-cloud coordinate set {pSPr_t^1, …, pSPr_t^k, …, pSPr_t^K} and the elevation set of the right rail point cloud {Hr_t^1, …, Hr_t^k, …, Hr_t^K} in the same way;
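Step 7.1's equalization reduces each sub point set to its mean and takes the Z components as elevations. A minimal sketch:

```python
import numpy as np

def equalize(sub_sets):
    """Step 7.1: replace each sub point set SPl_t^k by its mean coordinate
    pSPl_t^k; the Z components of the means form the elevation set Hl_t^k."""
    means = np.array([np.mean(np.asarray(s, float), axis=0) for s in sub_sets])
    return means, means[:, 2]   # (K x 3 coordinates, K elevations)
```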
step 7.2, judging whether the track has a triangular-pit (twist) irregularity, and if so recording the irregularity parameter and position;
step 7.3, judging whether the track has a height irregularity, and if so recording the irregularity parameter and position;
step 7.4, judging whether the track has a horizontal irregularity, and if so recording the irregularity parameter and position.
The specific process of step 7.2 is as follows:
step 7.2.1, computing the point-cloud elevation difference at parallel positions of the left and right rails as ΔH_t^k = Hl_t^k − Hr_t^k, obtaining the set {ΔH_t^1, …, ΔH_t^k, …, ΔH_t^K};
step 7.2.2, if there exist positions k1 and k2 such that ΔH_t^{k1} ≥ ε_h and ΔH_t^{k2} ≤ −ε_h (or the condition with the signs exchanged holds), then a triangular-pit irregularity exists at time t of the inspection vehicle's motion, i.e. the irregularity type C_t equals 1, where ε_h is the allowed accuracy of the triangular-pit irregularity; the triangular-pit irregularity parameter PS_t and the position P_t^ns of the triangular-pit irregularity are recorded.
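The triangular-pit test of step 7.2 can be sketched as follows. The precise condition is garbled in the source, so this assumes the usual reading of a twist defect: the left-right elevation difference exceeds +ε_h at one sampled position and falls below −ε_h at another.

```python
import numpy as np

def has_triangular_pit(Hl, Hr, eps_h):
    """Step 7.2 (reconstructed reading): the elevation difference Hl - Hr
    changes sign along the rail by more than the tolerance eps_h."""
    dH = np.asarray(Hl, float) - np.asarray(Hr, float)
    return bool(dH.max() >= eps_h and dH.min() <= -eps_h)
```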
The specific process of step 7.3 is as follows:
step 7.3.1, computing the absolute difference between each element of the left and right rail elevation sets and the standard elevation H_h, recorded as the left and right elevation-difference sets {ΔHl_t^1, …, ΔHl_t^K} and {ΔHr_t^1, …, ΔHr_t^K}, where ΔHl_t^k = |Hl_t^k − H_h| and ΔHr_t^k = |Hr_t^k − H_h|;
step 7.3.2, if some ΔHl_t^k satisfies the condition ΔHl_t^k ≥ δ_h, then the left rail has a height irregularity at time t of the inspection vehicle's motion, i.e. the irregularity type C_t equals 2; the irregularity parameter PS_t = ΔHl_t^k and the irregularity position P_t^ns = pSPl_t^k are recorded; if the condition ΔHr_t^k ≥ δ_h is satisfied, then the right rail has a height irregularity at time t, i.e. C_t equals 2; the height irregularity parameter PS_t = ΔHr_t^k and the height irregularity position P_t^ns = pSPr_t^k are recorded, where δ_h is the allowed accuracy of the height irregularity.
The specific process of step 7.4 is as follows:
step 7.4.1, computing the absolute elevation difference of the left and right rails as ΔH_t^k = |Hl_t^k − Hr_t^k|, obtaining the set of absolute elevation differences {ΔH_t^1, …, ΔH_t^k, …, ΔH_t^K};
step 7.4.2, if some ΔH_t^k satisfies the condition ΔH_t^k ≥ η_h, then the track has a horizontal irregularity at time t of the inspection vehicle's motion, i.e. the irregularity type C_t equals 3; the horizontal irregularity parameter PS_t = ΔH_t^k and the horizontal irregularity position P_t^ns are recorded, where η_h is the allowed horizontal irregularity accuracy.
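Steps 7.3 and 7.4 both reduce to threshold checks on the elevation sets. A minimal combined sketch (the standard elevation and both tolerances in the test are assumed values):

```python
import numpy as np

def check_irregularities(Hl, Hr, H_h, delta_h, eta_h):
    """Step 7.3: indices where either rail deviates from the standard
    elevation H_h by at least delta_h (height irregularity, C_t = 2).
    Step 7.4: indices where |Hl - Hr| >= eta_h (horizontal irregularity,
    C_t = 3)."""
    Hl, Hr = np.asarray(Hl, float), np.asarray(Hr, float)
    height_idx = np.where((np.abs(Hl - H_h) >= delta_h) |
                          (np.abs(Hr - H_h) >= delta_h))[0]
    level_idx = np.where(np.abs(Hl - Hr) >= eta_h)[0]
    return height_idx, level_idx
```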
The specific steps of step 8 are as follows:
step 8.1, computing the mean coordinate (x_t, y_t, z_t) of all points in the point-cloud coordinate set of the irregularity position P_t^ns detected by the inspection vehicle at time t; according to the conversion relation from the space rectangular coordinate system to the geodetic coordinate system, converting (x_t, y_t, z_t) into relative coordinates (ΔP_t^ns, ΔB_t^ns) in the geodetic coordinate system;
step 8.2, taking the inspection-vehicle position (P_t, B_t) acquired by the data positioning system at time t as the absolute GPS position of the track, and summing the absolute GPS position with the relative coordinates of the irregularity position to obtain the absolute GPS position of the irregular track, where P_t^ns = P_t + ΔP_t^ns and B_t^ns = B_t + ΔB_t^ns.
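Step 8's conversion from metre offsets to geodetic offsets can be sketched with a local-tangent-plane approximation. This is an assumption for illustration only; the patent uses its own rectangular-to-geodetic conversion, whose details are not recoverable from the source.

```python
import math

R_A = 6378137.0  # WGS84 semi-major axis, metres

def relative_geodetic(d_east, d_north, lat_deg):
    """Assumed step 8.1 stand-in: small east/north offsets in metres near
    latitude lat_deg mapped to longitude/latitude offsets in degrees."""
    d_B = math.degrees(d_north / R_A)
    d_P = math.degrees(d_east / (R_A * math.cos(math.radians(lat_deg))))
    return d_P, d_B

def absolute_gps(P_t, B_t, d_east, d_north):
    """Step 8.2: P_t^ns = P_t + dP, B_t^ns = B_t + dB."""
    d_P, d_B = relative_geodetic(d_east, d_north, B_t)
    return P_t + d_P, B_t + d_B
```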
The specific steps in step 9 are as follows:
step 9.1, choosing the YOLOv7 target-detection network structure for the track defect detection network, and modifying the configuration file yolov7.yaml as follows: changing the number of classes to the number of track defect classes C_ill, i.e. setting the class-number variable nc to C_ill, whose value is 4;
step 9.2, collecting images of the C_ill classes of track defects, the total number of collected images being N_s, with the number of images collected per class recorded accordingly;
step 9.3, constructing the training samples as follows: uploading the track defect image data to a data annotation website and, with an annotation tool, labelling the defect category and the ground-truth detection box of every track defect image on the website; after all images are labelled, exporting the labelled data set with the website's export function to the file storage path data/train, creating the data-set configuration file data.yaml under the path data/train, and forming the track defect training data set railData from the labelled data set and the configuration file;
step 9.4, setting the network training parameters in the network training file train.py as follows: total number of training iterations epoch = 100, batch size batch = 16, initial learning rate lr = 0.01, network input image size W × H; setting the optimizer used in training to Adam, the initial value of the training-step variable step to 1, the training-sample data path to data/train, and the training device to device = 0, i.e. training on the GPU; the output model name is yolov7.pt;
step 9.5, defining the network loss function as
Loss = −(1/N) Σ_{n=1}^{N} [ y_n · log δ(x_n) + (1 − y_n) · log(1 − δ(x_n)) ],
where x_n denotes the score of predicting the n-th sample as a positive example, y_n is the label of the n-th sample, and δ denotes the sigmoid function;
step 9.6, running the file train.py in the conda environment to perform network training; when the training-step variable step ≥ 200 or the network loss function Loss is less than 10⁻¹, ending the training and outputting the track defect detection model Model_ill.
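The loss of step 9.5 reads as the standard binary cross-entropy over sigmoid-activated scores (the formula itself is garbled in the source; this reconstruction matches the stated roles of x_n, y_n and δ). A minimal numpy sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce_loss(scores, labels):
    """Step 9.5 read as binary cross-entropy: x_n is the raw score for
    predicting sample n as positive, y_n its 0/1 label, delta the sigmoid."""
    x = np.asarray(scores, float)
    y = np.asarray(labels, float)
    p = sigmoid(x)
    return float(-np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))
```

With a zero score the predicted probability is 0.5, so the loss for a positive sample equals log 2, which makes the formula easy to sanity-check.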
The specific process of step 10 is as follows:
step 10.1, taking the detection image I_t^m and the defect detection model Model_ill as input, running detect.py in the python environment, and outputting the detected defect category C_t^ill and the rectangular box (x_t^ill, y_t^ill, w_t^ill, h_t^ill) in which the defect lies, where C_t^ill is the category of the defect, (x_t^ill, y_t^ill) are the centre-point coordinates of the defect position p_t^ill in I_t^m, and w_t^ill and h_t^ill are respectively the width and height of the box;
step 10.2, according to formula (6), using the homography matrix H_t^ml to project the track defect position p_t^ill on the image I_t^m onto the left-viewpoint image I_t^l, obtaining the corresponding defect position p'_t^ill on the left-viewpoint image:
p'_t^ill = H_t^ml · p_t^ill (6);
step 10.3, taking the defect position p_t^ill in the middle-viewpoint image, the defect position p'_t^ill in the left-viewpoint image and the 3D-to-2D projection matrices M_t^m, M_t^l as input, calling the open-source opencv-python library function cv2.triangulatePoints(), and outputting the 3D coordinates P_t^ill of the track defect.
The invention has the beneficial effects that:
the intelligent detection and positioning method for track defects and irregularity of the invention performs railway image defect detection, track three-dimensional coordinate reconstruction and moving measurement of the geometric parameters of track irregularity in a measure-while-moving fashion, realizing an automatic, intelligent track inspection task; compared with manual inspection it is more reliable, efficient and safe, and it achieves accurate and rapid detection and analysis of track defects and irregularity.
Drawings
FIG. 1 is a schematic flow chart of the intelligent detection and positioning method for track irregularity and diseases of the present invention;
FIG. 2 is a schematic flow chart of a line extraction method based on Hough transform adopted by the present invention;
FIG. 3 is a schematic view of a process for detecting track irregularity by using a track inspection vehicle according to the present invention;
FIG. 4 is a schematic diagram of the principle of detecting the irregularity of the triangular pits of the track by the track inspection vehicle adopted by the invention;
FIG. 5 is a schematic diagram of the principle of detecting the unevenness of the track by the track inspection vehicle adopted by the invention;
FIG. 6 is a schematic diagram of the principle of detecting horizontal irregularity of a track by a track inspection vehicle adopted by the invention;
FIG. 7 is a schematic diagram of a process for solving track irregularity damage and track surface damage actual GPS positions adopted by the present invention;
fig. 8 is a track defect detection image in the embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the drawings and the detailed description.
Deep learning has recently been applied across many industries and can likewise be applied to rail defect detection: using the advanced feature-extraction mechanism and category-learning capability of a deep network, a detection model is trained with rail defects as specific targets, so that rail defects are quickly detected and located while the track inspection vehicle is running.
In addition, the absolute position of the track where the track is uneven and damaged is calculated by using the absolute position of the GPS of the track inspection vehicle and the relative position of the point cloud coordinates, so that accurate positioning information can be provided for railway maintenance and overhaul departments.
Based on the above, the intelligent detection and positioning method for the track irregularity and the diseases comprises the following steps:
step 1, building a data positioning system and a data acquisition system on a T-shaped track inspection vehicle, the equipment used comprising: a GPS, a trinocular camera, an industrial-grade notebook computer and a mobile power supply; using the data positioning system to acquire the position of the inspection vehicle in the WGS84 coordinate system as {(P_t, B_t)}, where P_t and B_t are respectively the geodetic longitude and latitude of the inspection vehicle at time t, 1 ≤ t ≤ T, and T is the total acquisition time and also the number of acquired samples;
step 2, on the moving track inspection vehicle, using a trinocular camera to collect the track image sequence at time t from three viewpoints, {I_t^l, I_t^m, I_t^r}, i.e. the left-, middle- and right-viewpoint images of the track at time t, and storing them respectively under the file storage paths data/ill/left, data/ill/middle and data/ill/right of a designated drive letter (e.g., the D drive) of the industrial-grade notebook;
step 3, taking the track image sequence as input and generating the 3D point cloud C_t of the track scene at time t with a three-dimensional point-cloud generation method (OpenMVG + PMVS); at the same time obtaining the projection relation matrices M_t^l, M_t^m and M_t^r from C_t to each viewpoint image, as well as the homography matrices H_t^ml and H_t^mr between the middle-viewpoint image and the left and right viewpoint images;
Step 4, taking the middle viewpoint track image of the t moment
Figure BDA0004030012720000119
As input, two side lines of the left track are respectively extracted by means of a straight line extraction method>
Figure BDA00040300127200001110
And two side lines of the right track>
Figure BDA00040300127200001111
Extraction>
Figure BDA00040300127200001112
And &>
Figure BDA00040300127200001113
Is greater than or equal to>
Figure BDA00040300127200001114
Is at>
Figure BDA00040300127200001115
Up-evenly sampling K points to form a point set>
Figure BDA00040300127200001116
Multiply by a homography matrix between the middle-view and left-view images->
Figure BDA00040300127200001117
Get the left viewpoint image>
Figure BDA00040300127200001118
Set of projection points on>
Figure BDA00040300127200001119
Handle point set
Figure BDA00040300127200001120
Projection relation matrix->
Figure BDA00040300127200001121
And &>
Figure BDA00040300127200001122
Calling opencv-python library function cv2.Triangulatepoints () of an open source as input, and outputting the result of ^ based on>
Figure BDA00040300127200001123
Corresponding set of 3D coordinates +>
Figure BDA00040300127200001124
Extraction of Lr t 1 And Lr t 2 In the midline>
Figure BDA00040300127200001126
In or on>
Figure BDA00040300127200001127
K points are uniformly sampled to form a point set { pr t m,1 ,…,pr t m,k ,…,pr t m,K Using the same method, a set of points { pr }is obtained t m,1 ,…,pr t m,k ,…,pr t m,K } corresponding 3D coordinate ^ or>
Figure BDA00040300127200001128
The method comprises the following specific steps:
Step 4.1, with the track image I_t^m as input, the Python opencv-python library function cv2.imread() is called (reading in grayscale mode) to convert I_t^m into the grayscale image G_t^m;
Step 4.2, gray level map
Figure BDA0004030012720000121
As an input, call Python programming language opencv-Python library function cv2.Gaussian blur () pair>
Figure BDA0004030012720000122
Gaussian filtering is carried out to obtain a filtered image->
Figure BDA0004030012720000123
Step 4.3, with filtered image
Figure BDA0004030012720000124
For input, call Python programming language opencv-Python library function cv2.Canny () on->
Figure BDA0004030012720000125
Performing edge detection processing to obtain an edge image->
Figure BDA0004030012720000126
Step 4.4, with edge image
Figure BDA0004030012720000127
To enter, call Python programming language opencv-Python library function cv2.HoughLinesP () vs>
Figure BDA0004030012720000128
Extracting the track line to obtain the left and right side lines of the left track>
Figure BDA0004030012720000129
And the left side of the right trackRight two side lines Lr t 1 、Lr t 2 The representation form of two end points of the 4 lines is respectively
Figure BDA00040300127200001211
Figure BDA00040300127200001212
Wherein it is present>
Figure BDA00040300127200001213
Represents an endpoint closer to the camera>
Figure BDA00040300127200001214
Representing endpoints that are farther from the camera;
Step 4.5, calculating the center line Ll_t^m of the side lines of the left rail: compute the midpoint P_1 of the two near end points of Ll_t^1 and Ll_t^2, and the midpoint P_2 of their two far end points. With the points P_1 and P_2 as input, the straight line extraction algorithm getLineEquation() is called to obtain the parameters (a, b, c) of the general equation of the center line Ll_t^m of the two side lines of the left rail, i.e. the center line Ll_t^m is expressed as ax + by + c = 0. The center line Lr_t^m of the two side lines of the right rail is obtained by the same method. The flow of the adopted straight line extraction algorithm is shown in figure 2;
With the points P_1 and P_2 as input, the straight line extraction algorithm getLineEquation() returns the general-equation parameters (a, b, c) of the line through P_1 and P_2.
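The pseudo code of getLineEquation() is given in the source only as an image; a minimal Python sketch of such a two-point routine (the function name comes from the text, the implementation details are an assumption) could be:

```python
# Hypothetical sketch of the getLineEquation() routine described in step 4.5:
# given two points P1 = (x1, y1) and P2 = (x2, y2), return the coefficients
# (a, b, c) of the general line equation a*x + b*y + c = 0 through them.
def getLineEquation(P1, P2):
    x1, y1 = P1
    x2, y2 = P2
    a = y2 - y1            # normal direction is (dy, -dx)
    b = x1 - x2
    c = x2 * y1 - x1 * y2  # chosen so that a*x1 + b*y1 + c == 0
    return a, b, c
```

Both input points satisfy the returned equation by construction, which is all the center-line step of 4.5 requires.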
Step 4.6, K points are uniformly sampled on Ll_t^m to form the point set {pl_t^m,1, …, pl_t^m,k, …, pl_t^m,K};
Step 4.7, set points
Figure BDA0004030012720000134
Inner points are each multiplied by a homography matrix>
Figure BDA0004030012720000135
Get its picture in left viewpoint->
Figure BDA0004030012720000136
The projection point set in (4)>
Figure BDA0004030012720000137
/>
Step 4.8, the point set on the middle viewpoint image
Figure BDA0004030012720000138
Projection point set on left viewpoint image>
Figure BDA0004030012720000139
Projection relationship matrix>
Figure BDA00040300127200001310
Calling opencv-python library function cv2. Triangulatinepoints () of an open source as input, and outputting a 3D coordinate set ^ based on a left track>
Figure BDA00040300127200001311
Step 4.9 at
Figure BDA00040300127200001312
Uniformly sampling K points to form a point set { pr t m,1 ,…,pr t m,k ,…,pr t m,K And (6) repeating the operation of the steps 4.7-4.8, and outputting a point set { pr } t m,1 ,…,pr t m,k ,…,pr t m,K } corresponding set of 3D coordinates +>
Figure BDA00040300127200001313
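Steps 4.6-4.9 amount to projecting sampled center-line points into two views and triangulating them back to 3D. The patent does this with cv2.triangulatePoints(); the numpy-only sketch below performs the same linear (DLT) triangulation on toy camera matrices, which are illustrative values and not the patent's calibration:

```python
import numpy as np

# Sketch of steps 4.6-4.9: project a stand-in center-line point into the
# middle and left views, then triangulate it back to 3D. cv2.triangulatePoints
# performs this same DLT computation; toy matrices replace real calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
M_m = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # middle view
M_l = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])  # left view

def project(M, X):
    """Project a 3D point X (length 3) with a 3x4 camera matrix M."""
    x = M @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(M1, M2, p1, p2):
    """Linear (DLT) triangulation of one point seen in two views."""
    A = np.stack([p1[0] * M1[2] - M1[0],
                  p1[1] * M1[2] - M1[1],
                  p2[0] * M2[2] - M2[0],
                  p2[1] * M2[2] - M2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                # null vector of A, up to scale
    return X[:3] / X[3]

X_true = np.array([0.1, 0.05, 2.0])  # stand-in for one sampled rail point
p_m = project(M_m, X_true)           # point on the middle viewpoint image
p_l = project(M_l, X_true)           # its projection on the left viewpoint image
X_rec = triangulate(M_m, M_l, p_m, p_l)
```

With exact synthetic projections the triangulated point recovers X_true, which is the property steps 4.8 and 10.3 rely on.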
Step 5, marking the track width variable as W s Extracting to assemble
Figure BDA00040300127200001314
In the center of each point with W s Is a cube with a side length and belongs to a scene 3D point cloud>
Figure BDA00040300127200001315
All points within constitute a point set>
Figure BDA00040300127200001316
Wherein +>
Figure BDA00040300127200001317
Is in or on>
Figure BDA00040300127200001318
A set of points extracted for the center point of the cube>
Figure BDA00040300127200001319
Is composed of
Figure BDA00040300127200001320
N of (1) k Dot, N k Is->
Figure BDA00040300127200001321
The number of the inner points is greater or less>
Figure BDA00040300127200001322
Are respectively based on>
Figure BDA00040300127200001323
X, Y, Z coordinate of (4), (4)>
Figure BDA00040300127200001324
Is the initial point cloud of the left orbit; in the same way, an initial point cloud @ of the right track is obtained>
Figure BDA0004030012720000141
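The cube crop of step 5 can be sketched with a numpy boolean mask (array names are illustrative, not from the patent):

```python
import numpy as np

# Sketch of step 5: keep the scene-cloud points that fall inside an
# axis-aligned cube of side `side` centered on a sampled rail point.
def crop_cube(cloud, center, side):
    half = side / 2.0
    mask = np.all(np.abs(cloud - center) <= half, axis=1)
    return cloud[mask]

cloud = np.array([[0.0, 0.0, 0.0],   # inside the cube
                  [0.4, 0.1, 0.2],   # inside the cube
                  [2.0, 0.0, 0.0]])  # outside the cube
subset = crop_cube(cloud, center=np.array([0.0, 0.0, 0.0]), side=1.0)
```

Running the crop once per sampled point Pl_t^m,k and taking the union of the results yields the initial rail cloud described in the text.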
Step 6, recording the variable of the track elevation as H s And performing height filtering on the initial point cloud of the left track:
Figure BDA0004030012720000142
if its Z coordinate belongs to [ H ] s -△ h ,H s +△ h ]Then it is reserved, otherwise it is selected from the set->
Figure BDA0004030012720000143
Middle deletion, where h Carrying out height filtering on the right track point cloud by the same method for the allowable change value of the track point cloud elevation; the updated left and right track point clouds obtained after filtering are respectively ^ and ^>
Figure BDA0004030012720000144
And &>
Figure BDA0004030012720000145
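The height filter of step 6 is a one-line interval test on the Z column; a sketch (H_s and dh mirror the text's H_s and Δ_h):

```python
import numpy as np

# Sketch of step 6: keep only points whose Z coordinate lies within
# [H_s - dh, H_s + dh], discarding the rest of the initial rail cloud.
def height_filter(cloud, H_s, dh):
    z = cloud[:, 2]
    return cloud[(z >= H_s - dh) & (z <= H_s + dh)]

cloud = np.array([[0.0, 0.0, 0.70],
                  [0.0, 1.0, 0.72],
                  [0.0, 2.0, 0.90]])   # spurious high point
kept = height_filter(cloud, H_s=0.71, dh=0.02)
```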
Step 7, carrying out point cloud on the left and right tracks at the moment t
Figure BDA0004030012720000146
And &>
Figure BDA0004030012720000147
Analyzing data, judging whether the track has irregularity, and if the track has irregularity, judging the irregularity type C t Irregularity parameter PS t And the irregularity position P t ns Recording is performed as shown in fig. 3; the method comprises the following specific steps:
Step 7.1, the mean coordinate of each sub-point set Sl_t^k of the left rail point cloud PC_t^l is calculated, where k denotes the position of the sub-point set, giving the averaged left rail point cloud coordinate set {pSPl_t^1, …, pSPl_t^k, …, pSPl_t^K}, where pSPl_t^k = (xl_t^k, yl_t^k, zl_t^k). The elevation information of the point clouds in the set is Hl_t^k = zl_t^k, so the elevation set of the left rail point cloud is expressed as {Hl_t^1, …, Hl_t^k, …, Hl_t^K}. The same method is used to obtain the averaged right rail point cloud coordinate set {pSPr_t^1, …, pSPr_t^K} and the elevation set of the right rail point cloud {Hr_t^1, …, Hr_t^k, …, Hr_t^K};
Step 7.2, judging whether the track has a triangular pit irregularity, and if so, recording the irregularity parameter and position; the specific process is as follows:
Step 7.2.1, the point cloud elevation difference at each pair of parallel positions of the left and right rails is calculated by the formula ΔH_t^k = |Hl_t^k − Hr_t^k|, giving the set {ΔH_t^1, …, ΔH_t^k, …, ΔH_t^K};
Step 7.2.2, as shown in FIG. 4, if
Figure BDA0004030012720000151
Figure BDA0004030012720000152
Satisfies the condition>
Figure BDA0004030012720000153
Or is conditional>
Figure BDA0004030012720000154
Then it is indicated that there is a track triangular pit irregularity at time t of the railcar motion, i.e., irregularity type C t Is equal to 1, wherein ∈ h Recording the irregularity parameter of the triangular pit with the tolerance deviation precision of the irregularity of the triangular pit of 5mm>
Figure BDA0004030012720000155
Uneven disease position of triangular pit
Figure BDA0004030012720000156
Step 7.3, judging whether the track has a height irregularity, and if so, recording the irregularity parameter and position; the specific process is as follows:
Step 7.3.1, the absolute differences between the elements of the left and right rail elevation sets and the standard elevation H_h are calculated respectively, and recorded as the left and right elevation difference sets {ΔHl_t^1, …, ΔHl_t^K} and {ΔHr_t^1, …, ΔHr_t^K}, where ΔHl_t^k = |Hl_t^k − H_h| and ΔHr_t^k = |Hr_t^k − H_h|;
Step 7.3.2, as shown in fig. 5, if the condition ΔHl_t^k ≥ δ_h is satisfied, it indicates that at time t of the rail inspection vehicle motion the left rail has a height irregularity, i.e. the irregularity type C_t is equal to 2, and the height irregularity parameter PS_t = ΔHl_t^k and the height irregularity position P_t^ns = pSPl_t^k are recorded; if the condition ΔHr_t^k ≥ δ_h is satisfied, it indicates that at time t the right rail has a height irregularity, i.e. C_t is equal to 2, and the height irregularity parameter PS_t = ΔHr_t^k and the height irregularity position P_t^ns = pSPr_t^k are recorded, where δ_h, the tolerance precision of the height irregularity, is 6mm.
Step 7.4, judging whether the track has a horizontal irregularity, and if so, recording the irregularity parameter and position; the specific process is as follows:
Step 7.4.1, the absolute elevation difference of the left and right rails is calculated by the formula ΔH_t^k = |Hl_t^k − Hr_t^k|, giving the set of absolute elevation differences of the track {ΔH_t^1, …, ΔH_t^K};
Step 7.4.2, as shown in fig. 6, if the condition ΔH_t^k ≥ η_h is satisfied, it indicates that at time t of the rail inspection vehicle motion the track has a horizontal irregularity, i.e. the irregularity type C_t is equal to 3, and the horizontal irregularity parameter PS_t = ΔH_t^k and the horizontal irregularity position P_t^ns = pSPl_t^k are recorded, where η_h, the tolerance precision of the horizontal irregularity, is 6mm.
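The threshold tests of steps 7.2-7.4 reduce to elementwise comparisons of the two elevation arrays. A compact sketch, checking the three conditions in the order the text gives them (function name, return convention and metre-valued thresholds eps_h, delta_h, eta_h are assumptions mirroring ε_h, δ_h, η_h):

```python
import numpy as np

# Sketch of the irregularity checks of steps 7.2-7.4 on the averaged
# elevation sets Hl and Hr. Thresholds follow the text: 5 mm for triangular
# pits, 6 mm for height and horizontal irregularities. Returns the first
# detected (C_t, PS_t, index k), or (0, 0.0, -1) if the track is regular.
def check_irregularity(Hl, Hr, H_std,
                       eps_h=0.005, delta_h=0.006, eta_h=0.006):
    Hl, Hr = np.asarray(Hl, float), np.asarray(Hr, float)
    dH = np.abs(Hl - Hr)                       # left/right elevation difference
    if np.any(dH >= eps_h):                    # step 7.2: triangular pit, C_t = 1
        k = int(np.argmax(dH >= eps_h))
        return 1, float(dH[k]), k
    dHl, dHr = np.abs(Hl - H_std), np.abs(Hr - H_std)
    if np.any(dHl >= delta_h) or np.any(dHr >= delta_h):
        d = np.maximum(dHl, dHr)               # step 7.3: height, C_t = 2
        k = int(np.argmax(d))
        return 2, float(d[k]), k
    if np.any(dH >= eta_h):                    # step 7.4: horizontal, C_t = 3
        k = int(np.argmax(dH >= eta_h))
        return 3, float(dH[k]), k
    return 0, 0.0, -1
```

The index k identifies the averaged point pSPl_t^k (or pSPr_t^k) whose coordinates give the irregularity position P_t^ns.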
Step 8, calculating the GPS position (P_t^ns, B_t^ns) corresponding to the irregularity position P_t^ns, as shown in fig. 7. The specific steps are as follows:
Step 8.1, the mean coordinate (x_t, y_t, z_t) of all points in the point cloud coordinate set at the irregularity position P_t^ns detected by the rail inspection vehicle at time t is calculated; according to the conversion relation from the space rectangular coordinate system to the geodetic coordinate system, (x_t, y_t, z_t) is converted into the relative coordinates (wP_t^ns, wB_t^ns) in the geodetic coordinate system;
Step 8.2, the rail inspection vehicle position (P_t, B_t) acquired by the data positioning system at time t is taken as the absolute GPS position of the track, and the relative coordinates of the irregularity position are added to it to obtain the absolute GPS position of the track irregularity (P_t^ns, B_t^ns), where P_t^ns = P_t + wP_t^ns and B_t^ns = B_t + wB_t^ns.
Step 9, a track disease data set is made from the acquired track image data, and target detectors for rail spalling, fish-scale cracks, delamination, corrugation and other diseases are trained based on the YOLOv7 network structure to obtain the track disease detection model Model_ill. The specific steps are as follows:
Step 9.1, the target detection YOLOv7 network structure is selected as the track disease detection network structure, and the configuration file yolov7.yaml is modified as follows: the number of classes is changed to the number of track disease classes C_ill, i.e. the class-number variable nc is set to C_ill, whose value is 4;
Wherein the YOLOv7 network structure for target detection is the general-purpose YOLOv7 structure proposed by Chien-Yao Wang, Alexey Bochkovskiy et al. in the 2022 article "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors".
Step 9.2, images of the C_ill classes of track diseases are acquired; the total number of acquired images is N_s, and the number of images acquired for class c is N_c (c = 1, …, C_ill).
Step 9.3, the structure of the training sample is as follows: uploading the track disease image data to a data annotation website, and performing disease type and disease detection true value frame annotation on the track disease image on the website by using an annotation tool; exporting the marked data set to a file storage path data/train by using a data set exporting function of a website after all the images are marked, creating a data set configuration file data.yaml under the path data/and writing the following contents into the file:
Figure BDA0004030012720000172
forming a track disease training data set railData by the marked data set and the configuration file;
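The data.yaml contents lost in the source image presumably follow the usual YOLOv7 dataset format; a plausible reconstruction (paths and class names are assumptions, keyed to nc = 4 and the disease classes named in step 9) is:

```yaml
# Hypothetical data.yaml for the railData set; values are illustrative.
train: data/train
val: data/val
nc: 4
names: ['spalling', 'fish-scale crack', 'delamination', 'corrugation']
```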
Step 9.4, the network training parameters are set in the network training file train.py: the total number of training iterations epoch = 100, the batch size batch = 16, the initial learning rate lr = 0.01, the size W × H of the network input image, the optimizer used for training set to Adam, the initial value of the training-step variable step = 1, the training sample data path data/train, the training device set to device = 0 so that the GPU is used for training, and the output model name yolov7.pt.
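The specific command line is lost in the source image; a plausible invocation matching these settings, assuming the flags of the public YOLOv7 repository's train.py (a sketch, not necessarily the exact line used by the authors):

```shell
# Hypothetical command; flag names follow the public YOLOv7 repository.
python train.py --epochs 100 --batch-size 16 \
    --data data/data.yaml --cfg cfg/training/yolov7.yaml \
    --adam --device 0 --name yolov7
```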
Step 9.5, the network loss function is defined as:

Loss = −(1/N) · Σ_{n=1}^{N} [ y_n · log δ(x_n) + (1 − y_n) · log(1 − δ(x_n)) ]

where x_n denotes the score of predicting the n-th sample as a positive example, y_n denotes the label of the n-th sample, and δ denotes the sigmoid function;
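Reading the reconstructed formula as standard binary cross-entropy over sigmoid scores, a short numpy sketch (illustrative only, not the patent's training code):

```python
import numpy as np

# Binary cross-entropy with logits, matching the loss of step 9.5:
# Loss = -(1/N) * sum_n [ y_n*log(sigmoid(x_n)) + (1-y_n)*log(1-sigmoid(x_n)) ]
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce_loss(x, y):
    p = sigmoid(np.asarray(x, dtype=float))   # predicted positive probability
    y = np.asarray(y, dtype=float)            # ground-truth labels in {0, 1}
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
```

At zero scores every prediction is 0.5, so the loss equals ln 2 regardless of the labels; confident correct scores drive it toward 0, which is the behavior the step-9.6 stopping criterion (Loss < 10^-1) relies on.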
Step 9.6, the file train.py is run in the conda environment to perform network training; when the training-step variable step >= 200 or the network loss function Loss is less than 10^-1, the network training ends and the track disease detection model Model_ill is output.
Step 10, with the track image I_t^m at time t and the disease detection model Model_ill as input, the detection program detect.py under the file path Project/yolov7 is called in the python environment, outputting the position b_t^ill and class c_t^ill of the disease on I_t^m; the disease center p_t^ill,m is multiplied by the homography matrix H_t^ml to obtain its projection point p_t^ill,l on the left viewpoint image; with p_t^ill,m and p_t^ill,l and the 3D-to-2D projection matrices M_t^m and M_t^l as input, the open-source opencv-python library function cv2.triangulatePoints() is called to output the 3D coordinate P_t^ill corresponding to the disease. The specific process is as follows:
Step 10.1, with the detection image I_t^m and the disease detection model Model_ill as input, detect.py is run in the python environment, outputting the detected disease class c_t^ill and the rectangular box b_t^ill = (x_t^ill, y_t^ill, w_t^ill, h_t^ill) in which the disease lies, where (x_t^ill, y_t^ill) are the coordinates of the center point p_t^ill,m of the disease position on I_t^m, and w_t^ill and h_t^ill respectively represent the width and height of the rectangular detection box;
Step 10.2, according to formula (6), the homography matrix H_t^ml is used to project the track disease position p_t^ill,m on the image I_t^m onto the left viewpoint image I_t^l, obtaining the corresponding disease position p_t^ill,l on the left viewpoint image:

p_t^ill,l = H_t^ml · p_t^ill,m        (6)
Step 10.3, with the disease position p_t^ill,m in the middle viewpoint image, the disease position p_t^ill,l in the left viewpoint image, and the 3D-to-2D projection matrices M_t^m and M_t^l as input, the open-source opencv-python library function cv2.triangulatePoints() is called to output the 3D coordinate P_t^ill of the track disease.
Step 11, by the same method as in step 8, the GPS position (P_t^ill, B_t^ill) corresponding to the 3D coordinate P_t^ill of the disease detected by the rail inspection vehicle at time t is calculated.
Step 12, the irregularity type C_t, the irregularity parameter PS_t and the irregularity GPS position (P_t^ns, B_t^ns) obtained by the rail inspection vehicle detection at time t are output, together with the disease class c_t^ill and the disease GPS position (P_t^ill, B_t^ill) detected at time t.
The rail defect detection image is shown in fig. 8, in which (a), (b), and (c) respectively represent rail spalling, rail fracture, and rail debris.
The track disease detection network constructed by the method of the invention is tested on the track image set, and the detection results are objectively evaluated with the following indexes:

(1) Precision (P): the proportion of samples correctly detected as true among all samples detected as true.

(2) Recall (R): the proportion of samples correctly detected as true among all samples that are actually true.

(3) F1 Score: the harmonic mean of precision and recall, a comprehensive measure of the model's detection performance:

F1 = 2 × P × R / (P + R)
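These three indexes follow directly from the true/false positive and false negative counts; a small sketch using the standard definitions (not tied to the patent's data, and assuming at least one positive detection and one positive ground truth so the denominators are nonzero):

```python
# Precision, recall and F1 from detection counts, as defined above.
# tp: correct detections, fp: spurious detections, fn: missed ground truths.
def prf1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```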
The performance of the track disease detection model is tested according to the above evaluation indexes, and the results are shown in table 1:

Table 1 (contents provided as an image in the source)
The track irregularity detection performance of the proposed method is also tested; on the basis of the evaluation indexes P, R and F1 Score, a further objective evaluation index is added: Accuracy (Acc), the proportion of correctly detected samples among all samples.
The performance of the proposed track irregularity detection method is judged according to these evaluation indexes, and the results are shown in table 2:

Table 2 (contents provided as an image in the source)
As can be seen from tables 1 and 2, the method of the invention obtains accurate results on both the track disease and the track irregularity detection tasks, which demonstrates its effectiveness. The subjective and objective results show that the method detects and identifies track disease images and track point clouds well, and the detection results reflect its efficiency and accuracy. In addition, compared with traditional track detection methods, the method achieves fast and accurate detection and analysis of track diseases and track irregularity while meeting the required detection precision, and reduces labor cost.

In summary, the intelligent detection and positioning method for track irregularity and diseases addresses the problems of existing monitoring systems on the market, which require manual cooperation, have low levels of intelligence and integration, poor dynamic monitoring and sensing performance, and high labor cost, by designing a mobile measurement method that detects and positions railway track irregularity and diseases while traveling, so as to achieve accurate and rapid detection and analysis. On the one hand, by acquiring and analyzing data from the trinocular camera and GPS sensor mounted on the rail inspection vehicle, the invention reconstructs the track in three dimensions and detects and positions triangular pit, height, horizontal and other irregularities through elevation analysis of the point cloud; on the other hand, a deep learning image target detection model is used to detect and position spalling, fish-scale cracks, delamination, corrugation and other diseases on the track. Compared with traditional detection methods, this stereo-vision-based detection and positioning method is fast and efficient, saves manpower and material resources to a certain extent, and can be conveniently installed on a rail inspection vehicle.

Claims (10)

1. The intelligent detection and positioning method for track irregularity and diseases is characterized by comprising the following steps:

Step 1, a data positioning system and a data acquisition system are built on a T-type rail inspection vehicle, and the position information {(P_t, B_t)} of the rail inspection vehicle in the WGS84 coordinate system is acquired with the data positioning system, where P_t and B_t respectively represent the geodetic longitude and latitude values of the rail inspection vehicle at time t, 1 ≤ t ≤ T, and T is the total acquisition time and also the number of acquired data;

Step 2, a trinocular camera on the moving rail inspection vehicle collects the track image sequence at time t from three viewpoints, where I_t^l, I_t^m, I_t^r are respectively the left, middle and right viewpoint images of the track at time t, and they are stored separately to obtain the track image sequences;
Step 3, taking the track image sequence as input, a three-dimensional point cloud generation method is used to generate the track scene 3D point cloud PC_t at time t; at the same time, the projection matrices M_t^m and M_t^l from PC_t to each viewpoint image are obtained, as well as the homography matrices H_t^ml and H_t^mr between the middle viewpoint image and the left and right viewpoint images;
Step 4, taking the middle viewpoint track image I_t^m at time t as input, the two side lines Ll_t^1 and Ll_t^2 of the left rail and the two side lines Lr_t^1 and Lr_t^2 of the right rail are extracted by a straight line extraction method; the center line Ll_t^m of Ll_t^1 and Ll_t^2 is extracted, K points are uniformly sampled on Ll_t^m to form the point set {pl_t^m,1, …, pl_t^m,K}, which is multiplied by the homography matrix H_t^ml between the middle and left viewpoint images to obtain the projection point set {p̂l_t^m,1, …, p̂l_t^m,K} on the left viewpoint image I_t^l; the point set {pl_t^m,1, …, pl_t^m,K}, its projection point set, and the projection matrices M_t^m and M_t^l are taken as input to the open-source opencv-python library function cv2.triangulatePoints(), which outputs the corresponding set of 3D coordinates {Pl_t^m,1, …, Pl_t^m,K}; the center line Lr_t^m of Lr_t^1 and Lr_t^2 is extracted, K points are uniformly sampled on Lr_t^m to form the point set {pr_t^m,1, …, pr_t^m,K}, and the same method is used to obtain the corresponding 3D coordinates {Pr_t^m,1, …, Pr_t^m,K};
Step 5, denoting the track width variable as W_s, for each point Pl_t^m,k in the set {Pl_t^m,1, …, Pl_t^m,K}, a cube centered on the point with side length W_s is taken, and all points of the scene 3D point cloud PC_t inside it form the point set Sl_t^k = {Pl_t^k,1, …, Pl_t^k,N_k}, where Sl_t^k is the point set extracted with Pl_t^m,k as the cube center, Pl_t^k,n = (Xl_t^k,n, Yl_t^k,n, Zl_t^k,n) is the n-th point of Sl_t^k, N_k is the number of points in Sl_t^k, and Xl_t^k,n, Yl_t^k,n, Zl_t^k,n are respectively its X, Y, Z coordinates; the union of these sets is the initial point cloud PC_t^l0 of the left rail; in the same way, the initial point cloud PC_t^r0 of the right rail is obtained;
Step 6, recording the variable of the track elevation as H s Performing height filtering on the initial point cloud of the left track, and performing height filtering on the point cloud of the right track by the same method; the updated left and right track point clouds obtained after filtering are respectively
Figure FDA00040300127100000216
And
Figure FDA00040300127100000217
Step 7, data analysis is carried out on the left and right rail point clouds PC_t^l and PC_t^r at time t to judge whether the track has irregularity; if an irregularity exists, the irregularity type C_t, the irregularity parameter PS_t and the irregularity position P_t^ns are recorded;
Step 8, the GPS position (P_t^ns, B_t^ns) corresponding to the irregularity position P_t^ns is calculated;
Step 9, a track disease data set is made from the acquired track image data, and target detectors for spalling, fish-scale cracks, delamination and corrugation of the track are trained based on the YOLOv7 network structure to obtain the track disease detection model Model_ill;
Step 10, with the track image I_t^m at time t and the disease detection model Model_ill as input, the detection program detect.py under the file path Project/yolov7 is called in the python environment, outputting the position b_t^ill and class c_t^ill of the disease on I_t^m; the disease center p_t^ill,m is multiplied by the homography matrix H_t^ml to obtain its projection point p_t^ill,l on the left viewpoint image; with p_t^ill,m and p_t^ill,l and the 3D-to-2D projection matrices M_t^m and M_t^l as input, the open-source opencv-python library function cv2.triangulatePoints() is called to output the corresponding 3D coordinate P_t^ill;
Step 11, by the same method as in step 8, the GPS position (P_t^ill, B_t^ill) corresponding to the 3D coordinate P_t^ill of the disease detected by the rail inspection vehicle at time t is calculated;
Step 12, the irregularity type C_t, the irregularity parameter PS_t and the irregularity GPS position (P_t^ns, B_t^ns) obtained by the rail inspection vehicle detection at time t are output, together with the disease class c_t^ill and the disease GPS position (P_t^ill, B_t^ill) detected at time t.
2. The intelligent detection and positioning method for track irregularity and diseases according to claim 1, characterized in that the specific steps of step 4 are as follows:

Step 4.1, with the track image I_t^m as input, the Python opencv-python library function cv2.imread() is called (reading in grayscale mode) to convert I_t^m into the grayscale image G_t^m;

Step 4.2, with the grayscale image G_t^m as input, the opencv-python library function cv2.GaussianBlur() is called to apply Gaussian filtering to G_t^m, obtaining the filtered image F_t^m;

Step 4.3, with the filtered image F_t^m as input, the opencv-python library function cv2.Canny() is called to perform edge detection on F_t^m, obtaining the edge image E_t^m;

Step 4.4, with the edge image E_t^m as input, the opencv-python library function cv2.HoughLinesP() is called to extract the track lines, obtaining the left and right side lines Ll_t^1, Ll_t^2 of the left rail and the left and right side lines Lr_t^1, Lr_t^2 of the right rail; the two end points of each of the 4 side lines are denoted (xn, yn) and (xf, yf), where (xn, yn) represents the end point closer to the camera and (xf, yf) the end point farther from the camera;

Step 4.5, calculating the center line of the side lines of the left rail: compute the midpoint P_1 of the two near end points of Ll_t^1 and Ll_t^2 and the midpoint P_2 of their two far end points; with the points P_1 and P_2 as input, the straight line extraction algorithm getLineEquation() is called to obtain the parameters (a, b, c) of the general equation of the center line Ll_t^m of the two side lines of the left rail, i.e. Ll_t^m is expressed as ax + by + c = 0; the center line Lr_t^m of the two side lines of the right rail is obtained by the same method;

Step 4.6, K points are uniformly sampled on Ll_t^m to form the point set {pl_t^m,1, …, pl_t^m,K};

Step 4.7, each point in the point set {pl_t^m,1, …, pl_t^m,K} is multiplied by the homography matrix H_t^ml to obtain the projection point set {p̂l_t^m,1, …, p̂l_t^m,K} on the left viewpoint image I_t^l;

Step 4.8, with the point set {pl_t^m,1, …, pl_t^m,K} on the middle viewpoint image, the projection point set {p̂l_t^m,1, …, p̂l_t^m,K} on the left viewpoint image, and the projection matrices M_t^m and M_t^l as input, the open-source opencv-python library function cv2.triangulatePoints() is called to output the 3D coordinate set {Pl_t^m,1, …, Pl_t^m,K} of the left rail;

Step 4.9, K points are uniformly sampled on Lr_t^m to form the point set {pr_t^m,1, …, pr_t^m,K}; the operations of steps 4.7-4.8 are repeated to output the corresponding set of 3D coordinates {Pr_t^m,1, …, Pr_t^m,K}.
3. The intelligent detection and positioning method for track irregularity and diseases according to claim 1, characterized in that the height filtering process for the initial point cloud of the left rail in step 6 is as follows: for each point of PC_t^l0, if its Z coordinate belongs to [H_s − Δ_h, H_s + Δ_h] it is kept, otherwise it is deleted from the set PC_t^l0, where Δ_h is the allowed variation of the rail point cloud elevation.
4. The intelligent detection and positioning method for track irregularity and diseases according to claim 1, wherein the specific steps of step 7 are as follows:
Step 7.1: averaging the coordinates within each sub-point set of the left track point cloud Dl_t, where k indexes the sub-point sets, to obtain the averaged point cloud coordinate set Al_t = {(xl_t,k, yl_t,k, zl_t,k) | k = 1, ..., K}; taking the elevation information of the points in Al_t, i.e. their Z coordinates, the elevation set of the left track point cloud is expressed as Zl_t = {zl_t,k | k = 1, ..., K}; the averaged point cloud coordinate set Ar_t of the right track and the elevation set Zr_t of the right track point cloud are obtained by the same method;
Step 7.2: judging whether the track has triangular pit irregularity, and if so, recording the irregularity parameters and position;
Step 7.3: judging whether the track has height irregularity, and if so, recording the irregularity parameters and position;
Step 7.4: judging whether the track has horizontal irregularity, and if so, recording the irregularity parameters and position.
5. The intelligent detection and positioning method for track irregularity and diseases according to claim 4, wherein the specific process of step 7.2 is as follows:
Step 7.2.1: calculating the point cloud elevation difference at each pair of parallel positions of the left and right tracks, Δz_t,k = zl_t,k − zr_t,k, to obtain the set ΔZ_t = {Δz_t,k | k = 1, ..., K};
Step 7.2.2: if two adjacent elements of ΔZ_t satisfy the condition Δz_t,k − Δz_t,k+1 > ε_h or the condition Δz_t,k+1 − Δz_t,k > ε_h, it indicates that at time t of the movement of the rail inspection vehicle there is triangular pit irregularity of the track, i.e. the irregularity type C_t equals 1, where ε_h is the tolerance of triangular pit irregularity; the triangular pit irregularity parameter and the triangular pit irregularity position are recorded.
6. The intelligent detection and positioning method for track irregularity and diseases according to claim 4, wherein the specific process of step 7.3 is as follows:
Step 7.3.1: respectively calculating the absolute differences between the elements of the left and right track elevation sets and the standard elevation H_h, recorded as the left and right elevation difference sets ΔZl_t = {|zl_t,k − H_h| | k = 1, ..., K} and ΔZr_t = {|zr_t,k − H_h| | k = 1, ..., K};
Step 7.3.2: if an element of ΔZl_t satisfies the condition |zl_t,k − H_h| > δ_h, it indicates that at time t of the movement of the rail inspection vehicle there is height irregularity in the left track, i.e. the irregularity type C_t equals 2, and the irregularity parameter and the height irregularity disease position are recorded; if an element of ΔZr_t satisfies the condition |zr_t,k − H_h| > δ_h, it indicates that at time t of the movement of the rail inspection vehicle there is height irregularity in the right track, i.e. the irregularity type C_t equals 2, and the irregularity parameter and the height irregularity disease position are recorded; where δ_h is the tolerance of height irregularity.
7. The intelligent detection and positioning method for track irregularity and diseases according to claim 4, wherein the specific process of step 7.4 is as follows:
Step 7.4.1: calculating the absolute elevation difference between the left and right tracks, Δh_t,k = |zl_t,k − zr_t,k|, to obtain the set of absolute elevation differences of the track ΔH_t = {Δh_t,k | k = 1, ..., K};
Step 7.4.2: if an element of ΔH_t satisfies the condition Δh_t,k > η_h, it indicates that at time t of the movement of the rail inspection vehicle there is horizontal irregularity in the track, i.e. the irregularity type C_t equals 3, and the horizontal irregularity parameter and the horizontal irregularity disease position are recorded; where η_h is the tolerance of horizontal irregularity.
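The three checks of steps 7.2-7.4 can be sketched together as below. This is a minimal illustration under stated assumptions: the patent's exact formulas are given only as images, so the adjacent-position form of the twist condition and all names (classify_irregularity, eps_h, delta_h, eta_h) are this sketch's own, not the patent's:

```python
def classify_irregularity(z_left, z_right, H_h, eps_h, delta_h, eta_h):
    """Return the set of detected irregularity type codes C_t:
    1 = triangular pit (twist), 2 = height, 3 = horizontal (cross-level)."""
    types = set()
    # signed left/right elevation difference at each parallel position
    dz = [zl - zr for zl, zr in zip(z_left, z_right)]
    # step 7.2: twist -- change of cross-level between adjacent positions exceeds eps_h
    if any(abs(a - b) > eps_h for a, b in zip(dz, dz[1:])):
        types.add(1)
    # step 7.3: height -- deviation from the standard elevation H_h exceeds delta_h
    if any(abs(z - H_h) > delta_h for z in z_left + z_right):
        types.add(2)
    # step 7.4: horizontal -- left/right elevation difference exceeds eta_h
    if any(abs(d) > eta_h for d in dz):
        types.add(3)
    return types

# a 3 cm bump on the left rail trips all three checks with 1 cm tolerances
print(classify_irregularity([0.50, 0.53], [0.50, 0.50],
                            H_h=0.5, eps_h=0.01, delta_h=0.01, eta_h=0.01))
```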
8. The intelligent detection and positioning method for track irregularity and diseases according to claim 1, wherein the specific steps of step 8 are as follows:
Step 8.1: calculating the mean coordinate (x_t, y_t, z_t) of all points in the point cloud coordinate set of the irregularity position P_t detected by the rail inspection vehicle at time t, and, according to the conversion relation from the space rectangular coordinate system to the geodetic coordinate system, converting (x_t, y_t, z_t) into relative coordinates in the geodetic coordinate system;
Step 8.2: taking the rail inspection vehicle position (P_t, B_t) acquired by the data positioning system at time t as the GPS absolute position of the track, and summing it with the relative coordinates of the track irregularity position to obtain the GPS absolute position of the track irregularity.
9. The intelligent detection and positioning method for track irregularity and diseases according to claim 1, wherein the specific steps of step 9 are as follows:
Step 9.1: selecting the YOLOv7 object detection network structure as the track disease detection network structure, and modifying the configuration file yolov7.yaml as follows: changing the number of classes to the number of track disease classes C_ill, i.e. setting the number-of-classes variable nc to C_ill, whose value is 4;
Step 9.2: acquiring images of the C_ill classes of track diseases, with N_s images acquired in total and a given number of images acquired per class;
Step 9.3: constructing the training samples as follows: uploading the track disease image data to a data annotation website, and annotating the disease class and the disease detection ground-truth box of each track disease image on the website with an annotation tool; after all images are annotated, exporting the annotated data set to the file storage path data/train with the data set export function of the website, creating the data set configuration file data.yaml under the path data/train, and forming the track disease training data set railData from the annotated data set and the configuration file;
Step 9.4: setting the network training parameters in the network training file train.py: the total number of training iterations epoch = 100, the batch size batch = 16, the initial learning rate lr = 0.01, and the size W × H of the network input image; setting the optimizer used in training to Adam, the initial value of the training-step variable step = 1, the training sample data path to data/train, and the training device to device = 0 so that the GPU is used for training; the output model name is yolo7.pt;
Step 9.5: defining the network loss function as:
Loss = −(1/N) Σ_{n=1}^{N} [ y_n · log δ(x_n) + (1 − y_n) · log(1 − δ(x_n)) ]
where x_n represents the score of predicting the nth sample as a positive example, y_n represents the label of the nth sample, and δ represents the sigmoid function;
Step 9.6: running the file train.py in the conda environment to perform network training; when the training-step variable satisfies step >= 200 or the network loss function Loss is less than 10^−1, the network training ends and the track disease detection model Model_ill is output.
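The loss of step 9.5 is the standard binary cross-entropy over sigmoid-activated scores. A minimal sketch of that formula (function names are illustrative; a real YOLOv7 run would use the framework's built-in loss rather than this reimplementation):

```python
import math

def sigmoid(x):
    """delta(x) = 1 / (1 + e^(-x))"""
    return 1.0 / (1.0 + math.exp(-x))

def bce_with_logits(scores, labels):
    """Loss = -(1/N) * sum_n [ y_n*log(delta(x_n)) + (1-y_n)*log(1-delta(x_n)) ]"""
    total = 0.0
    for x, y in zip(scores, labels):
        p = sigmoid(x)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(scores)

# a confident correct prediction costs little; an uncertain one costs log(2)
print(bce_with_logits([5.0, -5.0], [1, 0]))
print(bce_with_logits([0.0], [1]))
```

The loss shrinks toward 0 as positive samples get large scores and negative samples get small ones.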
10. The intelligent detection and positioning method for track irregularity and diseases according to claim 1, wherein the specific process of step 10 is as follows:
Step 10.1: taking the detection image I_t^m and the disease detection model Model_ill as input, running detect.py under the python environment, and outputting the detected disease type c_t and the rectangular frame (u_t, v_t, w_t, h_t) where the disease is located, where (u_t, v_t) are the center point coordinates of the disease position in I_t^m, and w_t and h_t respectively represent the width and height of the rectangular detection frame;
Step 10.2: according to formula (6), using the homography matrix H_ml to project the track disease position (u_t, v_t) on the image I_t^m to the left viewpoint image I_t^l, obtaining the corresponding disease position (u_t^l, v_t^l) on the left viewpoint image;
Step 10.3: taking the disease position (u_t, v_t) in the middle viewpoint image, the disease position (u_t^l, v_t^l) in the left viewpoint image and the 3D-to-2D projection matrices as input, calling the open-source opencv-python library function cv2.triangulatePoints(), and outputting the 3D coordinate of the track disease.
CN202211726114.5A 2022-12-30 2022-12-30 Intelligent detection and positioning method for track irregularity and diseases Pending CN115908400A (en)

Publications (1)

Publication Number Publication Date
CN115908400A 2023-04-04

Family ID
86476353


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination