CN111079747B - Railway wagon bogie side frame fracture fault image identification method - Google Patents

Railway wagon bogie side frame fracture fault image identification method

Info

Publication number
CN111079747B
CN111079747B CN201911272479.3A CN201911272479A CN111079747B CN 111079747 B CN111079747 B CN 111079747B CN 201911272479 A CN201911272479 A CN 201911272479A CN 111079747 B CN111079747 B CN 111079747B
Authority
CN
China
Prior art keywords
side frame
image
sample
neural network
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911272479.3A
Other languages
Chinese (zh)
Other versions
CN111079747A (en)
Inventor
付德敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Kejia General Mechanical and Electrical Co Ltd
Original Assignee
Harbin Kejia General Mechanical and Electrical Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Kejia General Mechanical and Electrical Co Ltd filed Critical Harbin Kejia General Mechanical and Electrical Co Ltd
Priority to CN201911272479.3A priority Critical patent/CN111079747B/en
Publication of CN111079747A publication Critical patent/CN111079747A/en
Application granted granted Critical
Publication of CN111079747B publication Critical patent/CN111079747B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A railway wagon bogie side frame fracture fault image identification method belongs to the technical field of railway wagon bogie safety. The invention addresses the problem that side frame fracture detection on existing railway wagon bogies is performed manually and therefore has poor reliability. The method collects original grayscale images of the side frames of a running wagon bogie, determines the side frame region of each grayscale image, and preprocesses the side frame region to obtain side frame region sample images; all side frame region sample images form a sample image set, marking information is configured for each side frame region sample image to form a marking file, and a sample data set is formed from the sample image set and the marking file. A convolutional neural network inceptionv2 and a convolutional neural network Faster rcnn are trained to obtain trained inceptionv2 and Faster rcnn models, and the image to be detected is processed with the trained models to obtain the corresponding side frame state prediction result, thereby realizing fault identification. The method is used for identifying breakage of bogie side frames.

Description

Railway wagon bogie side frame fracture fault image identification method
Technical Field
The invention relates to a rail wagon bogie side frame fracture fault image identification method, and belongs to the technical field of rail wagon bogie safety.
Background
A side frame fracture on a railway wagon bogie is a fault that endangers driving safety; if the fracture is not dealt with in time, a safety accident is likely to occur.
At present, detecting side frame breakage requires first obtaining a side frame image and then inspecting it manually to judge whether the side frame is broken. Because vehicle inspectors are prone to fatigue and omissions when examining large numbers of images, missed detections and false detections occur, which reduces both the reliability and the efficiency of this approach and in turn affects the driving safety of the wagon.
With the continued development and maturation of deep learning and artificial intelligence, image processing techniques have become increasingly reliable; it is therefore desirable to provide a technique that identifies side frame faults using deep learning, so as to improve the accuracy and stability of side frame fracture fault identification.
Disclosure of Invention
The invention provides a railway wagon bogie side frame fracture fault image identification method, aimed at the problem that side frame fracture detection on existing railway wagon bogies is performed manually and has poor reliability.
The invention discloses a rail wagon bogie side frame fracture fault image identification method, which comprises the following steps of:
step one: acquiring original gray level images of side frames of a truck bogie in operation, determining a side frame area of each gray level image according to truck bogie axle distance information and position information, preprocessing the side frame area to obtain side frame area sample images, forming a sample image set by all side frame area sample images, configuring marking information for each side frame area sample image to form a marking file, and forming a sample data set based on the sample image set and the marking file;
step two: training a convolutional neural network inceptionv2 and a convolutional neural network Faster rcnn by using the sample data set to obtain a trained inceptionv2 model and a Faster rcnn model;
step three: processing the image to be detected by using the trained inceptionv2 model and the Faster rcnn model to obtain the corresponding side frame state detection category, thereby realizing fault identification.
According to the method for identifying the fault image of the side frame fracture of the railway wagon bogie, the preprocessing of the side frame area comprises the following steps:
data amplification is performed on the sideframe region and contrast is improved.
According to the method for identifying the railway wagon bogie side frame fracture fault image, the forms of data amplification include at least one of the following:
the side frame region with the improved contrast is rotated, translated, scaled and mirrored.
According to the method for identifying the side frame fracture fault image of the railway wagon bogie, the marking information comprises the following steps:
image name, detection category, and top left and bottom right coordinates of the target area in the sideframe area sample image.
According to the image identification method for the side frame breakage fault of the railway wagon bogie, the detection categories comprise at least one of fracture, water flow, chalk and shadow.
According to the method for identifying the side frame fracture fault image of the railway wagon bogie, the convolutional neural network inceptionv2 and the convolutional neural network Faster rcnn adopt COCO model parameters to initialize network parameters.
According to the method for identifying the side frame fracture fault of the railway wagon bogie, the side frame area sample image is input into a convolutional neural network inceptionv2 for feature extraction, and a low-dimensional feature map is obtained.
According to the method for identifying the side frame breakage fault image of the railway wagon bogie, the low-dimensional feature map is input into the RPN layer of the convolutional neural network Faster rcnn to generate a number of candidate frames for the target area; the image category of each candidate frame is classified as foreground or background, and the candidate frame positions are adjusted by regression;
each obtained foreground candidate frame is divided uniformly into 9 x 9 blocks by the ROI Pooling layer, and max pooling is applied to each block; all foreground candidate frames are thereby converted into data of the same size and sent to the fully connected layer, which performs the final target area detection category classification and candidate frame position regression.
According to the method for identifying the side frame fracture fault image of the railway wagon bogie, the loss function L({p_i}, {t_i}) of the side frame area sample images during training of the convolutional neural network inceptionv2 and the convolutional neural network Faster rcnn is defined as follows:

L(\{p_i\}, \{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*)

in the formula, p_i is the classification probability of the different detection classes; t_i = \{t_x, t_y, t_w, t_h\} is a vector representing the offsets of the candidate frame, where t_x is the x-axis coordinate offset, t_y is the y-axis coordinate offset, t_w is the candidate frame width offset and t_h is the candidate frame height offset; N_{cls} is the total number of sample data; i is the detection class; λ is the proportional trade-off parameter between the classification loss and the regression loss; t_i^* is a vector of the same dimension as t_i, representing the offset of the candidate frame to the target area marking frame; and N_{reg} is the number of regression candidate frames;

the classification loss function of the predicted target, L_{cls}(p_i), uses Focal Loss and is defined as follows:

L_{cls}(p_i) = -\alpha_i (1 - p_i)^{\gamma} \log(p_i)

in the formula, α_i is the sample adjustment ratio, α_i ∈ [0, 1], and γ is a positive number;

the position loss function of the regression prediction, L_{reg}(t_i, t_i^*), is:

L_{reg}(t_i, t_i^*) = \mathrm{smooth}_{L_1}(t_i - t_i^*)

setting x = t_i - t_i^*, then

\mathrm{smooth}_{L_1}(x) = \begin{cases} 0.5\,x^2, & |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases}
According to the method for identifying the side frame fracture fault image of the railway wagon bogie, the image to be detected whose detection category is fracture in step three is binarized so that the pixel value of the fracture part is 1 and the pixel value of the non-fracture part is 0; the fracture part is used as a mask over the original image to be detected, the average pixel value of the masked area is evaluated, and if it is lower than a set pixel threshold the fault is identified and an alarm is given.
The invention has the beneficial effects that: the method is based on a deep learning detection network and is used for detecting the fracture condition of the side frame component in the acquired image. The method comprises the steps of firstly training an inceptionv2 network and a Faster rcnn network based on sample data to obtain a corresponding network model, and then processing an image to be detected based on the model to realize fault identification.
The convolutional neural network model obtained by the method performs fault analysis on the image to be detected using advanced image processing and pattern recognition according to prior knowledge, and judges whether the side frame is broken. The area of the side frame image containing the fracture is uploaded for alarming, so that staff can deal with it promptly according to the alarm position. The invention can reliably detect side frame breakage, thereby ensuring the safe operation of the railway wagon, and can effectively improve the accuracy of fracture detection.
Drawings
FIG. 1 is a flow chart of a method for identifying a fault image of a side frame break of a railway freight car truck according to the present invention;
FIG. 2 is a flowchart of the calculation of weighting coefficients during the training of the convolutional neural network inceptionv2 and the convolutional neural network Faster rcnn;
FIG. 3 is a training flow diagram for obtaining inceptionv2 and the Faster rcnn model;
fig. 4 is a flowchart of processing an image to be detected.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
In a first specific embodiment, as shown in fig. 1, the invention provides a method for identifying a rail wagon bogie side frame fracture fault image, which comprises the following steps:
step one: acquiring original gray level images of side frames of a truck bogie in operation, determining a side frame area of each gray level image according to truck bogie axle distance information and position information, preprocessing the side frame area to obtain side frame area sample images, forming a sample image set by all side frame area sample images, configuring marking information for each side frame area sample image to form a marking file, and forming a sample data set based on the sample image set and the marking file;
step two: training a convolutional neural network inceptionv2 and a convolutional neural network Faster rcnn by using the sample data set to obtain a trained inceptionv2 model and a Faster rcnn model;
step three: processing the image to be detected by using the trained inceptionv2 model and the Faster rcnn model to obtain the corresponding side frame state detection category, thereby realizing fault identification.
In this embodiment, the original grayscale images are acquired as follows: imaging devices are mounted on both sides of the railway track, and when a railway wagon passes, the imaging devices capture line-scan images of the wagon from both sides. The acquired images are high-definition, clear grayscale images. Because wagon side frame components may be affected by rain, mud, oil, black paint and other natural or man-made conditions, and the images taken at different stations may differ, the side frame component images vary widely. Therefore, during the collection of side frame image data, side frame images under as many conditions as possible are collected to ensure diversity.
Side frame components also differ in form between wagon types. Because the frequency with which different wagon types appear varies greatly, side frame components of the less common types are difficult to collect. Therefore, side frame components of all types are treated as a single class, and the sample data set is built entirely on that basis.
The markup file may be an xml file.
The sample image sets and the information about the side frame area sample images in the tag file are in one-to-one correspondence, i.e., each side frame area sample image corresponds to one tag information. When the sample data set is used for training the neural network, the data information input each time comprises side frame area sample image information and corresponding marking information.
To make the images more targeted, the side frame component area can be preliminarily cropped from the original grayscale image according to prior knowledge such as the hardware setup, axle distance information and relevant positions.
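For illustration, a minimal sketch of this preliminary cropping is given below; the coordinate values would in practice come from the axle distance and camera position priors, and the function name and parameters are assumptions rather than part of the patented method.

```python
import cv2

def crop_side_frame_region(gray_image, x_offset, y_offset, width, height):
    """Preliminarily crop the side frame component area from the original
    grayscale line-scan image using prior knowledge (axle distance,
    camera position). All offsets and sizes here are placeholders."""
    h, w = gray_image.shape[:2]
    x0 = max(0, min(x_offset, w - 1))
    y0 = max(0, min(y_offset, h - 1))
    x1 = min(w, x0 + width)
    y1 = min(h, y0 + height)
    return gray_image[y0:y1, x0:x1]

# Hypothetical usage; the numbers below are illustrative only.
# gray = cv2.imread("wagon_side_full.png", cv2.IMREAD_GRAYSCALE)
# side_frame_roi = crop_side_frame_region(gray, x_offset=1200, y_offset=300,
#                                         width=2048, height=1024)
```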
The specific implementation process of this embodiment is shown in fig. 1, and the step of determining the side frame region of each gray scale image in the step one is the initial positioning step in fig. 1. The side frame image in fig. 1 is the image to be detected.
Further, pre-treating the sideframe region comprises:
data amplification is performed on the sideframe region and contrast is improved.
Although the establishment of the sample image set includes images under various conditions as much as possible, data amplification is still required to be performed on the images in order to improve the stability of the algorithm.
Because the imaging devices at different stations differ in mounting angle and distance, the brightness of the acquired images differs, and some images are too dark for the fracture area of the side frame to be observed clearly; therefore, local adaptive contrast enhancement can be applied to the images before they enter the deep learning network.
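The patent does not name the specific local adaptation algorithm; contrast limited adaptive histogram equalization (CLAHE) is one common choice, sketched below as an assumption.

```python
import cv2

def enhance_contrast_locally(gray_roi, clip_limit=2.0, tile_grid=(8, 8)):
    """Apply local adaptive contrast enhancement (CLAHE, used here as one
    plausible realisation) to a grayscale side frame region before it is
    fed to the deep learning network. Parameter values are illustrative."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(gray_roi)
```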
By way of example, the form of data amplification includes at least one of:
the side frame region with the improved contrast is rotated, translated, scaled and mirrored.
Data amplification is carried out under random conditions, which guarantees the diversity and applicability of the samples to the greatest extent.
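A minimal sketch of random rotation, translation, scaling and mirroring is given below; the parameter ranges are illustrative assumptions, not values specified by the patent.

```python
import random
import cv2

def random_augment(image):
    """Randomly rotate, translate, scale and mirror a side frame sample image.
    The ranges below are illustrative assumptions."""
    h, w = image.shape[:2]
    angle = random.uniform(-5, 5)           # rotation in degrees
    scale = random.uniform(0.9, 1.1)        # isotropic scaling
    tx = random.uniform(-0.05, 0.05) * w    # horizontal translation
    ty = random.uniform(-0.05, 0.05) * h    # vertical translation

    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    m[0, 2] += tx
    m[1, 2] += ty
    out = cv2.warpAffine(image, m, (w, h), borderMode=cv2.BORDER_REFLECT)

    if random.random() < 0.5:               # mirror horizontally half the time
        out = cv2.flip(out, 1)
    return out
```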
Still further, the marking information includes:
image name, detection category, and top left and bottom right coordinates of the target area in the sideframe area sample image.
The image name is used to distinguish different images, and may be a serial number or an alphabetical identification, for example.
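As a concrete illustration of such marking information, the sketch below writes one xml entry containing an image name, a detection category and the top-left and bottom-right coordinates of the target area; the tag layout (Pascal-VOC-like) and all names are assumptions, since the patent does not fix a schema.

```python
import xml.etree.ElementTree as ET

def write_marking_file(xml_path, image_name, category, x_min, y_min, x_max, y_max):
    """Write one marking entry: image name, detection category and the
    top-left / bottom-right corner coordinates of the target area."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = image_name
    obj = ET.SubElement(root, "object")
    ET.SubElement(obj, "name").text = category       # e.g. "fracture"
    box = ET.SubElement(obj, "bndbox")
    ET.SubElement(box, "xmin").text = str(x_min)
    ET.SubElement(box, "ymin").text = str(y_min)
    ET.SubElement(box, "xmax").text = str(x_max)
    ET.SubElement(box, "ymax").text = str(y_max)
    ET.ElementTree(root).write(xml_path)

# Hypothetical usage with placeholder coordinates:
# write_marking_file("sample_0001.xml", "sample_0001.jpg", "fracture", 120, 80, 360, 240)
```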
Still further, the detection category includes at least one of fracture, water flow, chalk, and shadow.
Because the side frame carries a large number of water flow marks and chalk marks, and side frame shadows have image characteristics similar to those of fractures, the images are marked as fracture, water flow, chalk and shadow, and the detection categories are obtained by manual marking.
Still further, the convolutional neural network inceptionv2 and the convolutional neural network Faster rcnn adopt COCO model parameters to initialize the network parameters.
Still further, the side frame area sample image is input into a convolutional neural network inceptionv2 for feature extraction, and a low-dimensional feature map is obtained.
Further, the low-dimensional feature map is input into the RPN layer of the convolutional neural network Faster rcnn to generate a number of candidate frames for the target area; the image category of each candidate frame is classified as foreground or background, and the candidate frame positions are adjusted by regression;
each obtained foreground candidate frame is divided uniformly into 9 x 9 blocks by the ROI Pooling layer, and max pooling is applied to each block; all foreground candidate frames are thereby converted into data of the same size and sent to the fully connected layer, which performs the final target area detection category classification and candidate frame position regression.
The specific process of classifying each candidate frame as foreground or background is as follows: a sliding window is slid over the low-dimensional feature map and its center is mapped onto the side frame area sample image; when the overlap (IOU) between the mapped area and the corresponding target area in the marking file is greater than 0.7, the candidate frame area is a positive sample, and when the overlap is less than 0.3 it is a negative sample. The RPN layer is then trained with positive and negative samples at a ratio of 1:1. For the classification and regression task of the final training output, 64 foreground and background candidate frames are randomly extracted, where a frame is taken as foreground when its IOU with the true target marking position is greater than 0.5 and as background when the IOU is greater than 0.1 and less than 0.5.
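The IOU computation and the labeling thresholds described above can be restated in code as follows; this is only a sketch of the rule (boxes assumed in (x1, y1, x2, y2) form), not the actual RPN implementation.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def rpn_sample_label(candidate_box, target_box):
    """Label a candidate frame for RPN training:
    1 = positive (IOU > 0.7), 0 = negative (IOU < 0.3), None = ignored."""
    overlap = iou(candidate_box, target_box)
    if overlap > 0.7:
        return 1
    if overlap < 0.3:
        return 0
    return None
```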
The ROI Pooling layer is a max Pooling of fixed output size.
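A simplified sketch of this fixed-output-size max pooling is shown below: the candidate-frame region of a 2-D feature map is divided uniformly into 9 x 9 blocks and the maximum of each block is kept. Real Faster rcnn implementations work on multi-channel feature maps with sub-pixel box coordinates; this sketch only illustrates the principle.

```python
import numpy as np

def roi_max_pool(feature_map, box, out_size=9):
    """Divide the candidate-frame region of a 2-D feature map into
    out_size x out_size blocks and take the maximum of each block,
    giving a fixed-size output regardless of the box size."""
    x1, y1, x2, y2 = box
    region = feature_map[y1:y2, x1:x2]
    h, w = region.shape
    ys = np.linspace(0, h, out_size + 1, dtype=int)   # row block boundaries
    xs = np.linspace(0, w, out_size + 1, dtype=int)   # column block boundaries
    out = np.zeros((out_size, out_size), dtype=region.dtype)
    for i in range(out_size):
        for j in range(out_size):
            block = region[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            out[i, j] = block.max() if block.size else 0
    return out
```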
The specific process for obtaining inceptionv2 and the Faster rcnn model is shown in fig. 3.
Still further, the loss function L({p_i}, {t_i}) of the side frame area sample images during training of the convolutional neural network inceptionv2 and the convolutional neural network Faster rcnn is defined as follows:

L(\{p_i\}, \{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*)

in the formula, p_i is the classification probability of the different detection classes; t_i = \{t_x, t_y, t_w, t_h\} is a vector representing the offsets of the candidate frame, where t_x is the x-axis coordinate offset, t_y is the y-axis coordinate offset, t_w is the candidate frame width offset and t_h is the candidate frame height offset; N_{cls} is the total number of sample data; i is the detection class; λ is the proportional trade-off parameter between the classification loss and the regression loss; t_i^* is a vector of the same dimension as t_i, representing the offset of the candidate frame to the target area marking frame; and N_{reg} is the number of regression candidate frames;

the classification loss function of the predicted target, L_{cls}(p_i), uses Focal Loss and is defined as follows:

L_{cls}(p_i) = -\alpha_i (1 - p_i)^{\gamma} \log(p_i)

in the formula, α_i is the sample adjustment ratio, α_i ∈ [0, 1], and γ is a positive number;

the position loss function of the regression prediction, L_{reg}(t_i, t_i^*), is:

L_{reg}(t_i, t_i^*) = \mathrm{smooth}_{L_1}(t_i - t_i^*)

setting x = t_i - t_i^*, then

\mathrm{smooth}_{L_1}(x) = \begin{cases} 0.5\,x^2, & |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases}
In this embodiment, Focal Loss is introduced into Faster rcnn as the classification loss function, which avoids the influence of class imbalance and improves detection accuracy.
When the sample is positive, p_i^* is 1; when the sample is negative, p_i^* is 0.
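The loss defined above (Focal Loss for classification plus a smooth-L1 regression term gated by p_i^*) can be sketched numerically as below. The reduction over the batch and the choice of N_reg as the number of positive samples are assumptions made for the sketch, not details fixed by the patent.

```python
import numpy as np

def focal_loss(p, alpha=0.25, gamma=2.0):
    """L_cls(p_i) = -alpha_i * (1 - p_i)^gamma * log(p_i); p is the predicted
    probability of the true class. alpha and gamma are illustrative values."""
    p = np.clip(p, 1e-7, 1.0)
    return -alpha * (1.0 - p) ** gamma * np.log(p)

def smooth_l1(x):
    """smooth_L1(x) = 0.5 x^2 if |x| < 1, |x| - 0.5 otherwise."""
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)

def total_loss(p, p_star, t, t_star, lam=1.0):
    """L = (1/N_cls) sum_i L_cls(p_i) + lam * (1/N_reg) sum_i p_i* L_reg(t_i, t_i*).
    p:      (N,)  predicted probability of the labeled class
    p_star: (N,)  1 for positive samples, 0 for negative samples
    t:      (N,4) predicted offsets (t_x, t_y, t_w, t_h)
    t_star: (N,4) target offsets of the marking frame"""
    p = np.asarray(p, dtype=float)
    p_star = np.asarray(p_star, dtype=float)
    t = np.asarray(t, dtype=float)
    t_star = np.asarray(t_star, dtype=float)
    n_cls = len(p)
    n_reg = max(int(p_star.sum()), 1)      # assumed: number of positive samples
    cls_term = focal_loss(p).sum() / n_cls
    reg_term = (p_star[:, None] * smooth_l1(t - t_star)).sum() / n_reg
    return cls_term + lam * reg_term
```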
Further, with reference to fig. 4, the image to be detected whose detection category is fracture in step three is binarized so that the pixel value of the fracture part is 1 and that of the non-fracture part is 0; the fracture part is used as a mask over the original image to be detected, the average pixel value of the masked area is evaluated, and if it is lower than a set pixel threshold the fault is identified and an alarm is given.
In the process of training the inceptionv2 model and the Faster rcnn model, trained weight coefficients are obtained, as shown in fig. 1 and fig. 2. The image to be detected is then converted by the inceptionv2 and Faster rcnn models, and the positions of the four categories (side frame fracture, water flow, chalk and shadow) are predicted using the trained weight coefficients.
If the average pixel value of the masked part is lower than the set threshold, a fault alarm is raised for that part of the side frame. If it is not lower than the threshold, the side frame state is normal, and processing continues with the next side frame image.
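A sketch of this final judgment step is given below: the detected fracture region is binarized into a mask (1 inside, 0 outside), the mask is applied to the original image to be detected, and the mean pixel value inside the mask is compared with the set threshold. The threshold value used here is a placeholder, not a value from the patent.

```python
import numpy as np

def confirm_fracture(gray_image, fracture_box, pixel_threshold=60):
    """Return True (raise an alarm) if the mean pixel value of the masked
    fracture region is below the set threshold; otherwise the side frame
    is treated as normal. The threshold of 60 is an illustrative placeholder."""
    mask = np.zeros(gray_image.shape[:2], dtype=np.uint8)
    x1, y1, x2, y2 = fracture_box
    mask[y1:y2, x1:x2] = 1                 # fracture part = 1, non-fracture = 0
    masked_pixels = gray_image[mask == 1]
    if masked_pixels.size == 0:
        return False
    return float(masked_pixels.mean()) < pixel_threshold
```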
In conclusion, the method of the invention replaces the existing manual detection with the automatic image identification mode, thus improving the detection efficiency and the detection accuracy; the method applies the deep learning algorithm to the automatic identification of the side frame fracture fault, and improves the robustness and the precision of the whole algorithm.
Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the appended claims. It should be understood that features described in different dependent claims and herein may be combined in ways different from those described in the original claims. It is also to be understood that features described in connection with individual embodiments may be used in other described embodiments.

Claims (5)

1. A rail wagon bogie side frame fracture fault image identification method is characterized by comprising the following steps:
step one: acquiring original gray level images of side frames of a truck bogie in operation, determining a side frame area of each gray level image according to truck bogie axle distance information and position information, preprocessing the side frame area to obtain side frame area sample images, forming a sample image set by all side frame area sample images, configuring marking information for each side frame area sample image to form a marking file, and forming a sample data set based on the sample image set and the marking file;
step two: training a convolutional neural network inceptionv2 and a convolutional neural network Faster rcnn by using the sample data set to obtain a trained inceptionv2 model and a Faster rcnn model;
step three: processing the image to be detected by using the trained inceptionv2 model and the Faster rcnn model to obtain the corresponding side frame state detection category and realize fault identification;
pretreating the sideframe region comprises:
performing data amplification on the side frame region and improving the contrast;
the data amplification format includes at least one of:
rotating, translating, zooming and mirroring the side frame area with the improved contrast;
the marking information includes:
image name, detection type and coordinates of the upper left corner and the lower right corner of a target area in the side frame area sample image;
the detection types comprise at least one of fracture, water flow, chalk and shadow;
binarizing the image to be detected with the detection category of fracture in the third step to enable the pixel value of the fracture part to be 1 and the pixel value of the non-fracture part to be 0; masking the broken part in comparison with the original image to be detected, judging the average pixel of the masked area, if the average pixel is lower than a set pixel threshold value, identifying the fault, and giving an alarm.
2. The method of image recognition of a rail wagon truck sideframe break fault as defined in claim 1,
the convolutional neural network inceptionv2 and the convolutional neural network Faster rcnn adopt COCO model parameters to initialize network parameters.
3. The method of image recognition of a rail wagon truck sideframe break fault as defined in claim 2,
and inputting the side frame area sample image into a convolutional neural network inceptionv2 for feature extraction to obtain a low-dimensional feature map.
4. The method of image recognition of a rail wagon truck sideframe break fault as defined in claim 3,
inputting the low-dimensional feature map into an RPN layer of a convolutional neural network Faster rcnn to generate a plurality of candidate frames of the target area, distinguishing the image category of each candidate frame into a foreground and a background, and performing regression adjustment on the positions of the candidate frames;
dividing each obtained foreground candidate frame uniformly into 9 x 9 blocks by using an ROI Pooling layer, and performing max pooling on each block; and converting all the foreground candidate frames into data of the same size, sending the data into a fully connected layer, and performing the final target area detection category classification and candidate frame position regression of the target area.
5. The method of image recognition of a rail wagon truck sideframe break fault as defined in claim 4,
the loss function L({p_i}, {t_i}) of the side frame area sample images during training of the convolutional neural network inceptionv2 and the convolutional neural network Faster rcnn is defined as follows:

L(\{p_i\}, \{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*)

in the formula, p_i is the classification probability of the different detection classes; t_i = \{t_x, t_y, t_w, t_h\} is a vector representing the offsets of the candidate frame, where t_x is the x-axis coordinate offset, t_y is the y-axis coordinate offset, t_w is the candidate frame width offset and t_h is the candidate frame height offset; N_{cls} is the total number of sample data; i is the detection class; λ is the proportional trade-off parameter between the classification loss and the regression loss; t_i^* is a vector of the same dimension as t_i, representing the offset of the candidate frame to the target area marking frame; and N_{reg} is the number of regression candidate frames;

the classification loss function of the predicted target, L_{cls}(p_i), uses Focal Loss and is defined as follows:

L_{cls}(p_i) = -\alpha_i (1 - p_i)^{\gamma} \log(p_i)

in the formula, α_i is the sample adjustment ratio, α_i ∈ [0, 1], and γ is a positive number;

the position loss function of the regression prediction, L_{reg}(t_i, t_i^*), is:

L_{reg}(t_i, t_i^*) = \mathrm{smooth}_{L_1}(t_i - t_i^*)

setting x = t_i - t_i^*, then

\mathrm{smooth}_{L_1}(x) = \begin{cases} 0.5\,x^2, & |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases}
CN201911272479.3A 2019-12-12 2019-12-12 Railway wagon bogie side frame fracture fault image identification method Active CN111079747B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911272479.3A CN111079747B (en) 2019-12-12 2019-12-12 Railway wagon bogie side frame fracture fault image identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911272479.3A CN111079747B (en) 2019-12-12 2019-12-12 Railway wagon bogie side frame fracture fault image identification method

Publications (2)

Publication Number Publication Date
CN111079747A CN111079747A (en) 2020-04-28
CN111079747B true CN111079747B (en) 2020-10-09

Family

ID=70314097

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911272479.3A Active CN111079747B (en) 2019-12-12 2019-12-12 Railway wagon bogie side frame fracture fault image identification method

Country Status (1)

Country Link
CN (1) CN111079747B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652296A (en) * 2020-05-21 2020-09-11 哈尔滨市科佳通用机电股份有限公司 Deep learning-based rail wagon lower pull rod fracture fault detection method
CN111931577A (en) * 2020-07-07 2020-11-13 国网上海市电力公司 Intelligent inspection method for specific foreign matters of power grid line
CN112233095B (en) * 2020-10-16 2021-05-28 哈尔滨市科佳通用机电股份有限公司 Method for detecting multiple fault forms of railway wagon locking plate device
CN112308135A (en) * 2020-10-29 2021-02-02 哈尔滨市科佳通用机电股份有限公司 Railway motor car sand spreading pipe loosening fault detection method based on deep learning
CN112348894B (en) * 2020-11-03 2022-07-29 中冶赛迪重庆信息技术有限公司 Method, system, equipment and medium for identifying position and state of scrap steel truck
CN112508034B (en) * 2020-11-03 2024-04-30 精英数智科技股份有限公司 Freight train fault detection method and device and electronic equipment
CN112330631B (en) * 2020-11-05 2021-06-04 哈尔滨市科佳通用机电股份有限公司 Railway wagon brake beam pillar rivet pin collar loss fault detection method
CN112329858B (en) * 2020-11-06 2021-07-16 哈尔滨市科佳通用机电股份有限公司 Image recognition method for breakage fault of anti-loosening iron wire of railway motor car
CN112329859A (en) * 2020-11-06 2021-02-05 哈尔滨市科佳通用机电股份有限公司 Method for identifying lost fault image of sand spraying pipe nozzle of railway motor car
CN112418253B (en) * 2020-12-18 2021-08-24 哈尔滨市科佳通用机电股份有限公司 Sanding pipe loosening fault image identification method and system based on deep learning
CN112613560A (en) * 2020-12-24 2021-04-06 哈尔滨市科佳通用机电股份有限公司 Method for identifying front opening and closing damage fault of railway bullet train head cover based on Faster R-CNN
CN112906534A (en) * 2021-02-07 2021-06-04 哈尔滨市科佳通用机电股份有限公司 Lock catch loss fault detection method based on improved Faster R-CNN network
CN112907532B (en) * 2021-02-10 2022-03-08 哈尔滨市科佳通用机电股份有限公司 Improved truck door falling detection method based on fast RCNN
CN115240172B (en) * 2022-07-12 2023-04-07 哈尔滨市科佳通用机电股份有限公司 Relieving valve loss detection method based on deep learning
CN115170882A (en) * 2022-07-19 2022-10-11 哈尔滨市科佳通用机电股份有限公司 Optimization method of rail wagon part detection network and guardrail breaking fault identification method
CN115346172B (en) * 2022-08-16 2023-04-21 哈尔滨市科佳通用机电股份有限公司 Method and system for detecting lost and broken hook lifting rod reset spring

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105260716A (en) * 2015-10-13 2016-01-20 长沙威胜信息技术有限公司 Fault indicator state identification method and fault indicator state identification device
CN106226050A (en) * 2016-07-15 2016-12-14 北京航空航天大学 A kind of TFDS fault automatic identifying method
CN108133186A (en) * 2017-12-21 2018-06-08 东北林业大学 A kind of plant leaf identification method based on deep learning
CN109165541A (en) * 2018-05-30 2019-01-08 北京飞鸿云际科技有限公司 Coding method for vehicle component in intelligent recognition rail traffic vehicles image
CN110175976A (en) * 2019-01-23 2019-08-27 上海应用技术大学 II plate-type ballastless track Crack Detection classification method of CRTS based on machine vision

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101261681B (en) * 2008-03-31 2011-07-20 北京中星微电子有限公司 Road image extraction method and device in intelligent video monitoring
CN105251707B (en) * 2015-11-26 2017-07-28 长沙理工大学 Bad part eject sorting equipment based on medical large transfusion visible foreign matters detecting system
US10198671B1 (en) * 2016-11-10 2019-02-05 Snap Inc. Dense captioning with joint interference and visual context
US10782966B2 (en) * 2017-07-13 2020-09-22 Wernicke LLC Artificially intelligent self-learning software operating program

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105260716A (en) * 2015-10-13 2016-01-20 长沙威胜信息技术有限公司 Fault indicator state identification method and fault indicator state identification device
CN106226050A (en) * 2016-07-15 2016-12-14 北京航空航天大学 A kind of TFDS fault automatic identifying method
CN108133186A (en) * 2017-12-21 2018-06-08 东北林业大学 A kind of plant leaf identification method based on deep learning
CN109165541A (en) * 2018-05-30 2019-01-08 北京飞鸿云际科技有限公司 Coding method for vehicle component in intelligent recognition rail traffic vehicles image
CN110175976A (en) * 2019-01-23 2019-08-27 上海应用技术大学 II plate-type ballastless track Crack Detection classification method of CRTS based on machine vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Deep learning-based fault image detection for key components of EMU trains; Zhang Jiangyong; China Master's Theses Full-text Database, Engineering Science and Technology II; 2019-09-17; main text pages 11, 27-29, 35, 37, 38, 40-43 *

Also Published As

Publication number Publication date
CN111079747A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
CN111079747B (en) Railway wagon bogie side frame fracture fault image identification method
CN111091544B (en) Method for detecting breakage fault of side integrated framework of railway wagon bogie
CN110261436B (en) Rail fault detection method and system based on infrared thermal imaging and computer vision
CN113160192B (en) Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background
CN111652227B (en) Method for detecting damage fault of bottom floor of railway wagon
CN111080620A (en) Road disease detection method based on deep learning
CN111079746B (en) Railway wagon axle box spring fault image identification method
CN111260629A (en) Pantograph structure abnormity detection algorithm based on image processing
CN103442209A (en) Video monitoring method of electric transmission line
CN106778833A (en) Small object loses the automatic identifying method of failure under a kind of complex background
CN111080600A (en) Fault identification method for split pin on spring supporting plate of railway wagon
CN111896540B (en) Water quality on-line monitoring system based on block chain
CN111091548B (en) Railway wagon adapter dislocation fault image identification method and system based on deep learning
CN110533023A (en) It is a kind of for detect identification railway freight-car foreign matter method and device
CN115995056A (en) Automatic bridge disease identification method based on deep learning
CN112329858B (en) Image recognition method for breakage fault of anti-loosening iron wire of railway motor car
CN112508911A (en) Rail joint touch net suspension support component crack detection system based on inspection robot and detection method thereof
CN115527170A (en) Method and system for identifying closing fault of door stopper handle of automatic freight car derailing brake device
CN115424128A (en) Fault image detection method and system for lower link of freight car bogie
CN111080599A (en) Fault identification method for hook lifting rod of railway wagon
CN114155226A (en) Micro defect edge calculation method
CN112085723B (en) Automatic detection method for spring jumping fault of truck bolster
CN111091554B (en) Railway wagon swing bolster fracture fault image identification method
CN111402185A (en) Image detection method and device
CN114581364A (en) GIS defect detection method based on X-ray imaging and Sobel-SCN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant