CN116030430A - Rail identification method, device, equipment and storage medium - Google Patents


Info

Publication number
CN116030430A
CN116030430A (application CN202211736998.2A)
Authority
CN
China
Prior art keywords
rail
image
data
fusion
equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211736998.2A
Other languages
Chinese (zh)
Inventor
张志勇
张成杰
徐洪
刘硕
Current Assignee
Chongqing Cisai Tech Co Ltd
Original Assignee
Chongqing Cisai Tech Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Cisai Tech Co Ltd
Priority to CN202211736998.2A
Publication of CN116030430A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The application provides a rail identification method, device, equipment, and storage medium, relating to the technical field of image recognition. The method comprises the following steps: performing data fusion on rail data acquired by the image acquisition equipment and the radio detection equipment to obtain rail fusion image data; performing data processing on the rail fusion image data to obtain a rail contour image; and extracting coordinate points from the rail contour image using a sliding window to obtain the target rail position. Fusing the rail data acquired by the two sensors avoids the defects of low precision, susceptibility to failure, and poor stability caused by using a single sensor, and enables all-weather identification and extraction of the rail. Compared with image recognition methods such as deep learning, the method provides accurate transverse and longitudinal position information of the rail target object, reduces the heavy hardware requirements of complex algorithms, improves rail identification efficiency, and offers both accuracy and robustness.

Description

Rail identification method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of image recognition technologies, and in particular, to a rail recognition method, apparatus, device, and storage medium.
Background
Foreign object intrusion into the clearance gauge is one of the main causes of rail locomotive accidents, so identifying and extracting the rails from images is of great significance for detecting obstacles on the track. While the locomotive is running, a region of interest (ROI) is set based on the rails extracted in real time, and obstacles intruding on the rails are detected in real time. The sensors used at home and abroad to detect obstacles ahead of running vehicles mainly include machine vision sensors, infrared imaging sensors, radar, and laser ranging sensors.
At present, most existing rail identification and extraction schemes use a single vision camera sensor, which suffers from insufficient identification precision, poor stability, susceptibility to weather, and outright failure. Most existing methods rely on image recognition algorithms: they extract well at the pixel level, but compute the real-world coordinates of the target object with poor precision. Moreover, most of these image recognition algorithms use deep learning, which places high demands on processing hardware and involves heavy computation, making it difficult to guarantee real-time rail identification and extraction.
Disclosure of Invention
In view of this, an object of the embodiments of the present application is to provide a rail identification method, apparatus, device, and storage medium that fuse two sensors, an image acquisition device and a radio detection device. The fusion overcomes the defects of low accuracy, susceptibility to failure, and poor stability caused by a single sensor, enabling all-weather rail identification and extraction. Because the radio detection device assists the image acquisition device, accurate transverse and longitudinal position information of the target object can be provided, solving the poor real-world coordinate precision of purely image-based recognition. A new algorithm architecture built on this fusion performs rail identification and extraction rapidly and in real time, avoiding the heavy hardware requirements of complex image-processing methods such as deep learning, and thereby solves the technical problems above.
In a first aspect, an embodiment of the present application provides a rail identification method, including: performing data fusion on rail data acquired by the image acquisition equipment and the radio detection equipment to obtain rail fusion image data; performing data processing on the rail fusion image data to obtain a rail contour image; and extracting coordinate points from the rail contour image using a sliding window to obtain the target rail position.
In the implementation process, fusing the two sensors, the image acquisition equipment and the radio detection equipment, avoids the defects of low precision, susceptibility to failure, and poor stability caused by using a single sensor, and achieves all-weather identification and extraction of the rail. Compared with image recognition methods such as deep learning, the method provides accurate transverse and longitudinal position information of the target object, solves the poor real-world coordinate precision of purely image-based algorithms, reduces the heavy hardware requirements of complex algorithms, and improves rail identification efficiency.
Optionally, the data processing of the rail fusion image data to obtain a rail contour image includes: preprocessing the rail fusion image data to obtain a rail area target image; and performing filtering convolution processing on the rail area target image to obtain a rail contour image.
In the implementation process, the influence of illumination and noise can be eliminated by preprocessing the rail fusion image data, the image quality is improved, the image processing range is controlled, and the data calculation amount is reduced; the rail area target image is subjected to filtering convolution processing, so that the outline edge in the rail image is rapidly extracted, and the rail identification efficiency is improved.
Optionally, the preprocessing of the rail fusion image data to obtain a rail area target image includes: taking the rail area as the region of interest and cropping the rail fusion image data to obtain a rail area cropped image; performing grayscale conversion on the rail area cropped image to obtain a rail area grayscale image; and performing image quality enhancement on the rail area grayscale image to obtain the rail area target image.
In the implementation process, cropping the rail fusion image data with the region of interest keeps only the data at the rail area, reducing the amount of data to process; grayscale conversion uniformly turns the color image into a grayscale image; and image quality enhancement such as histogram equalization and Gaussian blur improves contrast, enhances image quality, filters out some image noise, and improves the accuracy of subsequent rail identification.
Optionally, the filtering convolution processing is performed on the rail area target image to obtain a rail outline image, which includes: setting an edge detection convolution kernel according to the actual width of the rail and the actual position of the image recognition equipment; and carrying out filtering convolution processing on the rail region target image based on the edge detection convolution kernel to obtain a rail contour image.
In the implementation process, the edge detection convolution kernel is set according to the actual width of the rail and the actual position of the image recognition equipment, and the convolution kernel filter is adopted to carry out filtering convolution processing on the rail area target image, so that the processing speed is high, real-time detection can be realized, and the accuracy of subsequent rail recognition is improved.
Optionally, the row-count and column-count parameters of the edge detection convolution kernel are related to the fineness of rail contour recognition and can be adjusted according to the fineness required.
In the implementation process, the edge contour detection is performed by setting the edge detection convolution kernel with parameter configuration, so that the optimal parameters can be configured according to different scenes, the application range of rail identification is widened, and the rail identification accuracy is improved.
Optionally, extracting coordinate points from the rail contour image using a sliding window to obtain the target rail position includes: traversing the rail contour image along the contour-line direction with a sliding window to obtain traversed pixel coordinates; computing, along the contour-line direction, the sum over the traversed pixels within the sliding window to obtain a plurality of pixel coordinate points; and determining the coordinate point of maximum sum among the plurality of pixel coordinate points as the target rail position.
In the implementation process, the sliding window detection algorithm is adopted to traverse the rail contour image along the contour line direction, the sum of the coordinates of the traversed pixels in the sliding window is calculated along the contour line direction, the coordinate point of the maximum value of the sum is determined as the target rail position, the processing speed is high, real-time detection can be realized, and the rail identification accuracy and efficiency are improved.
Optionally, the data fusion of the rail data collected by the image recognition device and the radio detection device to obtain rail fusion image data includes: carrying out space synchronization on rail data acquired by the image recognition equipment and the radio detection equipment to obtain space synchronization data; and carrying out time synchronization on the space synchronization data to obtain rail fusion image data.
In the implementation process, the two sensor data of the image recognition equipment and the radio detection equipment are fused, so that the defects of low precision, easy failure, poor stability and the like caused by using a single sensor are avoided, all-weather rail recognition and extraction are realized, the processing speed is high, real-time detection can be realized, the application range of rail recognition is widened, and the efficiency and the accuracy are high.
In a second aspect, embodiments of the present application provide a rail identification apparatus, the apparatus comprising: an image acquisition device, a radio detection device, and a controller, with the image acquisition device and the radio detection device electrically connected to the controller. The image acquisition device and the radio detection device are used to acquire rail data of the track area. The controller is used to perform data fusion on the rail data acquired by the image acquisition device and the radio detection device to obtain rail fusion image data; perform data processing on the rail fusion image data to obtain a rail contour image; and extract coordinate points from the rail contour image using a sliding window to obtain the target rail position.
In a third aspect, embodiments of the present application further provide an electronic device, including a processor and a memory storing machine-readable instructions executable by the processor; when the electronic device runs, the instructions, executed by the processor, perform the steps of the method described above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method described above.
In order to make the above objects, features and advantages of the present application more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be considered as limiting the scope; a person skilled in the art may obtain other related drawings from them without inventive effort.
Fig. 1 is a flowchart of a rail identifying method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a rail recognition algorithm according to an embodiment of the present application;
FIG. 3 is an extracted rail profile image provided in an embodiment of the present application;
FIG. 4 is a diagram of a sliding window detection rail position image provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a rail identifying device according to an embodiment of the present application;
fig. 6 is a block schematic diagram of an electronic device for providing a rail identifying apparatus according to an embodiment of the present application.
Icon: 300-an electronic device; 311-memory; 312-a storage controller; 313-processor; 314-peripheral interface; 315-an input-output unit; 316-display unit.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. The terms "first," "second," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Before describing the embodiments of the present application, a brief description will be first made of several technical concepts involved:
sliding Window detection algorithm (Moving Window): the sliding window algorithm is similar to the window hopping algorithm in that it controls traffic by limiting the maximum number of cells that can be received in each time window. The difference is that in the sliding window algorithm, the time window does not jump forward, but slides forward once every cell time, the length of the sliding being the time of one cell. The sliding window based object detection algorithm may be: firstly, fixing one of the sliding window areas; then sliding the sliding window on the image according to a designated step length, predicting each sliding obtaining area, and judging the probability of the existence of a target in the area; and (5) adjusting the size of the sliding window and the sliding step length, continuously sliding in the same mode, and predicting. The window size of the sliding window is not completely fit with the target scale under most conditions; after a sliding window prediction, a plurality of results are output, and NMS (non-maximum suppression) is needed to be performed on the output results, and then the output results are used as final output results.
Histogram equalization: a simple and effective image enhancement technique changes the gray scale of each pixel in an image by changing the histogram of the image, and is mainly used for enhancing the contrast of an image with a smaller dynamic range. The original image may be concentrated in a narrower interval due to its gray scale distribution, resulting in an insufficient definition of the image. For example, the gray level of an overexposed image is concentrated in a high brightness range, while underexposure will concentrate the image gray level in a low brightness range. By adopting histogram equalization, the histogram of the original image can be converted into a uniformly distributed (equalized) form, so that the dynamic range of gray value difference between pixels is increased, and the effect of enhancing the overall contrast of the image is achieved. In other words, the basic principle of histogram equalization is: the gray values with a large number of pixels in the image (i.e. the gray values which play a main role in the picture) are widened, and the gray values with a small number of pixels (i.e. the gray values which do not play a main role in the picture) are merged, so that the contrast is increased, the image is clear, and the purpose of enhancement is achieved.
Gaussian Blur (Gaussian smoothing): a technique in image processing mainly used to reduce image noise and detail. Numerically, blurring is a smoothing; graphically, it creates a "blur" effect in which the center point loses detail. Gaussian blur attenuates the high-frequency information of the image and is therefore a low-pass filter; it is used in the preprocessing stage of computer vision algorithms to improve images at different scales.
The inventor has noted that existing rail recognition schemes mainly include: (1) based on image or video data, track extraction via prior clustering in Hough space for near tracks and adaptive path growth for far tracks; (2) based on image or video data, detecting the railway track for each of a plurality of pixel groups using a feature quantity, defined by a prescribed standard, that measures the likelihood of being railway track; (3) based on image or video data, designing an environment-aware instance segmentation model into which the image is fed to segment the complete track ahead in real time with high precision; (4) collecting point cloud data around the track, removing points far from the ground, converting the point cloud into a depth image, and extracting rail line information from the depth image. In summary, most existing rail identification and extraction schemes use a single traditional image processing algorithm or a deep learning image algorithm; few use multi-sensor fusion, their robustness is poor, and all-weather rail identification and extraction cannot be achieved. In view of this, embodiments of the present application provide the rail identification method described below.
Referring to fig. 1, fig. 1 shows a flowchart of a rail identifying method according to an embodiment of the present application, where the method includes: step 100, step 120 and step 140.
Step 100: carrying out data fusion on rail data acquired by the image recognition equipment and the radio detection equipment to obtain rail fusion image data;
step 120: performing data processing on the rail fusion image data to obtain a rail contour image;
step 140: extracting coordinate points from the rail contour image using a sliding window to obtain the target rail position.
Illustratively, the image acquisition device may be an image sensor, camera, video camera, machine vision sensor, or any other device that can be mounted on the locomotive head and capture images of the rail area ahead of the running locomotive. The radio detection device may be a millimeter-wave radar, laser ranging system, lidar, or similar device that can be mounted on the locomotive, collect point cloud data of the rail area ahead, and operate unaffected by rain, fog, night, and other adverse conditions. The data processing may be: cropping and segmentation, grayscale conversion, histogram equalization, Gaussian blur, filtering, and the like, performed on the fused image to delimit a target region of interest, followed by preprocessing and edge detection and extraction to obtain the contour edges of the target rail. In the following embodiments, a camera serves as the example image acquisition device and a millimeter-wave radar as the example radio detection device.
Optionally, as shown in fig. 2, a camera mounted at the front of the locomotive captures image data of the rail ahead, while the millimeter-wave radar senses the environment around the rail ahead and acquires rail point cloud data; data-level fusion of the camera and the millimeter-wave radar then merges the radar point cloud information into the rail image to obtain rail fusion image data. The fusion process may be: first, jointly calibrate the camera and the millimeter-wave radar to guarantee spatial synchronization; at the same time, synchronize the camera and radar timestamps by linear time interpolation; finally, project the radar point cloud coordinates onto the image pixels and match them with the image pixels. After fusion, the image undergoes data processing such as preprocessing and edge contour extraction to obtain the rail contour image, and a sliding window detection algorithm extracts coordinate points from the rail edge contour lines to obtain the final identified actual rail position coordinate points, where each coordinate point is a real-world coordinate whose origin is the joint calibration origin of the camera and the millimeter-wave radar. Because the positioning mode fuses the multiple sensors, rail position information is available from two positioning modes at all times, and rail positions can be detected and located in real time regardless of severe outside weather such as heavy rain, strong wind, or night, reducing the probability of positioning failure and providing accuracy and robustness.
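For illustration only, the time synchronization (linear interpolation) and radar-to-pixel projection steps might look as below. The helper names, and the pinhole-model intrinsics fx, fy, cx, cy, are assumptions, since the patent gives no calibration details:

```python
def interpolate_radar(t_cam, t0, p0, t1, p1):
    """Linearly interpolate a radar point between two radar frames
    (t0, p0) and (t1, p1) to the camera timestamp t_cam."""
    w = (t_cam - t0) / (t1 - t0)
    return tuple(a + w * (b - a) for a, b in zip(p0, p1))

def project_to_pixel(point, fx, fy, cx, cy):
    """Project a 3D point (X, Y, Z) in the jointly calibrated camera
    frame onto image pixel coordinates with a pinhole model
    (Z is the forward range)."""
    X, Y, Z = point
    return fx * X / Z + cx, fy * Y / Z + cy

# Radar point interpolated to the camera frame time, then projected.
# Example intrinsics (fx=fy=1000, cx=640, cy=360) are assumed values.
p = interpolate_radar(0.5, 0.0, (1.0, 0.0, 20.0), 1.0, (2.0, 0.0, 20.0))
print(project_to_pixel(p, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0))
# -> (715.0, 360.0)
```

In a real deployment the projection would also apply the extrinsic rotation and translation from the radar frame to the camera frame obtained during joint calibration.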
Fusing the two sensors, the image acquisition equipment and the radio detection equipment, avoids the defects of low precision, susceptibility to failure, and poor stability caused by using a single sensor, and achieves all-weather identification and extraction of the rail. Compared with image recognition methods such as deep learning, this approach provides accurate transverse and longitudinal position information of the target object, solves the poor real-world coordinate precision of purely image-based algorithms, reduces the heavy hardware requirements of complex algorithms, and improves rail identification efficiency.
In one embodiment, step 120 may include: step 121 and step 122.
Step 121: preprocessing rail fusion image data to obtain a rail area target image;
step 122: and performing filtering convolution processing on the rail region target image to obtain a rail contour image.
The preprocessing may be, for example, cropping and segmentation to delimit the target region of interest, grayscale conversion, histogram equalization, Gaussian blur, and so on, which reduce computation, eliminate the influence of illumination and noise, and improve image quality. The filtering convolution processing may extract edge contours with an edge detection algorithm, for example: median filtering to remove noise while preserving image edges, followed by the Sobel edge detection operator, which also smooths noise and yields more accurate rail contour edge direction information. Alternatively, the preprocessed rail image can be filtered by a rail filter that convolves the image data with a designed edge detection convolution kernel K matrix to obtain the rail contour edges.
By preprocessing the rail fusion image data, the influence of illumination and noise can be eliminated, the image quality is improved, the image processing range can be controlled, and the data calculation amount is reduced; the rail area target image is subjected to filtering convolution processing, so that the outline edge in the rail image is rapidly extracted, and the rail identification efficiency is improved.
In one embodiment, step 121 may include: step 1211, step 1212, and step 1213.
Step 1211: taking the rail area as an interested area, and cutting rail fusion image data to obtain a rail area cutting image;
step 1212: graying treatment is carried out on the rail region clipping image, and a rail region gray image is obtained;
step 1213: and carrying out image quality enhancement processing on the rail region gray level image to obtain a rail region target image.
The image quality enhancement may be image processing such as histogram equalization or Gaussian blur to improve image quality and remove noise. Cropping parameters of the ROI (region of interest) can be set to crop the fused image, retaining only the data at the rail area and reducing the amount of data to process. Assuming the fused image has width W and height H, and the ROI extraction parameters are Top, Left, and Right, where Top is the vertical cropping start point and Left and Right are the lateral cropping start and end points, the ROI image size is (Right − Left) × (H − Top).
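A minimal sketch of this ROI cropping rule (the `crop_roi` helper is ours, not the patent's):

```python
def crop_roi(image, top, left, right):
    """Crop a fused image (list of H rows, each W wide) to the rail
    region of interest: rows [top, H) and columns [left, right)."""
    return [row[left:right] for row in image[top:]]

# A W=8, H=6 image cropped with Top=2, Left=1, Right=7 yields a
# (Right - Left) x (H - Top) = 6 x 4 region.
img = [[r * 8 + c for c in range(8)] for r in range(6)]
roi = crop_roi(img, top=2, left=1, right=7)
print(len(roi[0]), len(roi))  # width, height -> 6 4
```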
For the ROI image, grayscale conversion, histogram equalization, and Gaussian blur are applied to obtain the rail area target image. Compared with directly processing the color (fused) image, grayscale processing occupies less memory and runs faster, and visibly increases contrast, highlighting the target area. Histogram equalization further improves contrast and makes the pixel distribution more uniform, enhancing the image. Gaussian blur applies Gaussian filtering to remove noise with a slight smoothing. Together these preprocessing steps reduce data throughput, improve image quality, and remove noise.
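As a sketch of the grayscale conversion step, here the common luma weights 0.299/0.587/0.114 are assumed; the patent does not specify which weighting is used:

```python
def to_gray(rgb_image):
    """Convert an RGB image (nested lists of (R, G, B) tuples) to
    grayscale using the common luma weights 0.299/0.587/0.114
    (an assumed convention, not fixed by the patent)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

img = [[(255, 0, 0), (0, 255, 0)], [(0, 0, 255), (255, 255, 255)]]
print(to_gray(img))  # -> [[76, 150], [29, 255]]
```

Histogram equalization and Gaussian blur would then run on this single-channel image, which is why the grayscale step also cuts memory use to one third of the color image.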
Cutting the rail fusion image data by setting the region of interest, only retaining the data at the rail region, and reducing the data processing amount; the gray processing is carried out on the images, so that the color images can be uniformly changed into gray images; the image quality enhancement processing such as histogram equalization and Gaussian blur is carried out on the image, so that the contrast is improved, the image quality is enhanced, certain image noise is filtered, and the accuracy of subsequent rail identification is improved.
In one embodiment, step 122 may include: step 1221 and step 1222.
Step 1221: setting an edge detection convolution kernel according to the actual width of the rail and the actual position of the image recognition equipment;
step 1222: and carrying out filtering convolution processing on the rail region target image based on the edge detection convolution kernel to obtain a rail contour image.
Illustratively, the rail filter may convolve the image data with a designed edge detection convolution kernel K matrix to obtain the rail contour edges, where the convolution kernel K may be expressed as:
[convolution kernel K matrix, given as an equation image in the original publication]
For the setting of the convolution kernel, factors such as the rail width and the camera mounting position in the actual scene must be considered. The actual kernel K is tuned manually, trying different kernels K to optimize the edge contour extraction result until the rail filter extracts the rail best. The kernel K above is the one corresponding to a good extraction result for the current scene; the value of each column of the convolution kernel is drawn from −1, 0, and 1, with the −1 and 1 entries symmetric about 0.
The rail filter convolution calculation formula may be:
g(t) = (f ∗ h)(t) = Σ_a Σ_b f(t₁ − a, t₂ − b) · h(a, b)
where t = (t₁, t₂) ranges over the image plane matrix space, [a, b] indexes the convolution kernel K, and the function h gives the value at each position of the kernel. An effect diagram after processing with the rail filter is shown in fig. 3, from which it can be seen that filtering with the rail filter yields a clear rail edge contour: the white highlighted vertical line portions in fig. 3 correspond to the rails extracted from the captured image data.
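For illustration only, the filtering convolution can be sketched in plain Python. The kernel below is merely one plausible 5×5 kernel consistent with the description (columns drawn from −1, 0, 1, with the ±1 columns symmetric about the central zero column), not the patent's actual K matrix:

```python
def filter2d(image, kernel):
    """'Valid'-mode 2D correlation of a grayscale image (nested lists)
    with a small kernel; returns the filtered response map."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - kh + 1):
        row = []
        for x in range(w - kw + 1):
            acc = 0
            for j in range(kh):
                for i in range(kw):
                    acc += kernel[j][i] * image[y + j][x + i]
            row.append(acc)
        out.append(row)
    return out

# Hypothetical 5x5 kernel: -1 columns, a zero column, then 1 columns.
K = [[-1, -1, 0, 1, 1] for _ in range(5)]

# Dark-to-bright vertical edge: left half 0, right half 100.
img = [[0, 0, 0, 0, 100, 100, 100, 100] for _ in range(5)]
resp = filter2d(img, K)
print(resp[0])  # -> [500, 1000, 1000, 500]
```

The response peaks where the window straddles the vertical intensity step, which is how near-vertical rail edges show up as bright lines in the contour image.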
By setting the edge detection convolution kernel according to the actual width of the rail and the actual position of the image recognition equipment, the convolution kernel filter is adopted to carry out filtering convolution processing on the rail area target image, the processing speed is high, real-time detection can be realized, and the accuracy of the subsequent rail recognition is improved.
In one embodiment, the row-count and column-count parameters of the edge detection convolution kernel are related to the fineness of rail profile identification and are adjustable according to that fineness.
Illustratively, the edge detection convolution kernel matrix is square, and its dimension is generally odd, which makes data processing easier; the numbers of rows and columns are generally equal and related to the fineness of rail contour recognition. For example, the convolution kernel dimension in step 1222 is 5x5; in general the kernel dimension may be set to 5x5, 7x7, 9x9, and other sizes, and the specific dimension parameter can be tuned by configuration according to the fineness required in different scenes, while the kernel values remain drawn from -1, 0, and 1, with -1 and 1 symmetric about 0. By making the edge detection convolution kernel configurable for edge contour detection, optimal parameters can be set for different scenes, improving rail identification accuracy.
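The configurable kernel dimension can be sketched as a small generator; the concrete values below follow the stated constraints (odd square dimension, entries in {-1, 0, 1}, antisymmetric about the zero column) but are otherwise an assumption of this sketch.

```python
import numpy as np

def edge_kernel(size):
    """Odd-dimension square kernel whose values lie in {-1, 0, 1}, with the
    -1 and 1 halves symmetric about the central zero column (a sketch of the
    configurable kernel described in the text)."""
    if size % 2 == 0:
        raise ValueError("kernel dimension should be odd")
    half = size // 2
    row = [-1.0] * half + [0.0] + [1.0] * half
    return np.tile(row, (size, 1))

for n in (5, 7, 9):
    k = edge_kernel(n)
    # Antisymmetry about the zero column means a flat (edge-free) region
    # produces zero response regardless of the chosen dimension.
    print(n, k.shape, float(k.sum()))  # each kernel sums to 0.0
```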
In one embodiment, step 140 may include: step 141, step 142 and step 143.
Step 141: traversing the rail contour image along the contour line direction by adopting a sliding window to obtain traversed pixel coordinates;
step 142: calculating the sum of the coordinates of the traversed pixels in the sliding window along the contour line direction to obtain a plurality of pixel coordinate points;
step 143: a coordinate point of the maximum value of the plurality of pixel coordinate points is determined as a target rail position.
For example, two sliding windows, one left and one right, may be used and traversed simultaneously to improve recognition efficiency. The left and right sliding windows are configured through the parameters slide_window, slide_w, slide_h, and slide_interval, and traverse the rail contour image along the contour line direction. slide_window is a matrix parameter whose four elements represent the lateral ranges of the two sliding windows; slide_w is the width of a sliding window; slide_h is the height of a sliding window; slide_interval is the step size of the sliding window movement.
The sliding window detection algorithm flow may be: set a left and a right sliding window and traverse from bottom to top; for each window, calculate the sum of pixel values in the longitudinal direction and take the position of the maximum as the target rail point position; move the sliding windows along the rail edge contour extracted in step 120 by the set step to complete the extraction of target rail points. The effect of the sliding-window detection on the extracted target rail position is shown in fig. 4. For the equally sized rectangular windows in the left and right columns of fig. 4, the sum of pixel values in the longitudinal direction of the image within each window is calculated; the rail pixels clearly yield larger sums, because grayscale pixel values range from 0 to 255, with smaller values appearing darker and larger values appearing whiter. The final rail position coordinate points are obtained by the sliding window detection algorithm; each coordinate point is a coordinate value in the real world whose origin is the calibration origin of the camera and the millimeter wave radar.
The sliding window detection algorithm is adopted to traverse the rail contour image along the contour line direction, the sum of the coordinates of the traversed pixels in the sliding window is calculated along the contour line direction, the coordinate point of the maximum value of the sum is determined as the target rail position, the processing speed is high, real-time detection can be realized, and the rail identification accuracy and efficiency are improved.
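The flow above can be sketched in numpy for a single lateral range; the parameter names slide_w, slide_h, and slide_interval follow the text, while the single-window simplification, first-maximum tie-breaking, and the synthetic edge image are assumptions of this sketch.

```python
import numpy as np

def sliding_window_rail(edge_img, x_range, slide_w, slide_h, slide_interval):
    """Scan a lateral x_range of an edge image from bottom to top; at each
    vertical step, place slide_w x slide_h windows across the range, sum the
    pixel values per window, and keep the center of the maximum-sum window
    as the rail point for that row band."""
    h, _ = edge_img.shape
    points = []
    y = h - slide_h
    while y >= 0:
        best_x, best_sum = None, -1.0
        for x in range(x_range[0], x_range[1] - slide_w + 1):
            s = edge_img[y:y + slide_h, x:x + slide_w].sum()
            if s > best_sum:
                best_sum, best_x = s, x
        points.append((best_x + slide_w // 2, y + slide_h // 2))
        y -= slide_interval
    return points

# Edge image with a bright vertical rail contour at column 30.
img = np.zeros((40, 60))
img[:, 30] = 255.0
pts = sliding_window_rail(img, x_range=(10, 50), slide_w=5, slide_h=8, slide_interval=8)
print(pts[0])  # → (28, 36)
```

In the full algorithm one such scan would run for the left rail and one for the right, each within its own lateral range taken from slide_window.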
Optionally, step 100 may include: step 101 and step 102.
Step 101: carrying out space synchronization on rail data acquired by the image recognition equipment and the radio detection equipment to obtain space synchronization data;
step 102: and carrying out time synchronization on the space synchronization data to obtain rail fusion image data.
The image capture device may be, for example, a camera that is mounted on the locomotive and captures images of the rail area in front of the locomotive. The radio detection equipment may be a millimeter wave radar, which can acquire point cloud data of the rail area in front of the locomotive and is not affected by rain, fog, night, or other adverse conditions.
The camera and the millimeter wave radar are jointly calibrated to guarantee spatial synchronization; at the same time, the clocks of the camera and the millimeter wave radar are synchronized by a time linear interpolation method, so that data fusion can be realized. For spatial synchronization, assume the radar coordinate system is Or, (xr, yr) is a point in the radar coordinate system, and (xw, yw) is the offset of the radar center relative to the camera center; the formula for converting the radar coordinate system into a world coordinate system centered on the camera may be:
[ Xw ]   [ xr + xw ]
[ Yw ] = [ yr + yw ]
the world coordinate system can be converted into a pixel coordinate system formula as follows:
Zc · [ u  v  1 ]^T = [ f  0  U0 ; 0  f  V0 ; 0  0  1 ] · [ R  T ] · [ Xw  Yw  Zw  1 ]^T
where R is the rotation matrix, T is the translation matrix, f is the camera focal length, (U0, V0) is the center point in the image pixel coordinate system, and Zc is a scale factor. For time synchronization, a time linear interpolation method is adopted to acquire camera data and radar data under the same timestamp, ensuring that the camera data and the radar data are time-synchronized.
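The spatial conversions above can be sketched as follows. The pinhole model assumes square pixels with a single focal length f, and all function names, extrinsics, and the test point are illustrative.

```python
import numpy as np

def radar_to_world(xr, yr, xw, yw):
    """Shift a planar radar-frame point by the radar center's offset (xw, yw)
    relative to the camera to get camera-centered world coordinates."""
    return xr + xw, yr + yw

def world_to_pixel(Pw, R, T, f, u0, v0):
    """Pinhole projection of a world point into pixel coordinates:
    Zc * [u, v, 1]^T = K [R | T] [Xw, Yw, Zw, 1]^T, with intrinsics
    K = [[f, 0, u0], [0, f, v0], [0, 0, 1]] (square pixels assumed)."""
    K = np.array([[f, 0.0, u0], [0.0, f, v0], [0.0, 0.0, 1.0]])
    Pc = R @ np.asarray(Pw, dtype=float) + T      # world -> camera frame
    uvw = K @ Pc                                  # camera frame -> image plane
    zc = uvw[2]                                   # scale factor Zc
    return uvw[0] / zc, uvw[1] / zc

# Identity extrinsics: a point on the optical axis projects to the image center.
u, v = world_to_pixel([0.0, 0.0, 10.0], np.eye(3), np.zeros(3),
                      f=800.0, u0=320.0, v0=240.0)
print(u, v)  # → 320.0 240.0
```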
By fusing the data of the two sensors, the image recognition equipment and the radio detection equipment, the defects of low precision, easy failure, and poor stability caused by using a single sensor are avoided; all-weather rail recognition and extraction is realized; the processing speed is high enough for real-time detection; and the application range of rail recognition is widened with high efficiency and accuracy.
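The time linear interpolation used to place radar data on a camera timestamp can be sketched as below; the timestamps and the planar point format are illustrative assumptions.

```python
def interpolate_radar(t_cam, t0, t1, p0, p1):
    """Linearly interpolate a radar measurement to the camera timestamp t_cam,
    given radar samples p0 at t0 and p1 at t1 with t0 <= t_cam <= t1 (a sketch
    of the time-linear-interpolation synchronization described in the text)."""
    if t1 == t0:
        return p0
    w = (t_cam - t0) / (t1 - t0)
    return tuple(a + w * (b - a) for a, b in zip(p0, p1))

# Radar samples at 100 ms and 200 ms; camera frame at 150 ms.
p = interpolate_radar(0.150, 0.100, 0.200, (10.0, 2.0), (12.0, 2.4))
print(p)  # approximately (11.0, 2.2)
```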
Referring to fig. 5, fig. 5 is a schematic structural diagram of a rail identifying apparatus according to an embodiment of the present application, where the apparatus includes: an image acquisition device, a radio detection device, and a controller; the image acquisition equipment and the radio detection equipment are electrically connected with the controller;
the image acquisition equipment and the radio detection equipment are used for acquiring rail data of the track area;
the controller is used for carrying out data fusion on rail data acquired by the image recognition equipment and the radio detection equipment to obtain rail fusion image data; carrying out data processing on the rail fusion image data to obtain a rail contour image; and extracting coordinate points of the rail contour image by adopting a sliding window to obtain the target rail position.
Optionally, performing data processing on the rail fusion image data to obtain a rail contour image, including:
preprocessing rail fusion image data to obtain a rail area target image;
and performing filtering convolution processing on the rail region target image to obtain a rail contour image.
Optionally, the preprocessing the rail fusion image data to obtain a rail area target image includes:
taking the rail area as an interested area, and cutting the rail fusion image data to obtain a rail area cutting image;
graying treatment is carried out on the rail region clipping image, and a rail region gray image is obtained;
and carrying out image quality enhancement processing on the rail region gray level image to obtain a rail region target image.
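The crop, graying, and quality-enhancement steps above can be sketched in pure numpy; a real system would use an image library (e.g. histogram equalization for enhancement), and the ROI, weights, and min-max stretch here are assumptions of this sketch.

```python
import numpy as np

def preprocess(rgb, roi):
    """ROI crop -> grayscale -> min-max contrast stretch, a numpy-only sketch
    of the preprocessing pipeline described in the text."""
    top, bottom, left, right = roi
    crop = rgb[top:bottom, left:right]
    # Luminance-weighted grayscale conversion (ITU-R BT.601 weights).
    gray = 0.299 * crop[..., 0] + 0.587 * crop[..., 1] + 0.114 * crop[..., 2]
    lo, hi = gray.min(), gray.max()
    if hi == lo:
        return np.zeros_like(gray)
    return (gray - lo) / (hi - lo) * 255.0  # stretch to the full 0-255 range

img = np.zeros((100, 100, 3))
img[40:60, 40:60] = [60.0, 60.0, 60.0]  # a dim square inside the ROI
out = preprocess(img, roi=(30, 70, 30, 70))
print(out.shape, float(out.max()))  # → (40, 40) 255.0
```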
Optionally, the filtering convolution processing is performed on the rail area target image to obtain a rail outline image, which includes:
setting an edge detection convolution kernel according to the actual width of the rail and the actual position of the image recognition equipment;
and carrying out filtering convolution processing on the rail region target image based on the edge detection convolution kernel to obtain a rail contour image.
Optionally, the line number parameter and the column number parameter of the edge detection convolution kernel are related to the fineness of the rail contour recognition, and can be adjusted according to the fineness of the rail contour recognition.
Optionally, the extracting coordinate points of the rail contour image by using a sliding window to obtain a target rail position includes:
traversing the rail outline image along the outline direction by adopting a sliding window to obtain traversed pixel coordinates;
calculating the sum of the coordinates of the traversing pixels in the sliding window along the direction of the contour line to obtain a plurality of pixel coordinate points;
and determining a coordinate point of the maximum value in the plurality of pixel coordinate points as a target rail position.
Optionally, the data fusion of the rail data collected by the image recognition device and the radio detection device to obtain rail fusion image data includes:
carrying out space synchronization on rail data acquired by the image recognition equipment and the radio detection equipment to obtain space synchronization data;
and carrying out time synchronization on the space synchronization data to obtain rail fusion image data.
Referring to fig. 6, fig. 6 is a block schematic diagram of an electronic device. The electronic device 300 may include a memory 311, a memory controller 312, a processor 313, a peripheral interface 314, an input output unit 315, a display unit 316. It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 6 is merely illustrative and is not limiting of the configuration of the electronic device 300. For example, electronic device 300 may also include more or fewer components than shown in FIG. 6, or have a different configuration than shown in FIG. 6.
The above-mentioned memory 311, memory controller 312, processor 313, peripheral interface 314, input/output unit 315, and display unit 316 are electrically connected directly or indirectly to each other to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The processor 313 is used to execute executable modules stored in the memory.
The Memory 311 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), etc. The memory 311 is configured to store a program, and the processor 313 executes the program after receiving an execution instruction; a method executed by the electronic device 300 defined by the process disclosed in any embodiment of the present application may be applied to the processor 313 or implemented by the processor 313.
The processor 313 may be an integrated circuit chip having signal processing capabilities. The processor 313 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc.; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by it. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The peripheral interface 314 couples various input/output devices to the processor 313 and the memory 311. In some embodiments, the peripheral interface 314, the processor 313, and the memory controller 312 may be implemented in a single chip. In other examples, they may be implemented by separate chips.
The input/output unit 315 is used for a user to provide input data. The input/output unit 315 may be, but is not limited to, a mouse, a keyboard, and the like.
The display unit 316 provides an interactive interface (e.g., a user interface) between the electronic device 300 and a user for reference. In this embodiment, the display unit 316 may be a liquid crystal display or a touch display. The liquid crystal display or the touch display may display a process of executing the program by the processor.
The electronic device 300 in the present embodiment may be used to perform each step in each method provided in the embodiments of the present application.
Furthermore, the embodiments of the present application also provide a computer readable storage medium, on which a computer program is stored, which when being executed by a processor performs the steps in the above-described method embodiments.
The computer program product of the above method provided in the embodiments of the present application includes a computer readable storage medium storing a program code, where instructions included in the program code may be used to perform steps in the above method embodiment, and specifically, reference may be made to the above method embodiment, which is not described herein.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, and the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form. The functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
It should be noted that the functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially, or in a part contributing to the prior art, or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application, and various modifications and variations may be suggested to one skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method of rail identification, the method comprising:
carrying out data fusion on rail data acquired by the image recognition equipment and the radio detection equipment to obtain rail fusion image data;
performing data processing on the rail fusion image data to obtain a rail contour image;
and extracting coordinate points of the rail outline image by adopting a sliding window to obtain a target rail position.
2. The method of claim 1, wherein the data processing the rail fusion image data to obtain a rail profile image comprises:
preprocessing the rail fusion image data to obtain a rail area target image;
and performing filtering convolution processing on the rail region target image to obtain a rail outline image.
3. The method of claim 2, wherein preprocessing the rail fusion image data to obtain a rail region target image comprises:
taking the rail area as an interested area, and cutting the rail fusion image data to obtain a rail area cutting image;
graying treatment is carried out on the rail region clipping image, and a rail region gray image is obtained;
and carrying out image quality enhancement processing on the rail region gray level image to obtain a rail region target image.
4. The method of claim 2, wherein said filtering the rail region target image to obtain a rail profile image comprises:
setting an edge detection convolution kernel according to the actual width of the rail and the actual position of the image recognition equipment;
and carrying out filtering convolution processing on the rail region target image based on the edge detection convolution kernel to obtain a rail contour image.
5. The method of claim 4, wherein the number of rows and columns parameters of the edge detection convolution kernel are related to and adjustable according to the fineness of rail profile identification.
6. The method according to any one of claims 1-5, wherein the performing coordinate point extraction on the rail profile image using a sliding window to obtain a target rail position includes:
traversing the rail outline image along the outline direction by adopting a sliding window to obtain traversed pixel coordinates;
calculating the sum of the coordinates of the traversing pixels in the sliding window along the direction of the contour line to obtain a plurality of pixel coordinate points;
and determining a coordinate point of the maximum value in the plurality of pixel coordinate points as a target rail position.
7. The method according to any one of claims 1-5, wherein the data fusion of the rail data collected by the image recognition device and the radio detection device to obtain rail fusion image data includes:
carrying out space synchronization on rail data acquired by the image recognition equipment and the radio detection equipment to obtain space synchronization data;
and carrying out time synchronization on the space synchronization data to obtain rail fusion image data.
8. A rail identification apparatus, the apparatus comprising: an image acquisition device, a radio detection device, and a controller; the image acquisition equipment and the radio detection equipment are electrically connected with the controller;
the image acquisition equipment and the radio detection equipment are used for acquiring rail data of the track area;
the controller is used for carrying out data fusion on rail data acquired by the image recognition equipment and the radio detection equipment to obtain rail fusion image data; performing data processing on the rail fusion image data to obtain a rail contour image; and extracting coordinate points of the rail outline image by adopting a sliding window to obtain a target rail position.
9. An electronic device, comprising: a processor, a memory storing machine-readable instructions executable by the processor, which when executed by the processor perform the steps of the method of any of claims 1 to 7 when the electronic device is run.
10. A computer-readable storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, performs the steps of the method according to any of claims 1 to 7.
CN202211736998.2A 2022-12-30 2022-12-30 Rail identification method, device, equipment and storage medium Pending CN116030430A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211736998.2A CN116030430A (en) 2022-12-30 2022-12-30 Rail identification method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116030430A true CN116030430A (en) 2023-04-28

Family

ID=86078875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211736998.2A Pending CN116030430A (en) 2022-12-30 2022-12-30 Rail identification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116030430A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116820125A (en) * 2023-06-07 2023-09-29 哈尔滨市大地勘察测绘有限公司 Unmanned seeder control method and system based on image processing
CN116820125B (en) * 2023-06-07 2023-12-22 哈尔滨市大地勘察测绘有限公司 Unmanned seeder control method and system based on image processing


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination