CN111780678A - Method for measuring diameter of track slab embedded sleeve - Google Patents

Method for measuring diameter of track slab embedded sleeve

Info

Publication number
CN111780678A
CN111780678A (application CN201911426818.9A)
Authority
CN
China
Prior art keywords
track slab
image
point cloud
cloud data
embedded sleeve
Prior art date
Legal status
Pending
Application number
CN201911426818.9A
Other languages
Chinese (zh)
Inventor
王九鑫
魏蒙蒙
赵毅婕
周中顺
Current Assignee
Xi'an Jiutian Incubator Technology Co ltd
Original Assignee
Xi'an Jiutian Incubator Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xi'an Jiutian Incubator Technology Co ltd filed Critical Xi'an Jiutian Incubator Technology Co ltd
Priority to CN201911426818.9A
Publication of CN111780678A
Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/08: Measuring arrangements characterised by the use of optical techniques for measuring diameters
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85: Stereo camera calibration
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30108: Industrial image inspection
    • G06T 2207/30172: Centreline of tubular or elongated structure

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a method for measuring the diameter of an embedded sleeve of a track slab, and belongs to the technical field of track safety. The method comprises the following steps: establishing a coordinate system of the track slab through a binocular stereoscopic vision module and a line structured light scanning module to obtain point cloud data of the track slab; segmenting the point cloud data around the embedded sleeve out of the point cloud data of the track slab; and preprocessing the point cloud data around the embedded sleeve, finding the circle and the radius of the embedded sleeve by the least squares method, and determining the diameter of the embedded sleeve. By obtaining the point cloud data of the embedded sleeve of the track slab with a binocular stereoscopic vision module and a line structured light module, erroneous point cloud data at the edge of the embedded sleeve can be effectively removed during feature extraction and matching-point registration, which facilitates subsequent data processing and increases the accuracy of the measurement.

Description

Method for measuring diameter of track slab embedded sleeve
Technical Field
The invention relates to the technical field of track safety, in particular to a method for measuring the diameter of an embedded sleeve of a track slab.
Background
With rapid development in recent years, the technologies applied to high-speed railways in China have attracted wide attention and reached a leading position in the industry worldwide; thanks to strong national support for their construction, high-speed railways have become a main artery of the national economy. The ballastless track slab used by high-speed railways is one of the key technologies of high-speed rail. The track slab is the foundation that bears the rails and plays an important role in ensuring safe and stable train operation; it differs from the sleeper systems of ordinary railways and is subject to strict requirements. The embedded sleeve is a section of plastic sleeve pre-embedded in the spike fixing hole of a sleeper; it offers good mechanical strength, heat resistance, wear resistance and high rigidity, as well as a long service life and simple construction. It improves the durability and insulation of the sleeper, meets the national requirements for high-speed railway development, and can also be pre-embedded in concrete switch ties to raise the running speed of trains and their speed through switches. Therefore, the geometric accuracy of the track slab embedded sleeve plays an important role in ensuring the quality and the safe, stable operation of the high-speed railway. China also imposes strict requirements on the measurement of embedded sleeve parameters, and selecting an effective and accurate method for measuring the radius of the embedded sleeve is an important part of track slab parameter inspection.
At present, track slab inspection technology in China has developed rapidly, and common track slab detection methods include the vernier caliper method and the structured light measurement method. The vernier caliper method was the first measurement method applied in the railway field: the geometric parameters of the track slab to be inspected are measured manually and directly with a vernier caliper. The method must be used together with corresponding tools, such as a depth gauge, a dial indicator and an angle gauge. Although its detection principle is simple and intuitive, it requires a large number of auxiliary instruments, the measurement process is cumbersome, the contact measurement consumes a great deal of manpower and material resources, and there are potential safety hazards during measurement.
The structured light measurement method mainly measures by the laser triangulation principle: a line structured light sensor scans the track slab in real time, the point cloud data of the track slab is obtained directly by software, and the required measurement result is obtained by segmenting the point cloud data. However, because the embedded sleeve is embedded in the spike fixing hole of the sleeper and its side wall is not necessarily vertical, a large amount of erroneous point cloud data appears at the boundary of the embedded sleeve when it is scanned with structured light. As a result, the position of the embedded sleeve cannot be accurately located from the point cloud data, nor can its diameter be accurately calculated, which greatly affects subsequent data processing.
In summary, the conventional method for detecting the track slab has low efficiency, low accuracy and low speed, and cannot meet the detection requirement on the diameter of the embedded casing of each track slab.
Disclosure of Invention
In order to solve the problem that the efficiency and the accuracy of track slab detection in the related art are low, the embodiment of the invention provides a method for measuring the diameter of an embedded sleeve of a track slab. The method comprises the following steps:
establishing a coordinate system of the track slab through a binocular stereoscopic vision module and a linear structured light scanning module to obtain point cloud data of the track slab;
segmenting point cloud data around the embedded sleeve in the point cloud data of the track slab;
and preprocessing the point cloud data around the embedded sleeve, finding out the circle and the radius of the embedded sleeve by using a least square method, and determining the diameter of the embedded sleeve.
Optionally, the establishing a coordinate system of the track slab through the binocular stereo vision module and the line structured light scanning module to obtain the point cloud data of the track slab includes:
building a measuring platform, wherein the measuring platform comprises the binocular stereoscopic vision module and the linear structured light scanning module;
calibrating two cameras included in the binocular stereo vision module to obtain internal parameters and external parameters of the two cameras;
scanning the position of an embedded sleeve of the track slab by using a semiconductor laser included by the line structured light scanning module to obtain left and right images detected by two cameras included by the binocular stereo vision module;
extracting the central line of a laser light bar in the image by using Matlab;
matching the laser light bars extracted from the left image and the right image to obtain effective matching data;
and finishing three-dimensional reconstruction of the point cloud data according to the matched data, establishing a coordinate system of the track slab, and obtaining the point cloud data of the track slab.
Optionally, the binocular stereo vision module consists of two subminiature PoE industrial digital cameras with the models of MER-131-75GM/C-P, the two industrial digital cameras are fixed on the bracket at a certain angle respectively, and images are acquired from the left side and the right side; the line structure light scanning module comprises a semiconductor laser, an electric control displacement platform and a sliding guide rail, wherein the semiconductor laser is fixed on the electric control displacement platform and moves along the horizontal direction of the sliding guide rail under the control of a stepping motor.
Optionally, the calibrating two cameras included in the binocular stereo vision module to obtain internal parameters and external parameters of the two cameras includes:
using a 9-by-9 checkerboard with the side length of 10mm as a calibration object, placing the calibration object in a common field of view of the two cameras, and adjusting the focal lengths of the two cameras to ensure that clear images are shot;
shooting a plurality of groups of images in different directions by changing the orientation of the calibration object;
and carrying out double-camera calibration on the plurality of groups of shot images in different directions through a stereoCameraCalibrator application program in a Matlab toolbox.
Optionally, the scanning of the position of the embedded casing of the track slab with the semiconductor laser included in the line structured light scanning module, to obtain the left and right images detected by the two cameras included in the binocular stereo vision module, includes:
each time the semiconductor laser moves, the laser light bar at one position is displayed on the track plate;
the two cameras respectively acquire images once to obtain the left image and the right image.
Optionally, after acquiring the left and right images detected by the two cameras included in the binocular stereo vision module, the method further includes: and carrying out distortion correction on the acquired image.
Optionally, the extracting, by using Matlab, a center line of a laser light bar in the image includes:
and finding the normal direction of the laser light bar through the Hessian matrix in the image, and then performing Taylor series expansion on the normal direction of the laser light bar to obtain the image coordinate of the sub-pixel of the central point of the laser light bar.
Optionally, the matching the laser light bars extracted from the left and right images to obtain effective matching data includes: and matching by adopting an epipolar constraint method.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
According to the invention, a coordinate system of the track slab can be established through the binocular stereoscopic vision module and the line structured light scanning module to obtain the point cloud data of the track slab; the point cloud data around the embedded sleeve is then segmented, the circle and the radius of the embedded sleeve are determined, and the diameter of the embedded sleeve is further calculated. Determining the diameter of the embedded sleeve from point cloud data obtained with the binocular stereoscopic vision module and the line structured light module removes the need to measure each sleeve one by one with hand-held measuring tools, which saves a large amount of manpower and material resources, gives high measurement efficiency, and avoids potential safety hazards during measurement. Secondly, because the two cameras view the structured light from different shooting angles, erroneous point cloud data at the edge of the embedded casing can be effectively removed during feature extraction and matching-point registration, which facilitates subsequent data processing and increases the accuracy of the measurement. In addition, stereo matching of the laser light bars with the epipolar constraint method greatly reduces the amount of computation and improves the matching accuracy, so that accurate measurement parameters can be obtained subsequently. Moreover, least squares fitting adapted to the characteristics of the point cloud of the embedded casing is used to obtain the measurement parameters, giving more accurate measurement data.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a method for measuring a diameter of an embedded casing of a track slab according to an embodiment of the present invention;
fig. 2 is a flowchart of another method for measuring the diameter of a track slab embedded casing according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a measurement platform provided by the embodiment of the invention;
FIG. 4 is a schematic diagram of radial distortion provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of tangential distortion provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of epipolar constraint provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of binocular stereo vision imaging provided by an embodiment of the present invention;
fig. 8 is a fitting diagram of an embedded casing 1 according to an embodiment of the present invention;
fig. 9 is a fitting diagram of the embedded casing 2 according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
A method for measuring the diameter of the track plate embedded casing according to an embodiment of the present invention will be described in detail with reference to fig. 1. Fig. 1 is a flowchart of a method for measuring the diameter of an embedded casing of a track slab according to an embodiment of the present invention. Referring to fig. 1, the method comprises the steps of:
step 101: and establishing a coordinate system of the track slab through the binocular stereoscopic vision module and the linear structured light scanning module to obtain point cloud data of the track slab.
Step 102: and partitioning the point cloud data around the embedded sleeve in the point cloud data of the track slab.
Step 103: and preprocessing the point cloud data around the embedded sleeve, finding out the circle and the radius of the embedded sleeve by using a least square method, and determining the diameter of the embedded sleeve.
Optionally, the establishing a coordinate system of the track slab through the binocular stereo vision module and the line structured light scanning module to obtain the point cloud data of the track slab includes:
building a measuring platform, wherein the measuring platform comprises the binocular stereoscopic vision module and the linear structured light scanning module;
calibrating two cameras included in the binocular stereo vision module to obtain internal parameters and external parameters of the two cameras;
scanning the position of an embedded sleeve of the track slab by using a semiconductor laser included by the line structured light scanning module to obtain left and right images detected by two cameras included by the binocular stereo vision module;
extracting the central line of a laser light bar in the image by using Matlab;
matching the laser light bars extracted from the left image and the right image to obtain effective matching data;
and finishing three-dimensional reconstruction of the point cloud data according to the matched data, establishing a coordinate system of the track slab, and obtaining the point cloud data of the track slab.
Optionally, the binocular stereo vision module consists of two subminiature PoE industrial digital cameras with the models of MER-131-75GM/C-P, the two industrial digital cameras are fixed on the bracket at a certain angle respectively, and images are acquired from the left side and the right side; the line structure light scanning module comprises a semiconductor laser, an electric control displacement platform and a sliding guide rail, wherein the semiconductor laser is fixed on the electric control displacement platform and moves along the horizontal direction of the sliding guide rail under the control of a stepping motor.
Optionally, the calibrating two cameras included in the binocular stereo vision module to obtain internal parameters and external parameters of the two cameras includes:
using a 9-by-9 checkerboard with the side length of 10mm as a calibration object, placing the calibration object in a common field of view of the two cameras, and adjusting the focal lengths of the two cameras to ensure that clear images are shot;
shooting a plurality of groups of images in different directions by changing the orientation of the calibration object;
and carrying out double-camera calibration on the plurality of groups of shot images in different directions through a stereoCameraCalibrator application program in a Matlab toolbox.
Optionally, the scanning of the position of the embedded casing of the track slab with the semiconductor laser included in the line structured light scanning module, to obtain the left and right images detected by the two cameras included in the binocular stereo vision module, includes:
each time the semiconductor laser moves, the laser light bar at one position is displayed on the track plate;
the two cameras respectively acquire images once to obtain the left image and the right image.
Optionally, after acquiring the left and right images detected by the two cameras included in the binocular stereo vision module, the method further includes: and carrying out distortion correction on the acquired image.
Optionally, the extracting, by using Matlab, a center line of a laser light bar in the image includes:
and finding the normal direction of the laser light bar through the Hessian matrix in the image, and then performing Taylor series expansion on the normal direction of the laser light bar to obtain the image coordinate of the sub-pixel of the central point of the laser light bar.
Optionally, the matching the laser light bars extracted from the left and right images to obtain effective matching data includes: and matching by adopting an epipolar constraint method.
All the above optional technical solutions can be combined arbitrarily to form an optional embodiment of the present invention, which is not described in detail herein.
Fig. 2 is a flowchart of a method for measuring the diameter of an embedded casing of a track slab according to an embodiment of the present invention. Embodiments of the present invention will be discussed in conjunction with fig. 2 in an expanded view of the embodiment provided in fig. 1. Referring to fig. 2, the method comprises the steps of:
step 201: and (3) constructing a measuring platform, wherein the measuring platform comprises a binocular stereoscopic vision module and a linear structured light scanning module.
Specifically, referring to fig. 3, the binocular stereoscopic vision module is composed of two subminiature PoE industrial digital cameras with the model numbers MER-131-75GM/C-P, and in order to ensure that the track plate is in the public measurement area, the two industrial digital cameras are respectively fixed on the bracket at a certain angle and are used for simulating the eyes of a human and acquiring images from the left side and the right side; the line structured light scanning module consists of a semiconductor laser, an electric control displacement platform (not shown in the figure) and a sliding guide rail, wherein the semiconductor laser is fixed on the electric control displacement platform and moves along the horizontal direction of the sliding guide rail under the control of a stepping motor, and two industrial digital cameras acquire image data once when the semiconductor laser moves horizontally, so that two left and right light bar images with different angles can be obtained, and three-dimensional point cloud data around the embedded sleeve of the track slab can be obtained subsequently according to a binocular stereo vision theory.
Step 202: and calibrating the two cameras included in the binocular stereo vision module to obtain the internal parameters and the external parameters of the two cameras.
It should be noted that, in image measurement and machine vision applications, in order to relate the three-dimensional spatial position of a point on the surface of an object to its corresponding point in the image, a model of how the camera images space must be established; the parameters of this spatial model are what are commonly called the camera parameters. In most cases these parameters can only be obtained through a considerable amount of experiment and calculation, and this process of solving for the camera parameters is called camera calibration. Calibration of the camera parameters is an important step in both machine vision applications and image measurement: the precision of the calibrated parameters and the stability of the algorithm used during calibration directly affect the accuracy of the images acquired by the camera. Camera calibration is therefore not only a precondition for using the binocular stereo vision principle; improving the calibration precision is also important for the subsequent work. The parameters required for calibrating the binocular cameras in the invention are shown in Table 1 below:
TABLE 1 parameters for camera calibration
The internal parameters are the physical and optical parameters of the camera itself, including the principal point coordinates (u0, v0), the effective focal lengths f_x and f_y in the u and v directions, and the non-perpendicularity (skew) factor γ between the two coordinate axes of the camera. The external parameters describe the orientation and position of the camera coordinate system relative to the world coordinate system; in the present invention they are expressed by a rotation matrix R and a translation matrix T. Using the external parameters of the camera, points in the camera coordinate system can be converted into points in an ideal world coordinate system, so that the actual three-dimensional coordinate information of the object is obtained, which facilitates the actual measurement of the object.
Commonly used camera calibration methods mainly include the linear model calibration method and the nonlinear model calibration method; the distinction exists because the camera lens may introduce distortion, and the calibration method is classified as linear or nonlinear according to whether distortion is modelled. If the imaging model of the camera is an ideal pinhole model, the linear calibration method is used; in that case only the external parameters need to be calibrated and solved, giving the rotation matrix R and the translation matrix T. Conversely, if the imaging model of the camera is nonlinear, the distortion parameters among the internal parameters (radial distortion parameters k1 and k2, and tangential distortion parameters p1 and p2) must also be considered.
It should be noted that in the present invention a planar checkerboard method may be adopted to calibrate the parameters of the two cameras. Specifically, a 9 × 9 checkerboard with a square side length of 10 mm is used as the calibration object; the calibration object is placed in the common field of view of the two cameras, and the focal lengths of the two cameras are adjusted so that the captured images are sharp. Multiple groups of images in different orientations are then captured by changing the orientation of the calibration object, and once enough image pairs have been obtained the calibration of the cameras can be carried out. In the invention, the calibration of the two cameras is performed with the stereoCameraCalibrator application in the Matlab toolbox.
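By way of illustration only, a programmatic MATLAB sketch of this stereo calibration is given below; it assumes the Computer Vision Toolbox, and the folder names and option values are assumptions rather than values from this disclosure.

    % Read the checkerboard image pairs (hypothetical folders).
    leftImages  = imageDatastore('calib/left');
    rightImages = imageDatastore('calib/right');

    % Detect checkerboard corners in each left/right image pair.
    [imagePoints, boardSize] = detectCheckerboardPoints( ...
        leftImages.Files, rightImages.Files);

    % Ideal world coordinates of the corners (10 mm squares, as in the description).
    squareSize  = 10;   % mm
    worldPoints = generateCheckerboardPoints(boardSize, squareSize);

    % Estimate the intrinsic and extrinsic parameters of the stereo rig.
    stereoParams = estimateCameraParameters(imagePoints, worldPoints, ...
        'NumRadialDistortionCoefficients', 2, ...
        'EstimateTangentialDistortion', true);

    disp(stereoParams.MeanReprojectionError);   % mean reprojection error, in pixels

The stereoCameraCalibrator app performs the same steps interactively and, as described below, allows image pairs with a large reprojection error to be removed before the parameters are exported.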
Next, the average calibration error is displayed directly in the calibration result; image pairs with a large average error are deleted from the calibration image set and those with a small average error are retained. The final average calibration error is 0.18 pixels, which indicates that the calibration result is reliable. The data are then exported, giving the camera calibration parameter table shown in Table 2 below:
TABLE 2 calibration parameter table of camera
It should be noted that the embodiment of the present invention is described by taking the calibration parameter table of the camera shown in table 2 as an example, and table 2 does not limit the embodiment of the present invention.
Step 203: and scanning the position of the embedded sleeve of the track slab by using a semiconductor laser included by the line structured light scanning module to acquire left and right images detected by two cameras included by the binocular stereoscopic vision module.
It should be noted that, each time the semiconductor laser moves, a laser light bar at one position is displayed on the track slab, the two cameras respectively collect images once, the images captured by each group of left and right cameras are subsequently matched and reconstructed once by using Matlab, and finally, the point cloud data of the track slab can be obtained by combining the matched and reconstructed data of all the light bars.
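As a structural sketch of this acquisition loop (the functions moveStage and grabFrame below are hypothetical placeholders for the displacement-stage control and the camera SDK, and the number of positions and the step size are assumed values, not taken from this disclosure):

    nPositions = 500;    % assumed number of laser positions along the guide rail
    stepMM     = 0.5;    % assumed step of the electrically controlled stage, in mm
    framesL = cell(nPositions,1);
    framesR = cell(nPositions,1);
    for k = 1:nPositions
        moveStage(stepMM);                 % advance the semiconductor laser by one step
        framesL{k} = grabFrame('left');    % one light-bar image from each camera
        framesR{k} = grabFrame('right');
    end
    % Each left/right pair is later matched and reconstructed; merging the
    % reconstructed light bars over all positions gives the track slab point cloud.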
Step 204: and carrying out distortion correction on the acquired image.
It should be noted that, in order to increase the luminous flux, the camera uses a lens instead of a pinhole for imaging. This substitution cannot fully satisfy the properties of pinhole imaging, so the captured image is distorted. The causes of image distortion are numerous but can be grouped into two categories, radial distortion and tangential distortion, and distortion correction is therefore required before the acquired image is processed.
Radial distortion refers to the phenomenon that an image point is shifted in the radial direction relative to the position of an ideal image, and comprises inward and outward shifting situations, which can be called pincushion distortion and barrel distortion respectively. As the name suggests, the pincushion distortion causes the edge points at the outer side of the image to be relatively diffused, and the imaging proportion is relatively increased; barrel distortion causes the outer edge points of the image to be relatively crowded and the imaging scale to be relatively reduced, as shown in figure 4.
Taking the lens as an example, with the center of the lens as the origin of the coordinate axes and the outward direction along the lens radius, the distortion is smaller the closer a ray is to the center and larger the farther it is from the center along the radial direction. The correction formula for radial distortion is given by formula (1):

x_dr = x·(1 + k1·r² + k2·r⁴)
y_dr = y·(1 + k1·r² + k2·r⁴)      (1)

where (x, y) are the ideal distortion-free coordinates (in the image coordinate system), (x_dr, y_dr) are the coordinates of the distorted image pixel point, and r, x and y satisfy formula (2):

r² = x² + y²      (2)
The tangential distortion is caused by the lens not being exactly parallel to the imaging plane; when the lens is not parallel to the imaging plane, a distortion similar to a perspective transformation is produced, as shown in fig. 5. Specifically, the correction formula for tangential distortion is given by formula (3):

x_dt = x + [2·p1·x·y + p2·(r² + 2x²)]
y_dt = y + [p1·(r² + 2y²) + 2·p2·x·y]      (3)
Combining the radial distortion model and the tangential distortion model, the correction model finally applied to the real image is given by formula (4):

x_corrected = x·(1 + k1·r² + k2·r⁴) + 2·p1·x·y + p2·(r² + 2x²)
y_corrected = y·(1 + k1·r² + k2·r⁴) + p1·(r² + 2y²) + 2·p2·x·y      (4)
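By way of illustration only, a minimal MATLAB sketch of the combined correction model of formula (4) is given below; it evaluates the forward model on normalized image coordinates, and the function name and argument layout are assumptions rather than part of this disclosure.

    function [xc, yc] = correctDistortion(x, y, k1, k2, p1, p2)
    % Evaluate the combined radial + tangential distortion model of formula (4)
    % on ideal (distortion-free) normalized image coordinates x, y.
    r2 = x.^2 + y.^2;                               % formula (2)
    radial = 1 + k1*r2 + k2*r2.^2;                  % radial factor of formula (1)
    xc = x.*radial + 2*p1*x.*y + p2*(r2 + 2*x.^2);  % tangential terms of formula (3)
    yc = y.*radial + p1*(r2 + 2*y.^2) + 2*p2*x.*y;
    end

In practice the inverse mapping, from distorted to ideal coordinates, is what an undistortion routine computes, usually iteratively; the sketch above only evaluates the forward model.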
In addition, during image acquisition, because the two cameras are fixed at different angles, the captured images are related by an epipolar geometry whose correspondence depends on the projection matrices of the cameras; and since the two images do not lie in the same plane, a large amount of computation would be added to the subsequent stereo matching. Therefore, the invention also performs stereo rectification. Its purpose is to bring the two distortion-corrected images into strict row correspondence, so that their epipolar lines lie exactly on the same horizontal line; that is, the images are projected onto a common image plane so that corresponding points have the same row coordinate. When searching for matching points, it is then only necessary to scan along the horizontal line corresponding to a point, which further optimizes the subsequent stereo matching and greatly reduces the computational load. After rectification, the two images can be related by matching feature points, so that distance information is obtained for the subsequent calculations.
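Assuming the stereoParameters object stereoParams from the calibration step and the MATLAB Computer Vision Toolbox (an assumption of this sketch, with framesL{k} and framesR{k} standing for one raw left/right image pair), the distortion correction and stereo rectification can be sketched as:

    % Distortion correction alone (step 204), applied per camera:
    IL = undistortImage(framesL{k}, stereoParams.CameraParameters1);
    IR = undistortImage(framesR{k}, stereoParams.CameraParameters2);

    % Rectification: removes distortion and re-projects both views onto a common
    % image plane so that epipolar lines become corresponding horizontal rows.
    [JL, JR] = rectifyStereoImages(framesL{k}, framesR{k}, stereoParams);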
Step 205: the central line of the laser light bar in the image was extracted using Matlab.
When the linear structured light measurement method is used for measurement, the center line of the laser light bar needs to be found out. The line structured light measuring method is based on an active optical technology, a line structured light laser projects a line light bar onto the surface of a measured object, a CCD (charge coupled device) at another angle is used for acquiring a deformation image of the light bar and analyzing the deformation condition of the light bar image acquired by the surface of the measured object, so that a two-dimensional profile of the measured surface is acquired.
Specifically, since the light intensity distribution of the light stripe of the laser in the line width direction is similar to the gaussian distribution and the width of the light stripe is larger than one pixel, the extraction of the center line of the light stripe can be performed by using the Steger algorithm. The Steger algorithm finds the normal direction of the light bars through a Hessian matrix in the light bar image, and obtains the image coordinates of the sub-pixels at the central points of the light bars through Taylor series expansion in the normal direction of the light bars. For any point (x, y) on the laser stripe in the image, the Hessian matrix can be expressed as the following equation (10):
H(x, y) = [ r_xx   r_xy
            r_xy   r_yy ]      (10)

where r_xx denotes the second partial derivative of the image in the x direction, r_xy the mixed second partial derivative in the x and y directions, and r_yy the second partial derivative in the y direction. Gaussian filtering must be applied to the image before the Hessian matrix is computed. The normal direction of the light bar is given by the eigenvector corresponding to the eigenvalue of the Hessian matrix with the largest absolute value, denoted (n_x, n_y). Taking the point (x_0, y_0) as the reference point, the sub-pixel coordinates of the center of the light bar are (p_x, p_y) = (x_0 + t·n_x, y_0 + t·n_y), where t satisfies formula (11):

t = -(n_x·r_x + n_y·r_y) / (n_x²·r_xx + 2·n_x·n_y·r_xy + n_y²·r_yy)      (11)

in which r_x and r_y are the first partial derivatives of the image in the x and y directions. If (t·n_x, t·n_y) ∈ [-0.5, 0.5] × [-0.5, 0.5], there is a point within the current pixel at which the first derivative along the normal direction is zero; if, in addition, the second derivative in the direction of the eigenvector (n_x, n_y) exceeds the specified threshold, the center point of the light bar is determined to be at (x_0, y_0) with sub-pixel coordinate (p_x, p_y).
Step 206: and matching the laser light bars extracted from the left image and the right image to obtain effective matching data.
It should be noted that the laser light bars extracted from the left and right images are matched to obtain effective matching data, that is, stereo matching. The image matching in the binocular stereo vision theory is a process for searching a matching point corresponding to a known point in a given image in another image, and due to the influence of factors such as noise, illumination conditions and shielding of external environment, a proper method needs to be selected to obtain a better stereo matching result. For example, in the process of searching for corresponding points of a left image and a right image, if no constraint condition exists, each pixel of the left image needs to be searched in the whole image space of the right image, the searching method is very low in efficiency, and due to the fact that wrong corresponding points are easily found due to other factors (such as repeated textures, light rays and the like), some constraint conditions can be selectively used according to specific requirements in the matching process, so that the range of searching for matching points of the images can be reduced, and the accuracy and the speed of an algorithm can be improved. Common constraint conditions are consistency constraint, similarity constraint, epipolar constraint, continuity constraint, uniqueness constraint, and the like.
Specifically, referring to fig. 6, the epipolar constraint method is one of the most commonly used stereo matching methods, and the search range can be reduced by using the epipolar geometry constraint rule, thereby improving matching efficiency and reducing mismatching. In the binocular measurement, an epipolar line is an intersection line of a polar plane and two images, the polar plane is a plane where an object space point and the centers of the two cameras are located together, epipolar constraint describes that projection points of the object space point on the two images are necessarily located on the same polar plane, and then it can be deduced that a corresponding point of each pixel point on a left image on a right image is necessarily located on an intersection line (namely an epipolar line) of the polar plane where the image point is located and the right image. The epipolar constraint provides a constraint condition of the corresponding point, and the epipolar constraint constrains the matching of the corresponding point from the searching of the whole image to the searching of the corresponding point from a straight line, thereby greatly reducing the searching range and being very effective constraint for improving the matching efficiency.
Suppose a point P in three-dimensional space is projected onto two different image planes I1 and I2 (left and right), the projected points being Pl and Pr respectively. P, Pl and Pr define an epipolar plane S in three-dimensional space. The intersection line l_Pl of S with the plane I1 passes through Pl and is called the epipolar line corresponding to Pr. In the same way, the intersection line l_Pr of S with I2 is called the epipolar line corresponding to Pl (the epipolar line corresponding to an image point of the left image lies in the right image, and vice versa). Thus, from the previous camera calibration a fundamental matrix F is obtained, on the basis of which the corresponding epipolar line can be calculated by formula (4):

l_Pr = F·Pl      (4)

where the fundamental matrix F can be obtained directly from the camera calibration and satisfies formula (5):

Pr^T · F · Pl = 0      (5)
After the epipolar line equations have been obtained, the matching points of the left and right images are determined. Specifically, for each point to be matched in the left image, the distances from the candidate points of the right image to the corresponding right-image epipolar line are computed, and the point with the minimum distance is taken as the matching point in the right image. Finally, the corresponding points that satisfy the epipolar constraint are taken as correctly matched feature points and participate in the subsequent reconstruction of the three-dimensional system. The image coordinates obtained from stereo matching are coordinates in the pixel coordinate system; through the intrinsic parameter matrix they can be converted to the millimetre coordinate system before taking part in the three-dimensional reconstruction, specifically by formula (6):

x = (u - u0)·dx
y = (v - v0)·dy      (6)

where (x, y) are the coordinates of point P in the image millimetre coordinate system, (u, v) are its coordinates in the pixel coordinate system, (u0, v0) is the principal point (optical center) obtained as an internal parameter of the camera calibration, and dx and dy are the physical sizes of one pixel along the two image axes.
It is worth noting that the center line of the laser stripe in the image is obtained with the Steger algorithm: the image is first Gaussian filtered, the normal direction of the laser stripe is then found from the Hessian matrix according to the characteristics of the stripe, and the image coordinate of the sub-pixel center point of the stripe is obtained by Taylor series expansion along that direction. According to the epipolar constraint principle, stereo matching first requires the epipolar line equations of the left and right images; every image point of the left image is substituted into the epipolar equation for the right image, and the right-image point with the minimum distance to that epipolar line is taken as its matching point. The right image is processed in the same way. Finally, the matched left and right feature points are extracted, and the matched coordinate points are converted from the pixel coordinate system to the millimetre world coordinate system to take part in the three-dimensional reconstruction, according to the conversion relation between the pixel coordinate system and the image coordinate system. Because the line structured light measurement method only needs the image coordinates of the laser light bar, the measurement can be completed by collecting the coordinate information of the laser light bar alone; the distortion correction of step 204 in the binocular vision procedure can therefore be skipped and stereo matching and three-dimensional reconstruction started directly. In other words, if the line structured light measurement method is subsequently used to match the image coordinates of the laser light bar, the distortion correction step of step 204 need not be executed.
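A sketch of this epipolar search over the extracted light-bar centers is given below; ptsL and ptsR are assumed N-by-2 and M-by-2 arrays of sub-pixel center points of one left/right pair in pixel coordinates, F is the fundamental matrix from calibration, and the one-pixel distance gate is an assumed tolerance rather than a value from this disclosure.

    function pairs = matchByEpipolar(ptsL, ptsR, F)
    % For every left center point, compute its epipolar line in the right image
    % (formula (4)) and take the nearest right center point as its match.
    pairs = [];                                  % rows of matched indices [i j]
    for i = 1:size(ptsL,1)
        pl = [ptsL(i,:) 1]';                     % homogeneous left point
        l  = F * pl;                             % epipolar line in the right image
        d  = abs(ptsR*l(1:2) + l(3)) / hypot(l(1), l(2));   % point-to-line distances
        [dmin, j] = min(d);
        if dmin < 1.0                            % keep only near-epipolar candidates
            pairs(end+1,:) = [i j];              %#ok<AGROW>
        end
    end
    end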
Step 207: and finishing three-dimensional reconstruction of the point cloud data according to the matched data, establishing a coordinate system of the track slab, and obtaining the point cloud data of the track slab.
It should be noted that the three-dimensional reconstruction of the point cloud data is completed according to the matched data, a coordinate system of the track slab is established, and the point cloud data of the track slab, that is, the reconstruction of the three-dimensional system, is obtained. The principle of binocular stereo vision space measurement is the parallax principle, and fig. 7 is an imaging principle diagram of simple head-up binocular stereo vision. Referring to fig. 7, after obtaining enough coordinate information of the three-dimensional point cloud, the surface shape of the object can be determined through three-dimensional reconstruction.
Assume that the distance between the projection centers of the two cameras, i.e. the baseline, is B. A point P(x_c, y_c, z_c) in three-dimensional space is the same feature point of the spatial object observed by the two cameras at the same moment; it is imaged as Pleft(Xleft, Yleft) and Pright(Xright, Yright) in the image coordinate systems of the left and right cameras respectively. Assuming that the image planes of the two cameras lie in the same plane, the feature point P can be obtained by stereo matching, and its image coordinates in the left and right camera image coordinate systems have the same value on the Y axis, that is Yleft = Yright = Y. From the principle of similar triangles, formula (7) can be obtained:

Xleft = f·x_c / z_c
Xright = f·(x_c - B) / z_c
Y = f·y_c / z_c      (7)

where f is the effective focal length of the cameras.
the parallax is defined as a difference in direction of viewing the same object from two points at a certain distance, and can be expressed as the following formula (8):
Disparity=Xleft-Xright (8)
from this, the three-dimensional coordinates of the feature point P in the camera coordinate system can be calculated as:
x_c = B·Xleft / Disparity
y_c = B·Y / Disparity
z_c = B·f / Disparity
according to the formula calculation, as long as any point on the imaging plane of the left camera can find a corresponding matching point on the imaging plane of the right camera, the three-dimensional coordinates of the point P in the camera coordinate system can be obtained through the image coordinates of the space point P in the image coordinate systems of the left camera and the right camera. In summary, as long as all points on the image plane can find corresponding matching points of the left and right imaging planes, the corresponding three-dimensional point coordinates can be obtained.
For convenience of calculation, the three-dimensional point cloud of the left camera imaging coordinate system obtained from the above may be converted from the left camera coordinate system to the world coordinate system, that is, the point cloud data of the left camera coordinate system is converted to the world coordinate system by the rotation matrix R and the translation matrix T, specifically, the following formula (9):
[x_w, y_w, z_w]^T = R·[x_c, y_c, z_c]^T + T      (9)
in this way, the three-dimensional system reconstruction can be completed by unifying all the three-dimensional coordinates of the point P to the ideal world coordinate system.
Step 208: and partitioning the point cloud data around the embedded sleeve in the point cloud data of the track slab.
It should be noted that, through the binocular stereo vision module and the line structured light scanning module, a point cloud picture reconstructed after the semiconductor laser scans the track slab can be finally obtained, and then only the point cloud data around the embedded casing needs to be extracted, so that the required three-dimensional coordinate information of the embedded casing can be obtained.
Step 209: and preprocessing the point cloud data around the embedded sleeve, finding out the circle and the radius of the embedded sleeve by using a least square method, and determining the diameter of the embedded sleeve.
It should be noted that the coordinate data of the three-dimensional point cloud finally obtained by the binocular vision principle is basic data of the following steps of three-dimensional scene reproduction, three-dimensional data fitting and the like. Because of the influence of factors such as a camera used in the measurement process, an external measurement environment, an illumination angle and the like, the finally obtained three-dimensional point cloud data has measurement errors of different degrees, and the measurement errors influence the subsequent data processing to a certain degree. Therefore, the measured data needs to be preprocessed first, and the measured data is guaranteed to be effective and feasible. The invention uses least square method to process the measured data.
The least squares method, also known as the method of least squares, is the mathematical optimization method most commonly applied to measured data. Using least squares makes it convenient to compute unknown data while minimizing the sum of the squares of the errors between the computed data and the actual data. In particular, it can be used to find the optimal fitting curve or surface.
For a given set of data points {(x_i, y_i)} (i = 0, 1, 2, 3, …, m), a function p(x) is sought within a chosen class of functions such that the errors E_i = p(x_i) - y_i have a minimum sum of squares, specifically formula (11):

Σ_{i=0}^{m} E_i² = Σ_{i=0}^{m} [p(x_i) - y_i]² = min      (11)

Geometrically, this amounts to finding the curve y = p(x) for which the sum of squared distances to the previously given data points {(x_i, y_i)} (i = 0, 1, 2, 3, …, m) is minimal. The function p(x) is called the fitting function or the least squares solution, and the method of solving for the fitting function p(x) is called the least squares method of curve fitting.
It should be noted that the point cloud data is obtained on the basis of the binocular stereoscopic vision principle, and the erroneous point cloud at the position of the track slab embedded sleeve is largely eliminated during matching, so the plane in which the embedded sleeve lies is approximately a two-dimensional plane. By segmenting and extracting the point cloud data around the track slab embedded sleeve, it can be seen that the coordinate points around the sleeve fall approximately on a circle. Circle fitting can therefore be performed on these coordinates with the least squares method, requiring the sum of the squared distances of the coordinate points to be a minimum; this yields the fitted circle center coordinates and radius, from which the diameter of the embedded sleeve is finally obtained.
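A minimal sketch of such a least-squares circle fit in its algebraic form is given below (pts is an assumed N-by-2 array of the segmented coordinate points around the casing, projected onto the plane of the sleeve):

    function [center, radius, diameter] = fitCircleLS(pts)
    % Algebraic least-squares circle fit: write the circle as
    % x² + y² = c1·x + c2·y + c3 and solve the overdetermined linear system.
    x = pts(:,1);  y = pts(:,2);
    A = [x, y, ones(size(x))];
    b = x.^2 + y.^2;
    c = A \ b;                       % least-squares solution of A*c = b
    center   = [c(1)/2, c(2)/2];
    radius   = sqrt(c(3) + center(1)^2 + center(2)^2);
    diameter = 2 * radius;
    end

In the concrete procedure described next, such a fit is applied twice: first to the coarse rectangular selection around the casing and again to the points reselected near the first fitted circle; the center distance between two adjacent casings then follows from the Euclidean distance between their fitted centers.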
Specifically, a concrete implementation of the circle fitting may proceed as follows, based on the principle that any three non-collinear points determine a circle. First, a matrix range is set in Matlab to roughly segment several groups of point cloud data around the embedded casing; the purpose of this matrix range is to locate the approximate position of the circle center and the approximate radius. A rectangular selection region is delimited by the matrix, and superimposing it on the embedded casing point cloud yields point cloud data at the four corners of the rectangle, as shown on the left of fig. 8 and fig. 9. A first circle fit through these points gives the rough range of the circle center and radius. Then, according to the result of the first fit, the round function in Matlab is used to keep one digit after the decimal point for the circle center, and the ceil function is used to round the radius up towards positive infinity. The adjusted circle center coordinates and radius are substituted into the equation of a circle in order to select the point cloud data lying around the circle of the embedded casing and to segment the coordinate points around it; the circle center and radius parameters are then fitted to the segmented coordinate points by the least squares method, as shown on the right of fig. 8 and fig. 9. The data obtained from the two fits of embedded casing 1 and embedded casing 2 are shown in Table 3 below:
table 3 data table obtained by fitting the embedded casing 1 and the embedded casing 2 twice
It should be noted that the embodiment of the present invention is described by taking only the data obtained from the two fits of embedded casing 1 and embedded casing 2 shown in Table 3 as an example; Table 3 is not intended to limit the embodiment of the present invention.
As can be seen from Table 3, when the measured diameter of the track slab embedded casing is compared with the nominal diameter of 25 mm, the relative measurement error is about 1.5%. The center distance between two adjacent casings of the same rail bearing platform, calculated from the measured circle-center coordinates, is about 234.82 mm against a standard value of 233.3 mm, a relative measurement error of 0.86%. The results obtained with the binocular system combined with the line structured light measurement method are therefore relatively accurate.
According to the invention, a coordinate system of the track slab can be established through the binocular stereoscopic vision module and the line structured light scanning module to obtain the point cloud data of the track slab; the point cloud data around the embedded sleeve is then segmented, the circle and the radius of the embedded sleeve are determined, and the diameter of the embedded sleeve is further calculated. Determining the diameter of the embedded sleeve from point cloud data obtained with the binocular stereoscopic vision module and the line structured light module removes the need to measure each sleeve one by one with hand-held measuring tools, which saves a large amount of manpower and material resources, gives high measurement efficiency, and avoids potential safety hazards during measurement. Secondly, because the two cameras view the structured light from different shooting angles, erroneous point cloud data at the edge of the embedded casing can be effectively removed during feature extraction and matching-point registration, which facilitates subsequent data processing and increases the accuracy of the measurement. In addition, stereo matching of the laser light bars with the epipolar constraint method greatly reduces the amount of computation and improves the matching accuracy, so that accurate measurement parameters can be obtained subsequently. Moreover, least squares fitting adapted to the characteristics of the point cloud of the embedded casing is used to obtain the measurement parameters, giving more accurate measurement data.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (8)

1. A method for measuring the diameter of an embedded sleeve of a track slab is characterized by comprising the following steps:
establishing a coordinate system of the track slab through a binocular stereoscopic vision module and a linear structured light scanning module to obtain point cloud data of the track slab;
segmenting point cloud data around the embedded sleeve in the point cloud data of the track slab;
and preprocessing the point cloud data around the embedded sleeve, finding out the circle and the radius of the embedded sleeve by using a least square method, and determining the diameter of the embedded sleeve.
2. The method of claim 1, wherein the establishing a coordinate system of a track slab by a binocular stereo vision module and a line structured light scanning module to obtain the point cloud data of the track slab comprises:
building a measuring platform, wherein the measuring platform comprises the binocular stereoscopic vision module and the linear structured light scanning module;
calibrating two cameras included in the binocular stereo vision module to obtain internal parameters and external parameters of the two cameras;
scanning the position of an embedded sleeve of the track slab by using a semiconductor laser included by the line structured light scanning module to obtain left and right images detected by two cameras included by the binocular stereo vision module;
extracting the central line of a laser light bar in the image by using Matlab;
matching the laser light bars extracted from the left image and the right image to obtain effective matching data;
and finishing three-dimensional reconstruction of the point cloud data according to the matched data, establishing a coordinate system of the track slab, and obtaining the point cloud data of the track slab.
3. The method according to claim 2, characterized in that the binocular stereoscopic vision module consists of two subminiature PoE industrial digital cameras, of the type MER-131-75GM/C-P, fixed on a support at an angle, respectively, to acquire images from the left and right sides; the line structure light scanning module comprises a semiconductor laser, an electric control displacement platform and a sliding guide rail, wherein the semiconductor laser is fixed on the electric control displacement platform and moves along the horizontal direction of the sliding guide rail under the control of a stepping motor.
4. The method according to claim 2, wherein calibrating the two cameras included in the binocular stereoscopic vision module to obtain the intrinsic parameters and extrinsic parameters of the two cameras comprises:
using a 9 × 9 checkerboard with a square side length of 10 mm as the calibration object, placing the calibration object in the common field of view of the two cameras, and adjusting the focal lengths of the two cameras so that sharp images are captured;
capturing a plurality of groups of images in different orientations by changing the pose of the calibration object;
performing two-camera calibration on the captured groups of images through the stereoCameraCalibrator app in the Matlab toolbox.
5. The method as claimed in claim 3, wherein scanning the position of the embedded sleeve of the track slab by using the semiconductor laser included in the line structured light scanning module to obtain the left and right images captured by the two cameras included in the binocular stereoscopic vision module comprises:
each time the semiconductor laser moves, a laser light bar at one position is projected onto the track slab;
the two cameras each acquire one image, yielding the left image and the right image.
6. The method according to claim 2, wherein, after acquiring the left and right images captured by the two cameras included in the binocular stereoscopic vision module, the method further comprises: performing distortion correction on the acquired images.
7. The method of claim 2, wherein extracting the centerline of the laser light bar in the image using Matlab comprises:
finding the normal direction of the laser light bar through the Hessian matrix of the image, and then performing a Taylor series expansion along the normal direction of the laser light bar to obtain the sub-pixel image coordinates of the center point of the laser light bar.
8. The method of claim 2, wherein matching the laser light bars extracted from the left and right images to obtain valid matching data comprises: matching by using an epipolar constraint method.
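As a rough, non-authoritative illustration of the sub-pixel centerline extraction recited in claim 7 (the claim names Matlab; the sketch below uses Python with NumPy and SciPy instead), the Hessian of a Gaussian-smoothed image gives the normal direction of the light bar, and a single Taylor/Newton step along that normal locates the sub-pixel center. The function name, smoothing scale and thresholds are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def stripe_centers(img, sigma=3.0, min_intensity=50.0, min_curvature=1.0):
        """Sub-pixel laser light bar centers via Hessian direction + Taylor step.

        img : 2-D grayscale image (bright light bar on a dark background).
        Returns an (N, 2) array of (x, y) sub-pixel center coordinates.
        """
        img = np.asarray(img, dtype=float)
        # Gaussian derivatives; the `order` tuple is (row = y, col = x)
        Ix  = gaussian_filter(img, sigma, order=(0, 1))
        Iy  = gaussian_filter(img, sigma, order=(1, 0))
        Ixx = gaussian_filter(img, sigma, order=(0, 2))
        Iyy = gaussian_filter(img, sigma, order=(2, 0))
        Ixy = gaussian_filter(img, sigma, order=(1, 1))

        centers = []
        for r, c in zip(*np.nonzero(img > min_intensity)):   # bright pixels only
            H = np.array([[Ixx[r, c], Ixy[r, c]],
                          [Ixy[r, c], Iyy[r, c]]])
            evals, evecs = np.linalg.eigh(H)
            k = int(np.argmax(np.abs(evals)))
            if evals[k] > -min_curvature:      # a bright ridge needs strongly negative curvature
                continue
            nx, ny = evecs[:, k]               # normal direction of the light bar, in (x, y)
            # One Taylor/Newton step along the normal to the point where the
            # first directional derivative vanishes
            d1 = nx * Ix[r, c] + ny * Iy[r, c]
            d2 = nx * nx * Ixx[r, c] + 2.0 * nx * ny * Ixy[r, c] + ny * ny * Iyy[r, c]
            t = -d1 / d2
            if abs(t * nx) <= 0.5 and abs(t * ny) <= 0.5:     # center lies inside this pixel
                centers.append((c + t * nx, r + t * ny))
        return np.asarray(centers)

The eigenvalue test keeps only ridge-like pixels of the light bar, and the half-pixel test keeps only those candidates whose sub-pixel center actually falls inside the pixel under examination.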
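Similarly, the epipolar constraint matching of claim 8 can be sketched as follows: each stripe center in the left image is paired with the right-image center closest to its epipolar line, obtained from the fundamental matrix of the calibrated camera pair. This again is a minimal Python/NumPy sketch with assumed names and tolerances, not the patented implementation.

    import numpy as np

    def match_stripe_points(left_pts, right_pts, F, max_dist=1.0):
        """Match laser light bar centers between left and right images
        using the epipolar constraint.

        left_pts, right_pts : (N, 2) and (M, 2) arrays of sub-pixel centers.
        F                   : 3 x 3 fundamental matrix of the calibrated pair.
        max_dist            : maximum point-to-epipolar-line distance (pixels).
        Returns a list of (left_index, right_index) pairs.
        """
        lp = np.hstack([np.asarray(left_pts, float), np.ones((len(left_pts), 1))])
        rp = np.hstack([np.asarray(right_pts, float), np.ones((len(right_pts), 1))])
        matches = []
        for i, x in enumerate(lp):
            line = F @ x                               # epipolar line in the right image
            a, b, _ = line
            dist = np.abs(rp @ line) / np.hypot(a, b)  # distance of every right point to the line
            j = int(np.argmin(dist))
            if dist[j] < max_dist:
                matches.append((i, j))
        return matches

Restricting the candidate set to points near a single epipolar line keeps the search essentially one-dimensional, which is the reduction in computation the description attributes to this step; the matched pairs can then be triangulated with the calibrated projection matrices to build the point cloud of the track slab.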
CN201911426818.9A 2019-12-30 2019-12-30 Method for measuring diameter of track slab embedded sleeve Pending CN111780678A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911426818.9A CN111780678A (en) 2019-12-30 2019-12-30 Method for measuring diameter of track slab embedded sleeve

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911426818.9A CN111780678A (en) 2019-12-30 2019-12-30 Method for measuring diameter of track slab embedded sleeve

Publications (1)

Publication Number Publication Date
CN111780678A (en) 2020-10-16

Family

ID=72754992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911426818.9A Pending CN111780678A (en) 2019-12-30 2019-12-30 Method for measuring diameter of track slab embedded sleeve

Country Status (1)

Country Link
CN (1) CN111780678A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112525106A (en) * 2020-10-23 2021-03-19 清华大学 Three-phase machine cooperative laser-based 3D detection method and device
CN112525106B (en) * 2020-10-23 2022-08-26 清华大学 Three-phase machine cooperative laser-based 3D detection method and device
CN114322755A (en) * 2021-11-16 2022-04-12 中铁十四局集团房桥有限公司 Automatic detection device and detection method for bottom template of switch tie
CN115115602A (en) * 2022-05-31 2022-09-27 江苏濠汉信息技术有限公司 Algorithm for positioning texture in wire diameter measurement process
CN115115602B (en) * 2022-05-31 2023-09-19 江苏濠汉信息技术有限公司 Algorithm for texture positioning in wire diameter measurement process
CN115638725A (en) * 2022-10-26 2023-01-24 成都清正公路工程试验检测有限公司 Target point position measuring method based on automatic measuring system
CN116659419A (en) * 2023-07-28 2023-08-29 成都市特种设备检验检测研究院(成都市特种设备应急处置中心) Elevator guide rail parameter measuring device and method
CN116659419B (en) * 2023-07-28 2023-10-20 成都市特种设备检验检测研究院(成都市特种设备应急处置中心) Elevator guide rail parameter measuring device and method

Similar Documents

Publication Publication Date Title
CN111780678A (en) Method for measuring diameter of track slab embedded sleeve
Liu et al. Concrete crack assessment using digital image processing and 3D scene reconstruction
Hou et al. Experimentation of 3D pavement imaging through stereovision
US7430312B2 (en) Creating 3D images of objects by illuminating with infrared patterns
Zhou et al. Rail profile measurement based on line-structured light vision
CN104990515B (en) Large-sized object three-dimensional shape measure system and its measuring method
CN114998499B (en) Binocular three-dimensional reconstruction method and system based on line laser galvanometer scanning
CN111091076B (en) Tunnel limit data measuring method based on stereoscopic vision
JP2017510793A (en) Structured optical matching of a set of curves from two cameras
CN105096314A (en) Binary grid template-based method for obtaining structured light dynamic scene depth
Zhao et al. A robust method for registering ground-based laser range images of urban outdoor objects
Kuznetsov et al. Image rectification for prism-based stereoscopic optical systems
CN116245926A (en) Method for determining surface roughness of rock mass and related assembly
CN114877826B (en) Binocular stereo matching three-dimensional measurement method, system and storage medium
Kochi et al. Development of 3D image measurement system and stereo‐matching method, and its archaeological measurement
Thanusutiyabhorn et al. Image-based 3D laser scanner
CN212363177U (en) Robot with multiple cameras for distance measurement
Pedersini et al. Automatic monitoring and 3D reconstruction applied to cultural heritage
Cui et al. Epipolar geometry for prism-based single-lens stereovision
KR20020037778A (en) Three-Dimensional Shape Measuring Method
CN116542927A (en) Track plate product flatness detection method based on camera vision
Zhao et al. 3D concave defect measurement system of the cryogenic insulated cylinder based on linear structured light
Jin et al. A Stereo Vision-Based Flexible Deflection Measurement System
Meng et al. An angle measurement system for bending machine based on binocular active vision
Ma et al. Hybrid scene reconstruction by integrating scan data and stereo image pairs

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20201016