CN112565690B - Tunnel convergence monitoring method and device - Google Patents

Tunnel convergence monitoring method and device

Info

Publication number
CN112565690B
Authority
CN
China
Prior art keywords
matching
image
detected
determining
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011360247.6A
Other languages
Chinese (zh)
Other versions
CN112565690A (en)
Inventor
赵文一
王一妍
包元锋
江子君
Current Assignee
Hangzhou Ruhr Technology Co Ltd
Original Assignee
Hangzhou Ruhr Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Ruhr Technology Co Ltd filed Critical Hangzhou Ruhr Technology Co Ltd
Priority to CN202011360247.6A priority Critical patent/CN112565690B/en
Publication of CN112565690A publication Critical patent/CN112565690A/en
Application granted granted Critical
Publication of CN112565690B publication Critical patent/CN112565690B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a tunnel convergence monitoring method and a tunnel convergence monitoring device. The method comprises the following steps: obtaining a frame sequence of a target tunnel, wherein the frame sequence comprises images of at least one time point; determining a reference image and an image to be detected based on the frame sequence, and determining a template image based on the reference image, wherein the template image comprises at least one target, so that the template image participating in matching has distinctive features, which improves the image matching precision and, in turn, the precision of the monitored tunnel convergence information; and matching the template image with the image to be detected to obtain a matching region of the image to be detected that corresponds to the template image. Convergence information of the target tunnel is determined based on the matching region. Because the convergence information of the tunnel is monitored in real time through image matching, the method and the device are suitable for various monitoring scenes and also speed up the acquisition of the tunnel convergence information.

Description

Tunnel convergence monitoring method and device
Technical Field
The embodiment of the invention relates to the technical field of tunnel safety monitoring, in particular to a tunnel convergence monitoring method and device.
Background
With the continuous development of transportation, a large number of tunnels for subways, automobiles, trains, and high-speed rail have appeared. Tunnel convergence deformation data have become a key index for evaluating tunnel safety. Tunnel convergence monitoring is an important component of tunnel measurement: it directly reflects whether the structural deformation of the tunnel exceeds the allowable safety range, and it provides effective parameters for damage identification and safety monitoring of the tunnel structure.
However, most current tunnel convergence measurement uses a convergence gauge: several marker points are set on the tunnel cross-section in advance, and each marker point is then measured with the gauge.
Disclosure of Invention
The invention provides a tunnel convergence monitoring method and device that determine the convergence information of a tunnel based on image matching, so that tunnel convergence is monitored in real time and the method and device are applicable to various monitoring scenes.
In a first aspect, an embodiment of the present invention provides a method for monitoring tunnel convergence, including:
acquiring a frame sequence of a target tunnel, wherein the frame sequence comprises images of at least one time point;
determining a reference image and an image to be measured based on the frame sequence, and determining a template image based on the reference image, wherein the template image comprises at least one target;
matching the template image with the image to be detected to obtain a matching area of the image to be detected and the template image;
determining convergence information of the target tunnel based on the matching region.
In a second aspect, an embodiment of the present invention further provides a device for monitoring tunnel convergence, where the device includes:
a frame sequence acquisition module, configured to acquire a frame sequence of a target tunnel, wherein the frame sequence comprises images of at least one time point;
the template image determining module is used for determining a reference image and an image to be detected based on the frame sequence and determining a template image based on the reference image, wherein the template image comprises at least one target;
the matching module is used for matching the template image with the image to be detected to obtain a matching area of the image to be detected and the template image;
a convergence determining module for determining convergence information of the target tunnel based on the matching region.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the tunnel convergence monitoring method according to the embodiments of the present invention.
The embodiment of the invention has the following advantages or beneficial effects:
Obtaining a frame sequence of a target tunnel, wherein the frame sequence comprises images of at least one time point; determining a reference image and an image to be detected based on the frame sequence, and determining a template image based on the reference image, wherein the template image comprises at least one target, so that the template image participating in matching has distinctive features, which improves the image matching precision and, in turn, the precision of the monitored tunnel convergence information; matching the template image with the image to be detected to obtain a matching region of the image to be detected that corresponds to the template image, and determining the convergence information of the target tunnel based on the matching region. Because the convergence information of the tunnel is monitored in real time through image matching, the method and the device are suitable for various monitoring scenes and also speed up the acquisition of the tunnel convergence information.
Drawings
To illustrate the technical solutions of the exemplary embodiments of the present invention more clearly, the drawings used in describing the embodiments are briefly introduced below. Obviously, the described drawings cover only some of the embodiments of the invention; a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic flowchart of a tunnel convergence monitoring method according to an embodiment of the present invention;
fig. 2 is a schematic top view of an apparatus layout according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a sliding window according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating a tunnel convergence monitoring method according to a second embodiment of the present invention;
fig. 5 is a model diagram of a pinhole camera according to a second embodiment of the present invention;
FIG. 6 is a schematic diagram of a sub-pixel fitting provided in the second embodiment of the present invention;
FIG. 7 is a simplified diagram of an in-tunnel imaging system according to a second embodiment of the present invention;
fig. 8 is a schematic structural diagram of a tunnel convergence monitoring apparatus according to a third embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a schematic flowchart of a tunnel convergence monitoring method according to an embodiment of the present invention, which is applicable to a situation that convergence data of a target tunnel needs to be determined according to captured images of various time points of the target tunnel, where the method may be executed by a tunnel convergence monitoring device, and the device may be implemented by hardware and/or software, and the method specifically includes the following steps:
s110, acquiring a frame sequence of the target tunnel, wherein the frame sequence comprises at least one time image.
The target tunnel refers to a tunnel whose convergence needs to be monitored in order to judge the degree of its structural deformation, such as a railway tunnel or a highway tunnel. The frame sequence refers to an image sequence acquired at the same position in the same target tunnel along a continuous time axis; each frame corresponds to an image acquired at one time point. It should be noted that the frame sequence of the target tunnel mainly refers to images acquired of the inner wall of the target tunnel, so that the convergence data of the tunnel are determined from the deformation of the inner wall. Because tunnel convergence is a slow process, the frame sequence can be sampled at a relatively low frequency (frame rate), which reduces the amount of computation for the tunnel convergence data.
Illustratively, the installation of the camera that acquires the frame sequence of the target tunnel is described as follows. The camera needs to be fixed to the inner wall, i.e., the side, of the tunnel, and must be parallel to the tunnel bottom. This parallel placement can be calibrated with an inclinometer: the inclinometer is first placed horizontally on the tunnel bottom to measure the current pitch angle, and is then placed on the camera so that the pitch angle of the camera equals that of the tunnel bottom (exact equality is difficult in practice, but the difference between the two pitch angles can be kept below 0.1 degrees); the camera and the tunnel bottom are then parallel. As shown in fig. 2, the point O in the top view is the installation position of the camera, and the ray OP is the optical axis of the camera. Because of the imaging characteristics of the camera, pixels at the edge of the acquired image are severely distorted, so the cross-section to be measured should lie in the central part of the camera image; there is therefore an included angle, called the yaw angle, between the camera and the direction in which the tunnel extends. Meanwhile, because camera imaging is two-dimensional, no natural feature points that reflect the direction of tunnel convergence motion can be selected, so artificial targets must be installed on the inner wall of the tunnel as points to be measured, such as points L and R in fig. 2, which are at the same height as the camera.
In practice, the number of targets on the inner wall of the cross-section to be measured can be increased as required. If the intended camera mounting position on the tunnel inner wall is itself at risk of convergence, a column can be erected on the tunnel bottom next to that position and the camera mounted on the column.
And S120, determining a reference image and an image to be detected based on the frame sequence, and determining a template image based on the reference image, wherein the template image comprises at least one target.
The reference image and the image to be detected come from the same image frame sequence. The reference image is used to map part of its content onto the image to be detected, so that this part of the reference image corresponds one-to-one, in space, with the same position points in the image to be detected; this part of the reference image is the template image. Each image frame sequence may include one reference image and several images to be detected. For example, the first frame of the sequence may serve as the reference image and every subsequent frame as an image to be detected; alternatively, an intermediate frame may be chosen as the reference image and the remaining frames as images to be detected. In this way, image matching along a continuous time axis is realized, and a continuous series of tunnel convergence information is obtained.
Specifically, the reference image and the image to be detected contain at least two targets. A target can be regarded as a manually placed tunnel feature point; such feature points are needed to improve the accuracy of image matching, because the tunnel inner wall contains little edge information. A region of the reference image containing a target is therefore selected as the template image, so that the pixels of the template image and of the image to be detected correspond one-to-one at the target position. For example, a feature descriptor such as the histogram of oriented gradients (HOG) or the scale-invariant feature transform (SIFT) may be used to crop a template image containing at least one target from the reference image; the template image may also be determined with an object-detection network model, such as an FPN (Feature Pyramid Network) model, a RetinaNet model, or an SSD (Single Shot MultiBox Detector) model.
It should be noted that, because lighting conditions inside a tunnel are relatively poor, the target itself needs to emit light to guarantee the imaging effect at night; for example, an infrared light source or an ordinary visible light source can be used as the target, and when the target is installed, its emitting direction must face the camera. An infrared light source is convenient, has strong penetrating power and a good lighting effect, but requires a camera that supports infrared imaging; a visible light source is therefore preferred. There are many visible light sources to choose from, and the choice must weigh luminous intensity, frequency, heat radiation, service life, cost-effectiveness, and other factors together. Table 1 compares the performance of three common vision-sensor light sources.
TABLE 1 comparison of light Source Performance
[Table 1: performance comparison of three common vision-sensor light sources; reproduced in the original as an image.]
Because the vision-sensor equipment must operate continuously around the clock, a light source with a relatively long service life is required. A high-frequency fluorescent lamp needs a dedicated high-frequency power supply, and its flicker frequency must be higher than the image acquisition frequency of the camera, so its operating conditions are demanding. By comparison, LED arrays can be designed in various shapes as required to match different lighting modes; an LED light source is therefore preferably used as the target.
S130, matching the template image with the image to be detected to obtain a matching area of the image to be detected and the template image.
The template image can correspond to pixel points at the same target in the image to be detected one by one. The matching region refers to a region of the image to be measured having the highest correlation with the information of the template image, that is, a region including the same target. The matching of the template image and the image to be detected is to compare the information of each area in the template image and the image to be detected respectively, so as to determine the matching area with the highest correlation with the template image information from the image to be detected. The information correlation degree may be gradient information of the pixel points, gray value information, or a gray variation trend between the pixel points, or other correlation degrees.
Specifically, regions with the same size as the template image are selected from the image to be detected according to a set rule, the information correlation between each selected region and the template image is calculated, and, by comparing the correlations of the regions, the region with the largest correlation is taken as the matching region. The set rule may be a sliding window whose size equals that of the template image and which moves from left to right and from top to bottom; fig. 3 shows the sliding direction of the window over the image to be detected, where I denotes the image to be detected and T denotes the sliding window. The window may also slide from right to left, from bottom to top, and so on, which is not limited in the present application. In an embodiment, the set rule may instead be: obtain the matching region determined for the previous image to be detected (the image of the adjacent frame), expand that matching region by a preset range to obtain the approximate outline of the candidate region in the current image to be detected, and select the regions whose correlation with the template image is to be computed based on this outline. Guiding the matching of the current image by the matching region of the previous image increases the matching speed between the image to be detected and the template image.
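The neighborhood-guided search just described can be sketched as follows. This is an illustrative sketch rather than the patent's implementation: all names are hypothetical, and the sum of absolute differences is used as a simple stand-in for the similarity measure (lower is better).

```python
import numpy as np

def search_near_previous(image, template, prev_xy, margin):
    """Scan only a small neighborhood around the matching position found in
    the previous frame, instead of the whole image.  `prev_xy` is the
    top-left corner (x, y) of the previous matching region; `margin` is the
    preset range by which that region is expanded."""
    H, W = image.shape
    h, w = template.shape
    px, py = prev_xy
    best_cost, best_xy = np.inf, prev_xy
    for y in range(max(0, py - margin), min(H - h, py + margin) + 1):
        for x in range(max(0, px - margin), min(W - w, px + margin) + 1):
            # sum of absolute differences between the window and the template
            cost = np.abs(image[y:y + h, x:x + w] - template).sum()
            if cost < best_cost:
                best_cost, best_xy = cost, (x, y)
    return best_xy
```

Restricting the scan to a (2·margin + 1)² neighborhood reduces the number of candidate windows from the whole image to a small constant, which is where the speed-up claimed above comes from.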
Optionally, matching the template image with the image to be detected to obtain a matching region of the image to be detected includes: matching the information of each pixel in the template image with the pixel information of each region in the image to be detected, determining the similarity, and taking the region with the highest similarity as the matching region.
The matching region is determined by calculating the similarity between the pixel value of each pixel in each region in the image to be detected and the pixel value of each pixel in the template image. Specifically, the calculation of the similarity satisfies the following formula:
R(x, y) = \frac{\sum_{x', y'} \left[ T'(x', y') \cdot I'(x + x',\, y + y') \right]}{\sqrt{\sum_{x', y'} T'(x', y')^{2} \cdot \sum_{x', y'} I'(x + x',\, y + y')^{2}}}
in the formula, (x, y) represents the coordinates of the pixel points at the upper left corner of the selected area in the image to be detected, R (x, y) represents the similarity of the selected area in the image to be detected with (x, y) as the top left corner vertex and the size of the template image as the size, (x ', y') represents the coordinates of the pixel points in the template image, T '(x', y ') is the pixel value of the pixel points (x', y ') of the template image minus the pixel mean value of the template image, and I' (x + x ', y + y') is the pixel value of the pixel points (x + x ', y + y') of the selected area in the image to be detected minus the pixel mean value of the selected area in the image to be detected. Wherein, the calculation formula of T '(x', y ') and I' (x + x ', y + y') is as follows:
T'(x', y') = T(x', y') - \frac{1}{w \cdot h} \sum_{x'', y''} T(x'', y'')

I'(x + x', y + y') = I(x + x', y + y') - \frac{1}{w \cdot h} \sum_{x'', y''} I(x + x'', y + y'')
where w and h are the width and height, in pixels, of the template image, respectively.
Using the similarity formula, the similarity of each selected region of the image to be detected can be calculated. For example, suppose the maximum similarity over all selected regions is R(3, 3); then the selected region whose upper-left pixel has coordinates (3, 3) is taken as the matching region. Since the template image size w × h is known, the upper-right pixel of the matching region is (3 + w, 3), the lower-left pixel is (3, 3 + h), and the lower-right pixel is (3 + w, 3 + h).
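A sketch of the similarity calculation and corner determination described above, written directly in NumPy. This is illustrative, not the patent's code; all names are hypothetical:

```python
import numpy as np

def similarity(image, template, x, y):
    """R(x, y): zero-mean normalized cross-correlation between the template
    and the window of `image` whose top-left corner is (x, y)."""
    h, w = template.shape                 # h rows (height), w columns (width)
    win = image[y:y + h, x:x + w].astype(float)
    t_p = template - template.mean()      # T'(x', y')
    i_p = win - win.mean()                # I'(x + x', y + y')
    denom = np.sqrt((t_p ** 2).sum() * (i_p ** 2).sum())
    return (t_p * i_p).sum() / denom if denom > 0 else 0.0

def match(image, template):
    """Evaluate R at every candidate position and return the four corner
    coordinates of the maximum-similarity region plus its score."""
    H, W = image.shape
    h, w = template.shape
    scores = np.array([[similarity(image, template, x, y)
                        for x in range(W - w + 1)]
                       for y in range(H - h + 1)])
    y0, x0 = np.unravel_index(np.argmax(scores), scores.shape)
    corners = {"top_left": (x0, y0), "top_right": (x0 + w, y0),
               "bottom_left": (x0, y0 + h), "bottom_right": (x0 + w, y0 + h)}
    return corners, scores[y0, x0]
```

Matching the worked example in the text: if the maximum is found at (3, 3), the returned corners are (3, 3), (3 + w, 3), (3, 3 + h), and (3 + w, 3 + h).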
Since the present embodiment uses the light source as the target, generally, the influence of the illumination on the image can be expressed by the following formula:
I″ = α·I + β,
where α is a global illumination gain and β a global illumination offset.
as can be seen from the illumination influence formula and the calculation formula of the similarity, the embodiment normalizes the illumination of the image, that is, the embodiment can resist global illumination intensity change. The illumination influence formula is substituted into the calculation formula of the similarity, so that the illumination coefficients are eliminated. Therefore, the similarity calculation method in this embodiment can obtain a clear and accurate image frame sequence, thereby improving the accuracy of image matching and further improving the accuracy of the convergence information of the monitored tunnel.
And S140, determining convergence information of the target tunnel based on the matching area.
If the target tunnel converges during acquisition of the image frame sequence, the position of the matching region in the image to be detected changes relative to the position of the template image in the reference image, so the convergence information of the target tunnel can be determined from the matching region. Specifically, the position of the matching region can be represented by the coordinates of one of its pixels in the image to be detected, such as the upper-left vertex or the center point of the region. The position of the template image can be represented by the coordinates of the same pixel in the reference image. Comparing the coordinates of that pixel in the image to be detected with its coordinates in the reference image gives the position change of the pixel, and hence the displacement of the matching region relative to the template image. The convergence information of the target tunnel can be characterized by this displacement: the displacement of the matching region can be mapped to the actual displacement of the target tunnel, yielding the convergence information; or the coordinates of the matching region and of the template image can be mapped directly to actual spatial points of the target tunnel, from which the convergence information is obtained.
According to the technical solution of this embodiment, a frame sequence of a target tunnel is obtained, wherein the frame sequence comprises images of at least one time point; a reference image and an image to be detected are determined based on the frame sequence, and a template image is determined based on the reference image, wherein the template image comprises at least one target, so that the template image participating in matching has distinctive features, which improves the image matching precision and, in turn, the precision of the monitored tunnel convergence information; the template image is matched with the image to be detected to obtain a matching region, and the convergence information of the target tunnel is determined based on the matching region. Because the convergence information of the tunnel is monitored in real time through image matching, the method and the device are suitable for various monitoring scenes and also speed up the acquisition of the tunnel convergence information.
Example two
Fig. 4 is a flowchart of a tunnel convergence monitoring method according to a second embodiment of the present invention, and in this embodiment, based on the foregoing embodiments, further optimization is performed on "determining convergence information of a target tunnel based on a matching region". Wherein explanations of the same or corresponding terms as those of the above embodiments are omitted. Referring to fig. 4, the method for monitoring tunnel convergence provided in this embodiment includes the following steps:
s410, acquiring a frame sequence of the target tunnel, wherein the frame sequence comprises at least one time image.
S420, determining a reference image and an image to be measured based on the frame sequence, and determining a template image based on the reference image, wherein the template image comprises at least one target.
And S430, matching the template image with the image to be detected to obtain a matching area of the image to be detected and the template image.
And S440, determining the actual displacement amount of the target tunnel in the vertical direction and/or the horizontal direction based on the matching area.
The actual displacement of the target tunnel in the horizontal direction refers to the convergence distance of the inner wall of the target tunnel in the horizontal direction, and the horizontal direction refers to the direction from one side of the inner wall of the tunnel to the other side of the inner wall of the tunnel. The actual displacement of the target tunnel in the vertical direction refers to the convergence distance of the inner wall of the target tunnel in the vertical direction, and the vertical direction refers to the direction from the inner wall of the tunnel to the ground. Specifically, the actual displacement in the horizontal/vertical direction may be obtained by mapping the horizontal/vertical displacement of the matching region with respect to the template image, or may be obtained by mapping the matching region and the template image to actual spatial points, and then calculating the difference between the two actual spatial points in the horizontal/vertical direction. In the present embodiment, only the actual displacement amount of the target tunnel in the horizontal direction or the vertical direction may be determined, or the actual displacement amount of the target tunnel in the horizontal direction and the vertical direction may be determined at the same time.
According to the technical scheme, the actual displacement of the tunnel is monitored in real time based on image matching, so that accurate convergence data are obtained, and the method is suitable for various monitoring scenes.
Optionally, after obtaining the matching region between the image to be detected and the template image, the method further includes: determining a reference matching coordinate and a matching coordinate to be detected based on the vertex pixel point of the matching area; correspondingly, the actual displacement amount of the target tunnel in the vertical direction and/or the horizontal direction is determined based on the matching area, and the method comprises the following steps: and respectively determining the actual displacement of the target tunnel in the vertical direction and/or the horizontal direction based on the reference matching coordinate and the matching coordinate to be detected.
The vertex pixel of the matching region may be the upper-left, lower-left, upper-right, or lower-right pixel of the region. The coordinate of the matching region in the image to be detected and the coordinate of the template image in the reference image, i.e., the matching coordinate to be measured and the reference matching coordinate, are uniquely determined by the chosen vertex pixel. The reference matching coordinate and the matching coordinate to be measured can be regarded as the coordinates of the target contained in the template image in the reference image and in the image to be detected, respectively. For example, denote the reference matching coordinate by (x1, y1) and the matching coordinate to be measured by (x2, y2). The actual displacements of the target tunnel in the horizontal and vertical directions can then be obtained by mapping x2 - x1 and y2 - y1 to physical distances, or equivalently represented by the differences x2' - x1' and y2' - y1' between the mapped coordinates (x1', y1') and (x2', y2').
Illustratively, since the camera capturing the image frame sequence is mounted parallel to the bottom of the target tunnel, the optical axis is perpendicular to the target plane when the camera directly faces it. According to the pinhole camera model, the displacement of the captured target in the vertical direction and its displacement in the image are related by similar triangles, as shown in fig. 5: AB is the actual displacement of the target tunnel in the vertical direction, and A_iB_i is the corresponding displacement on the image plane, i.e., the difference between the ordinates of the reference matching coordinate and the matching coordinate to be measured. As fig. 5 shows, AB can be calculated from A_iB_i by similar triangles. Therefore, the vertical displacement of the image to be measured relative to the reference image is obtained first, and the actual vertical displacement is then obtained through the similar-triangle transformation. For example, if the image-plane vertical displacement is A_iB_i, the actual displacement AB of the target tunnel in the vertical direction satisfies the following formula:
AB = (AiBi × d) / f
where AiBi is the displacement on the image plane (the ordinate difference converted to physical units by the pixel size), AB is the actual displacement in the vertical direction, f is the focal length of the camera, and d is the working distance from the camera to the target acquisition plane.
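As a minimal sketch of this similar-triangle conversion — assuming the ordinate difference is given in pixels and is converted to sensor units by a pixel size δ; the function and parameter names are illustrative, not the patent's:

```python
def vertical_displacement(y_ref, y_meas, pixel_size, focal_length, working_distance):
    """Similar-triangle (pinhole) conversion of the ordinate difference
    between the reference matching coordinate and the matching coordinate
    to be measured into the actual vertical displacement AB."""
    image_disp = (y_meas - y_ref) * pixel_size           # AiBi on the sensor plane
    return image_disp * working_distance / focal_length  # AB = AiBi * d / f

# 3-pixel shift, 5 um pixels, 25 mm focal length, 10 m working distance -> 6 mm
ab = vertical_displacement(100, 103, 5e-6, 25e-3, 10.0)
```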
In the embodiment, the reference matching coordinates and the matching coordinates to be detected are determined based on the vertex pixel points of the matching area, and the actual displacement of the target tunnel in the vertical direction and the actual displacement of the target tunnel in the horizontal direction are respectively determined based on the reference matching coordinates and the matching coordinates to be detected, so that the convergence information of the target tunnel can be accurately acquired.
Optionally, after the reference matching coordinate and the matching coordinate to be measured are respectively determined, the method further includes: and performing sub-pixel fitting processing on the reference matching coordinate and the matching coordinate to be detected, and updating the reference matching coordinate and the matching coordinate to be detected based on the result of the sub-pixel fitting processing.
In practical applications, to reduce cost, a long-focus (telephoto) lens is usually not used for shooting, so one pixel in the image may represent several millimetres to several centimetres in reality; the reference matching coordinate and the matching coordinate to be detected are therefore not accurate enough, and sub-pixel fitting is required to obtain more accurate coordinates. As described above, when the similarity between each candidate region of the image to be detected and the template image is calculated, the similarity of each candidate region is represented at the upper-left pixel of that region. In the neighborhood of the integer pixel point with the maximum similarity, the similarity of every integer pixel point is smaller than the similarity at some sub-pixel position, where the similarity reaches an extremum. As shown in fig. 6, the similarity at the extremum point of a region is higher than at the other points of the region; the left graph in fig. 6 shows the similarity of each point in the neighborhood of the integer pixel point with the maximum similarity, and the right graph shows the fitted similarity surface in that neighborhood, with its maximum. Therefore, a quadric surface can be fitted to the similarities of the integer pixel points in the neighborhoods of the reference matching coordinate and the matching coordinate to be detected, and the extremum point in each neighborhood obtained, where the expression of the quadric surface is:
F(x, y) = a1x² + a2y² + a3x + a4y + a5xy + a6
where F(x, y) is the similarity of the matching region represented by the reference matching coordinate or matching coordinate to be detected (x, y), and a1, a2, a3, a4, a5, a6 are unknown coefficients. Substituting (x, y) and the coordinates of the pixel points in its eight-neighborhood into the formula gives 9 equations, from which the 6 unknown coefficients can be solved. After the 6 coefficients are obtained, the coordinates of the extremum point can be solved by:
x0 = (a4a5 − 2a2a3) / (4a1a2 − a5²),  y0 = (a3a5 − 2a1a4) / (4a1a2 − a5²)
Substituting the reference matching coordinate and the coordinates of the pixel points in its eight-neighborhood into the quadric-surface expression yields the 6 coefficients for the reference matching coordinate; the sub-pixel position corresponding to the reference matching coordinate is then calculated with the extremum-point formula, and the reference matching coordinate is updated with that sub-pixel position. Correspondingly, the same operation is repeated on the matching coordinate to be detected and the pixel coordinates of its eight-neighborhood, the sub-pixel position corresponding to the matching coordinate to be detected is obtained, and the matching coordinate to be detected is updated with it. In this embodiment, sub-pixel fitting is performed on the reference matching coordinate and the matching coordinate to be detected, and both are updated based on the fitting result, so that coordinates of sub-pixel precision are obtained, sub-pixel positioning of the matching region is realized, and the precision of the actual displacement of the tunnel is improved.
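The fitting step above can be sketched as follows — a NumPy least-squares fit of the quadric over the 3×3 neighborhood of the integer-pixel maximum; the function name and neighborhood layout are illustrative assumptions:

```python
import numpy as np

def subpixel_peak(sim3x3, cx, cy):
    """Fit F(x, y) = a1*x**2 + a2*y**2 + a3*x + a4*y + a5*x*y + a6 to the
    3x3 similarity neighborhood centered on the integer-pixel maximum
    (cx, cy) and return the sub-pixel extremum position."""
    offsets = [(dx, dy) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    xs = np.array([o[0] for o in offsets], float)
    ys = np.array([o[1] for o in offsets], float)
    fs = np.array([sim3x3[dy + 1, dx + 1] for dx, dy in offsets], float)
    # 9 equations in the 6 unknown coefficients, solved by least squares
    A = np.column_stack([xs**2, ys**2, xs, ys, xs * ys, np.ones(9)])
    a1, a2, a3, a4, a5, a6 = np.linalg.lstsq(A, fs, rcond=None)[0]
    det = 4 * a1 * a2 - a5**2
    x0 = (a4 * a5 - 2 * a2 * a3) / det   # extremum offset in x
    y0 = (a3 * a5 - 2 * a1 * a4) / det   # extremum offset in y
    return cx + x0, cy + y0
```

For similarity values sampled from an exact quadric, the extremum is recovered exactly; for real data the fit smooths noise in the neighborhood.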
Optionally, determining an actual displacement of the target tunnel in the horizontal direction based on the reference matching coordinate and the matching coordinate to be detected, includes: and determining the actual displacement of the target tunnel in the horizontal direction based on the reference matching coordinate, the matching coordinate to be detected, the pixel size, the yaw angle, the camera focal length and the linear distance from the optical center of the camera to the target to be detected.
According to the foregoing, when the image frame sequence is actually acquired, a horizontal deflection angle — a yaw angle — exists between the camera and the cross-section of the target tunnel. Preferably, therefore, coordinate conversion can be performed on the reference matching coordinate and the matching coordinate to be detected using the pixel size, the yaw angle, the camera focal length and the camera optical center, to obtain an accurate actual displacement of the target tunnel in the horizontal direction. Fig. 7 is a simplified diagram of imaging inside a tunnel, where C is the camera optical center, e is the camera optical axis, and zOy is the plane of the tunnel cross-section, with the y-direction pointing vertically downward. Angle B′CO is the camera yaw angle θ; d = OC is the linear distance from the optical center to the plane to be measured; A is the initial position of the target to be measured; B is its current position; and AB is the actual displacement to be measured, i.e. the actual displacement of the target tunnel in the horizontal direction.
In the embodiment, the actual displacement amount of the target tunnel in the vertical direction is determined based on the vertical displacement; the actual displacement of the target tunnel in the horizontal direction is determined based on the reference matching coordinate, the matching coordinate to be measured, the pixel size, the yaw angle, the camera focal length and the linear distance from the camera optical center to the target to be measured, so that the actual displacement of the target tunnel is accurately acquired, and the accuracy of tunnel convergence information is improved.
Optionally, based on the reference matching coordinate, the matching coordinate to be measured, the pixel size, the yaw angle, the camera focal length and the linear distance from the camera optical center to the target to be measured, the actual displacement of the target tunnel in the horizontal direction is determined, which includes: determining a reference actual abscissa and an actual abscissa to be measured of the target tunnel based on the reference matching coordinate, the matching coordinate to be measured, the pixel size, the yaw angle, the camera focal length and the linear distance from the camera optical center to the target to be measured; and determining the actual displacement of the target tunnel in the horizontal direction based on the reference actual abscissa and the actual abscissa to be measured.
The difference between the reference actual abscissa and the actual abscissa to be measured is the actual displacement of the target tunnel in the horizontal direction. By first mapping the reference matching coordinate and the matching coordinate to be measured to the reference actual abscissa and the actual abscissa to be measured, and then taking their difference, the actual displacement of the target tunnel in the horizontal direction is obtained accurately.
Optionally, the reference actual abscissa and the actual abscissa to be measured of the target tunnel are determined based on the reference matching coordinate, the matching coordinate to be measured, the pixel size, the yaw angle, the camera focal length, and the linear distance from the camera optical center to the target to be measured, and the following formula is satisfied:
[Formula image in the original; not reproduced in the text.]

where A denotes a reference matching coordinate point or a matching coordinate point to be measured; z_A is the reference actual abscissa of the reference matching coordinate point or the actual abscissa to be measured of the matching coordinate point to be measured; l is the linear distance from the camera optical center to the target to be measured (the target in the template image), which can be measured with a laser range finder or the like; θ is the yaw angle; f is the camera focal length; the horizontal actual distance is that of the reference matching coordinate or the matching coordinate to be detected; and diag is the straight-line actual distance of the reference matching coordinate or the matching coordinate to be detected. diag and the horizontal actual distance are calculated by

diag = d_i × δ,  horizontal actual distance = x_i × δ,

where d_i is the straight-line pixel distance from the reference matching coordinate to the center of the reference image, or from the matching coordinate to be detected to the center of the image to be detected, i.e.

d_i = √(x² + y²),

where x and y are respectively the abscissa and the ordinate of the reference matching coordinate or the matching coordinate to be detected, taken relative to the image center; x_i is the pixel distance in the horizontal direction from the reference matching coordinate to the center of the reference image, or from the matching coordinate to be detected to the center of the image to be detected; and δ is the pixel size.
In this embodiment, the reference matching coordinate and the matching coordinate to be measured are respectively substituted into the above formula to obtain a reference actual abscissa and an actual abscissa to be measured, and the convergence value of the target tunnel in the horizontal direction is obtained by subtracting the reference actual abscissa from the actual abscissa to be measured, so that the convergence monitoring of the target tunnel in the horizontal direction is realized.
Optionally, the convergence direction and the convergence value of the target tunnel are determined based on the actual displacement of the target tunnel in the vertical direction and the horizontal direction.
The convergence direction of the target tunnel refers to the real convergence direction obtained by combining the actual displacements of the target tunnel in the vertical and horizontal directions. For example, if the actual displacements in the vertical and horizontal directions are denoted Y and X respectively, the convergence direction of the target tunnel can be represented by its angle with the ground,

α = arctan(Y / X),

and the convergence value in that direction is

√(X² + Y²).
In this embodiment, the real convergence data of the tunnel inner wall of the target tunnel is obtained by determining the convergence direction and the convergence numerical value of the target tunnel, so that real-time convergence monitoring of the target tunnel is realized.
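These two formulas can be sketched as follows (atan2 is used so the quadrant of the convergence direction is preserved; names are illustrative):

```python
import math

def convergence(vertical_disp, horizontal_disp):
    """Combine the vertical (Y) and horizontal (X) actual displacements
    into the convergence direction (angle with the ground, in degrees)
    and the convergence value in that direction."""
    direction = math.degrees(math.atan2(vertical_disp, horizontal_disp))
    value = math.hypot(horizontal_disp, vertical_disp)
    return direction, value
```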
EXAMPLE III
Fig. 8 is a schematic structural diagram of a tunnel convergence monitoring device according to a third embodiment of the present invention, applicable to situations in which convergence data of a target tunnel needs to be determined from images of the target tunnel captured at each time point. The device specifically includes: a frame sequence acquiring module 810, a template image determining module 820, a matching module 830, and a convergence determining module 840.
A frame sequence acquiring module 810, configured to acquire a frame sequence of a target tunnel, where the frame sequence includes at least one temporal image;
a template image determination module 820, configured to determine a reference image and an image to be detected based on the frame sequence, and determine a template image based on the reference image, where the template image includes at least one target;
the matching module 830 is configured to match the template image with the image to be detected, so as to obtain a matching area between the image to be detected and the template image;
a convergence determining module 840 for determining convergence information of the target tunnel based on the matching region.
In this embodiment, the frame sequence acquiring module acquires a frame sequence of the target tunnel, where the frame sequence includes at least one time image; the template image determining module determines a reference image and an image to be detected based on the frame sequence and determines a template image based on the reference image, where the template image includes at least one target, so that the template image participating in matching has definite features, improving the image-matching precision and hence the precision of the monitored tunnel convergence information; the matching module matches the template image with the image to be detected to obtain the matching region between them; and the convergence determining module determines the convergence information of the target tunnel based on the matching region. The convergence information of the tunnel is thus monitored in real time based on image matching, which suits various monitoring scenarios while speeding up the acquisition of tunnel convergence information.
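The four modules above can be wired as a minimal pipeline sketch; the class and field names here are illustrative, not the patent's API:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class TunnelConvergenceDevice:
    """Sketch of the claimed module layout: the callables stand in for the
    frame-sequence acquiring, template-image determining, matching and
    convergence determining modules (810-840)."""
    acquire_frames: Callable[[], List]      # frame sequence acquiring module
    pick_template: Callable[[List], Tuple]  # -> (reference, to_measure, template)
    match: Callable[[object, object], Tuple]  # matching module
    converge: Callable[[Tuple, Tuple], dict]  # convergence determining module

    def run(self) -> dict:
        frames = self.acquire_frames()
        reference, to_measure, template = self.pick_template(frames)
        ref_region = self.match(reference, template)
        meas_region = self.match(to_measure, template)
        return self.converge(ref_region, meas_region)
```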
On the basis of the above device, optionally, the matching module 830 is specifically configured to match the pixel point information of each region in the image to be detected respectively based on the pixel point information in the template image, determine the similarity, and determine the region with the highest similarity as the matching region.
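A minimal NumPy sketch of this matching step, assuming normalized cross-correlation as the similarity measure (the text does not fix a specific metric) and indexing each candidate region by its top-left corner, as described earlier:

```python
import numpy as np

def best_match(image, template):
    """Slide the template over the image, score each candidate region by
    normalized cross-correlation, and return the top-left corner (x, y) of
    the region with the highest similarity, plus that similarity."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    best, best_pos = -np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = tn * np.sqrt((wz * wz).sum())
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (x, y)
    return best_pos, best
```

The double loop is an exhaustive search for clarity; in practice an FFT-based or library implementation of the same similarity would be used.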
Optionally, the convergence determining module 840 includes a displacement determining unit for determining an actual displacement amount of the target tunnel in the vertical direction and/or the horizontal direction based on the matching region.
Optionally, the convergence determining module 840 further includes a convergence calculating unit, configured to determine a convergence direction and a convergence value of the target tunnel based on actual displacement amounts of the target tunnel in the vertical direction and the horizontal direction.
Optionally, the apparatus further includes a matching coordinate determining module, configured to determine a reference matching coordinate and a matching coordinate to be detected based on a vertex pixel point of the matching region after obtaining the matching region between the image to be detected and the template image. Correspondingly, the displacement determining unit is specifically configured to determine actual displacement amounts of the target tunnel in the vertical direction and/or the horizontal direction respectively based on the reference matching coordinates and the matching coordinates to be measured.
Optionally, the apparatus further includes a sub-pixel fitting module, configured to perform sub-pixel fitting processing on the reference matching coordinate and the matching coordinate to be detected, and update the reference matching coordinate and the matching coordinate to be detected based on a result of the sub-pixel fitting processing.
Optionally, the displacement determining unit includes a horizontal displacement subunit, where the horizontal displacement subunit is configured to determine an actual displacement of the target tunnel in the horizontal direction based on the reference matching coordinate, the matching coordinate to be detected, the pixel size, the yaw angle, the camera focal length, and a linear distance from the camera optical center to the target to be detected.
Optionally, the horizontal displacement subunit is specifically configured to determine a reference actual abscissa and an actual abscissa to be measured of the target tunnel based on the reference matching coordinate, the matching coordinate to be measured, the pixel size, the yaw angle, the camera focal length, and a linear distance from the camera optical center to the target to be measured; and determining the actual displacement of the target tunnel in the horizontal direction based on the reference actual abscissa and the actual abscissa to be measured.
Optionally, the horizontal displacement subunit is configured to, when determining the reference actual abscissa and the actual abscissa to be measured of the target tunnel based on the reference matching coordinate, the matching coordinate to be measured, the pixel size, the yaw angle, the camera focal length, and the linear distance from the camera optical center to the target to be measured, satisfy the following formula:
[Formula image in the original; not reproduced in the text.]

where A represents a reference matching coordinate point or a matching coordinate point to be measured; z_A is the reference actual abscissa of the reference matching coordinate point or the actual abscissa to be measured of the matching coordinate point to be measured; l is the linear distance from the camera optical center to the target to be measured; θ is the yaw angle; f is the camera focal length; the horizontal actual distance is that of the reference matching coordinate or the matching coordinate to be detected; and diag is the straight-line actual distance of the reference matching coordinate or the matching coordinate to be detected. diag and the horizontal actual distance are calculated by

diag = d_i × δ,  horizontal actual distance = x_i × δ,

where d_i is the straight-line pixel distance from the reference matching coordinate to the center of the reference image or from the matching coordinate to be detected to the center of the image to be detected, x_i is the pixel distance in the horizontal direction from the reference matching coordinate to the center of the reference image or from the matching coordinate to be detected to the center of the image to be detected, and δ is the pixel size.
The tunnel convergence monitoring device provided by the embodiment of the invention can execute the tunnel convergence monitoring method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
It should be noted that, the units and modules included in the system are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the embodiment of the present invention.
Example four
Fig. 9 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention. FIG. 9 illustrates a block diagram of an exemplary electronic device 90 suitable for use in implementing embodiments of the present invention. The electronic device 90 shown in fig. 9 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 9, the electronic device 90 is in the form of a general purpose computing device. The components of the electronic device 90 may include, but are not limited to: one or more processors or processing units 901, a system memory 902, and a bus 903 that couples various system components including the system memory 902 and the processing unit 901.
Bus 903 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The electronic device 90 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 90 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 902 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)904 and/or cache memory 905. The electronic device 90 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 906 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 9, and commonly referred to as a "hard drive"). Although not shown in FIG. 9, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 903 by one or more data media interfaces. Memory 902 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 908 having a set (at least one) of program modules 907 may be stored in, for example, memory 902; such program modules include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, each of which, or some combination of which, may include an implementation of a network environment. Program modules 907 generally perform the functions and/or methodologies of the embodiments of the present invention described herein.
The electronic device 90 may also communicate with one or more external devices 909 (e.g., keyboard, pointing device, display 910, etc.), and may also communicate with one or more devices that enable a user to interact with the electronic device 90, and/or with any devices (e.g., network card, modem, etc.) that enable the electronic device 90 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interface 911. Also, the electronic device 90 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 912. As shown, the network adapter 912 communicates with the other modules of the electronic device 90 via the bus 903. It should be appreciated that although not shown in FIG. 9, other hardware and/or software modules may be used in conjunction with the electronic device 90, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 901 executes various functional applications and data processing by running programs stored in the system memory 902, for example, implementing steps of a tunnel convergence monitoring method provided by the embodiment of the present invention, the method including:
acquiring a frame sequence of a target tunnel, wherein the frame sequence comprises at least one time image;
determining a reference image and an image to be measured based on the frame sequence, and determining a template image based on the reference image, wherein the template image comprises at least one target;
matching the template image with the image to be detected to obtain a matching area of the image to be detected and the template image;
and determining convergence information of the target tunnel based on the matching area.
Of course, those skilled in the art can understand that the processor may also implement the technical solution of the tunnel convergence monitoring method provided in any embodiment of the present invention.
EXAMPLE five
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the tunnel convergence monitoring method provided in any embodiment of the present invention, the method comprising:
acquiring a frame sequence of a target tunnel, wherein the frame sequence comprises at least one time image;
determining a reference image and an image to be measured based on the frame sequence, and determining a template image based on the reference image, wherein the template image comprises at least one target;
matching the template image with the image to be detected to obtain a matching area of the image to be detected and the template image;
and determining convergence information of the target tunnel based on the matching area.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of embodiments of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (5)

1. A method for monitoring tunnel convergence is characterized by comprising the following steps:
acquiring a frame sequence of a target tunnel, wherein the frame sequence comprises at least one time image;
determining a reference image and an image to be measured based on the frame sequence, and determining a template image based on the reference image, wherein the template image comprises at least one target;
matching the template image with the image to be detected to obtain a matching area of the image to be detected and the template image;
determining convergence information of the target tunnel based on the matching region;
the determining convergence information of the target tunnel based on the matching region includes:
determining the actual displacement amount of the target tunnel in the vertical direction and/or the horizontal direction based on the matching area;
after obtaining the matching area between the image to be detected and the template image, the method further comprises the following steps:
determining a reference matching coordinate and a matching coordinate to be detected based on the vertex pixel point of the matching area;
correspondingly, the actual displacement of the target tunnel in the vertical direction and/or the horizontal direction is determined based on the matching area, and the method comprises the following steps:
respectively determining the actual displacement of the target tunnel in the vertical direction and/or the horizontal direction based on the reference matching coordinate and the matching coordinate to be detected;
the determining the actual displacement of the target tunnel in the horizontal direction based on the reference matching coordinate and the matching coordinate to be detected includes:
determining the actual displacement of the target tunnel in the horizontal direction based on the reference matching coordinate, the matching coordinate to be detected, the pixel size, the yaw angle, the camera focal length and the linear distance from the camera optical center to the target to be detected;
the determining the actual displacement of the target tunnel in the horizontal direction based on the reference matching coordinate, the matching coordinate to be detected, the pixel size, the yaw angle, the camera focal length and the linear distance from the camera optical center to the target to be detected comprises:
determining a reference actual abscissa and an actual abscissa to be measured of the target tunnel based on the reference matching coordinate and the matching coordinate to be measured, the pixel size, the yaw angle, the camera focal length and the linear distance from the camera optical center to the target to be measured;
determining the actual displacement of the target tunnel in the horizontal direction based on the reference actual abscissa and the actual abscissa to be measured;
the reference actual abscissa and the actual abscissa to be measured of the target tunnel are determined based on the reference matching coordinate, the matching coordinate to be measured, the pixel size, the yaw angle, the camera focal length and the linear distance from the camera optical center to the target to be measured, and satisfy the following formula:

[formula image FDA0003784750290000021]

wherein A represents the reference matching coordinate point or the matching coordinate point to be measured, z_A is the reference actual abscissa of the reference matching coordinate point or the actual abscissa to be measured of the matching coordinate point to be measured, l is the linear distance from the camera optical center to the target to be measured, θ is the yaw angle, f is the camera focal length, [formula image FDA0003784750290000022] is the horizontal actual distance of the reference matching coordinate or the matching coordinate to be measured, and diag is the straight-line actual distance of the reference matching coordinate or the matching coordinate to be measured, wherein diag and [formula image FDA0003784750290000023] are calculated according to the following formulas:

diag = d_i × δ,

[formula image FDA0003784750290000024]

wherein d_i is the straight-line pixel distance from the reference matching coordinate to the center of the reference image or from the matching coordinate to be measured to the center of the image to be measured, [formula image FDA0003784750290000025] is the horizontal pixel distance from the reference matching coordinate to the center of the reference image or from the matching coordinate to be measured to the center of the image to be measured, and δ is the pixel size.
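The central projection formula of this claim survives only as formula-image references ("FDA…") in this extraction and cannot be recovered exactly. From the variable definitions that do survive, the two distance conversions can be restated; the symbols d̄ and d̄_i below are stand-ins for the symbols lost in the images, and the final z_A relation is only a geometrically plausible guess (angular offset from the optical axis plus yaw, projected at range l), not the patent's actual formula:

```latex
\mathrm{diag} = d_i\,\delta, \qquad
\bar{d} = \bar{d}_i\,\delta, \qquad
z_A \stackrel{?}{=} l \,\sin\!\Bigl(\theta + \arctan\tfrac{\bar{d}}{f}\Bigr)
```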
2. The method according to claim 1, wherein the matching the template image and the image to be detected to obtain a matching area of the image to be detected and the template image comprises:
and matching the information of each pixel point in the template image with the information of the pixel point in each area in the image to be detected respectively, determining the similarity, and determining the area with the highest similarity as a matching area.
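The pixel-wise similarity search of claim 2 can be sketched as an exhaustive template match. This is a minimal illustration, not the patent's implementation; it uses negated sum of squared differences as the similarity score, which is only one possible choice ("highest similarity" is all the claim specifies):

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image and return the top-left corner
    (row, col) of the region most similar to the template.

    Similarity is the negated sum of squared differences, so the
    largest score marks the best match.
    """
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            region = image[r:r + th, c:c + tw].astype(float)
            score = -np.sum((region - template.astype(float)) ** 2)
            if best_score is None or score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```

In practice a normalized cross-correlation matcher (e.g. OpenCV's `matchTemplate`) would typically replace this brute-force loop for speed and robustness to lighting changes.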
3. The method of claim 1, further comprising:
and determining the convergence direction and the convergence value of the target tunnel based on the actual displacement of the target tunnel in the vertical direction and the horizontal direction.
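Claim 3 does not spell out how the two displacement components are combined into a convergence direction and value; a simple resultant-vector reading, offered only as an assumption, would be:

```python
import math

def convergence(dx, dy):
    """Combine horizontal (dx) and vertical (dy) actual displacements
    into a (convergence value, convergence direction) pair.

    The value is the resultant magnitude; the direction is the angle
    from the horizontal axis in degrees, counter-clockwise.
    """
    value = math.hypot(dx, dy)
    direction = math.degrees(math.atan2(dy, dx))
    return value, direction
```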
4. The method according to claim 1, wherein after determining the reference matching coordinates and the matching coordinates to be measured, further comprising:
and performing sub-pixel fitting processing on the reference matching coordinate and the matching coordinate to be detected, and updating the reference matching coordinate and the matching coordinate to be detected based on the result of the sub-pixel fitting processing.
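Claim 4 does not say which sub-pixel fitting method is used; three-point parabolic interpolation of the similarity scores around the integer peak is a common choice and is sketched here under that assumption:

```python
def subpixel_offset(s_minus, s_zero, s_plus):
    """One-dimensional parabolic (three-point) sub-pixel refinement.

    Given the similarity score at the best integer position (s_zero)
    and at its two neighbours, fit a parabola through the three points
    and return the fractional offset of its peak, in (-0.5, 0.5).
    The updated matching coordinate is the integer coordinate plus
    this offset (applied per axis).
    """
    denom = s_minus - 2.0 * s_zero + s_plus
    if denom == 0.0:
        return 0.0  # flat neighbourhood: keep the integer position
    return 0.5 * (s_minus - s_plus) / denom
```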
5. A tunnel convergence monitoring device, comprising:
the device comprises a frame sequence acquisition module, configured to acquire a frame sequence of a target tunnel, wherein the frame sequence comprises images of at least one time instant;
the template image determining module is used for determining a reference image and an image to be detected based on the frame sequence and determining a template image based on the reference image, wherein the template image comprises at least one target;
the matching module is used for matching the template image with the image to be detected to obtain a matching area of the image to be detected and the template image;
a convergence determination module for determining convergence information of the target tunnel based on the matching region;
the convergence determining module comprises a displacement determining unit, wherein the displacement determining unit is configured to determine the actual displacement of the target tunnel in the vertical direction and/or the horizontal direction based on the matching area;
the tunnel convergence monitoring device further comprises a matching coordinate determination module, configured to determine a reference matching coordinate and a matching coordinate to be detected based on a vertex pixel point of the matching area after the matching area of the image to be detected and the template image is obtained;
correspondingly, the displacement determining unit is specifically configured to determine actual displacement amounts of the target tunnel in the vertical direction and/or the horizontal direction respectively based on the reference matching coordinate and the matching coordinate to be detected;
the displacement determining unit comprises a horizontal displacement subunit, wherein the horizontal displacement subunit is used for determining the actual displacement of the target tunnel in the horizontal direction based on the reference matching coordinate, the matching coordinate to be detected, the pixel size, the yaw angle, the camera focal length and the linear distance from the camera optical center to the target to be detected;
the horizontal displacement subunit is specifically used for determining a reference actual abscissa and an actual abscissa to be measured of the target tunnel based on the reference matching coordinate, the matching coordinate to be measured, the pixel size, the yaw angle, the camera focal length and the linear distance from the camera optical center to the target to be measured;
determining the actual displacement of the target tunnel in the horizontal direction based on the reference actual abscissa and the actual abscissa to be measured;
the horizontal displacement subunit is configured to determine the reference actual abscissa and the actual abscissa to be measured of the target tunnel based on the reference matching coordinate, the matching coordinate to be measured, the pixel size, the yaw angle, the camera focal length and the linear distance from the camera optical center to the target to be measured, satisfying the following formula:

[formula image FDA0003784750290000041]

wherein A represents the reference matching coordinate point or the matching coordinate point to be measured, z_A is the reference actual abscissa of the reference matching coordinate point or the actual abscissa to be measured of the matching coordinate point to be measured, l is the linear distance from the camera optical center to the target to be measured, θ is the yaw angle, f is the camera focal length, [formula image FDA0003784750290000042] is the horizontal actual distance of the reference matching coordinate or the matching coordinate to be measured, and diag is the straight-line actual distance of the reference matching coordinate or the matching coordinate to be measured, wherein diag and [formula image FDA0003784750290000043] are calculated according to the following formulas:

diag = d_i × δ,

[formula image FDA0003784750290000044]

wherein d_i is the straight-line pixel distance from the reference matching coordinate to the center of the reference image or from the matching coordinate to be measured to the center of the image to be measured, [formula image FDA0003784750290000045] is the horizontal pixel distance from the reference matching coordinate to the center of the reference image or from the matching coordinate to be measured to the center of the image to be measured, and δ is the pixel size.
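Once the reference and to-be-measured matching coordinates are known, the displacement computation reduces to scaling a pixel shift into world units. The sketch below deliberately assumes a fronto-parallel view (yaw angle = 0), so the full yaw correction of the claims is dropped: the pixel shift is converted to a sensor distance (× pixel size δ) and scaled by the ratio of object distance to focal length (l / f):

```python
def horizontal_displacement(x_ref_px, x_det_px, delta, l, f):
    """Simplified horizontal displacement between the reference and
    to-be-measured matching coordinates (yaw = 0 assumed).

    delta, l and f must share one length unit (e.g. mm); the result
    is in that same unit.
    """
    return (x_det_px - x_ref_px) * delta * l / f
```

For example, a 4-pixel shift with δ = 0.005 mm, l = 2000 mm and f = 10 mm corresponds to a 4 mm displacement.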
CN202011360247.6A 2020-11-27 2020-11-27 Tunnel convergence monitoring method and device Active CN112565690B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011360247.6A CN112565690B (en) 2020-11-27 2020-11-27 Tunnel convergence monitoring method and device


Publications (2)

Publication Number Publication Date
CN112565690A CN112565690A (en) 2021-03-26
CN112565690B true CN112565690B (en) 2022-09-30

Family

ID=75046428


Country Status (1)

Country Link
CN (1) CN112565690B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115200540B (en) * 2022-07-08 2023-07-28 安徽省皖北煤电集团有限责任公司 Mine roadway deformation monitoring and early warning method and system

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN102589523A (en) * 2011-01-11 2012-07-18 香港理工大学深圳研究院 Method and equipments for remotely monitoring displacement of building
CN104809720B (en) * 2015-04-08 2017-07-14 西北工业大学 The two camera target association methods based on small intersection visual field
CN106296726A (en) * 2016-07-22 2017-01-04 中国人民解放军空军预警学院 A kind of extraterrestrial target detecting and tracking method in space-based optical series image
CN106920259B (en) * 2017-02-28 2019-12-06 武汉工程大学 positioning method and system
CN109631829B (en) * 2018-12-17 2022-05-27 南京理工大学 Self-adaptive fast-matching binocular ranging method
CN111091567B (en) * 2020-03-23 2020-06-23 南京景三医疗科技有限公司 Medical image registration method, medical device and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant