CN115375762A - Three-dimensional reconstruction method for power line based on trinocular vision - Google Patents


Info

Publication number: CN115375762A
Authority: CN (China)
Prior art keywords: image, camera, trinocular, power line
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202210937648.6A
Other languages: Chinese (zh)
Inventors: 成云朋, 张庆富, 王鑫, 冯兴明, 王永, 张济韬, 李建华, 张学波
Current Assignee: Yancheng Power Supply Co of State Grid Jiangsu Electric Power Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Original Assignee: Yancheng Power Supply Co of State Grid Jiangsu Electric Power Co Ltd
Application filed by Yancheng Power Supply Co of State Grid Jiangsu Electric Power Co Ltd
Priority: CN202210937648.6A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/75 — Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 5/20 — Image enhancement or restoration using local operators
    • G06T 5/40 — Image enhancement or restoration using histogram techniques
    • G06T 5/80 — Geometric correction
    • G06T 7/13 — Edge detection
    • G06T 7/85 — Stereo camera calibration
    • G06T 2207/30244 — Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a three-dimensional reconstruction method for power lines based on trinocular vision, comprising the following steps: (1) shoot images of the target power transmission line with a trinocular camera, obtaining the trinocular images and the roll angle at the moment of capture; (2) apply a primary correction to the image pairs with the Bouguet algorithm; (3) complete a secondary correction by translating the top and right cameras, yielding the corrected trinocular images; (4) determine the matched pair of images for the power lines from the camera attitude; (5) extract and fit the power lines in the matched pair; (6) match the power lines in the pair using the epipolar constraint and reconstruct the three-dimensional vector of the power lines in the left image. The method achieves accurate positioning of the transmission line, reduces the workload and danger faced by inspection personnel, and improves working efficiency.

Description

Three-dimensional reconstruction method for power line based on trinocular vision
Technical Field
The invention belongs to the technical field of computer vision and power transmission line inspection, and particularly relates to a three-dimensional reconstruction method for a power line based on trinocular vision.
Background
The transmission line is an indispensable component of the power grid: it carries the transmission of electric energy and plays an important role in the grid's safe and reliable operation. Guaranteeing the normal operation of the transmission line system is therefore an important basis for the stable and safe operation of every industry. However, with the continuous development of modern society, transmission lines cover an ever wider area and ever more complex geographical environments, and vegetation, buildings and other objects of varying height on the ground can pose potential threats to the lines. The power corridor therefore needs to be inspected regularly to ensure that the power equipment remains in normal operating condition.
At present, line operation and maintenance are carried out mainly by hand; the labour intensity is high, and maintenance of high-voltage power corridors in the field is extremely dangerous. Such an inefficient inspection mode cannot meet the demands of modern power grid construction and development, so an intelligent, efficient and safe transmission line inspection technology is essential. With the maturity of camera and image-processing technology, and the wide application of unmanned aerial vehicles throughout power grid operations, the two can be combined to automatically detect and report hidden-danger areas beneath power lines, realising intelligent maintenance. Research on two-dimensional image recognition and three-dimensional reconstruction of power lines therefore has important social significance and economic benefit.
Disclosure of Invention
To solve the above problems, the invention provides a trinocular-vision-based three-dimensional power line reconstruction method that achieves accurate positioning of the transmission line, reduces the workload and danger faced by inspection personnel, and improves working efficiency.
The invention specifically relates to a three-dimensional reconstruction method for power lines based on trinocular vision, comprising the following steps:
Step (1): shoot an image of the target power transmission line with the trinocular camera, obtaining the trinocular images and the roll angle θ_z at the moment of capture;
Step (2): apply a primary correction to {I_left, I_right} and {I_left, I_up} respectively with the Bouguet algorithm;
Step (3): complete the secondary correction of the top and right cameras by translating them, obtaining the corrected trinocular images {I_1, I_2, I_3}, in which the epipolar lines of I_1, I_2 are parallel to the image horizontal axis, the epipolar lines of I_1, I_3 are parallel to the image vertical axis, and the horizontal disparity of I_1, I_2 equals the vertical disparity of I_1, I_3;
Step (4): determine the matched pair of power-line images Match_l from the camera attitude θ_z;
Step (5): extract and fit the power lines in Match_l;
Step (6): match the power lines in Match_l using the epipolar constraint and reconstruct the three-dimensional vector of the power lines in the left image.
Compared with the prior art, the beneficial effects are:
(1) The trinocular camera with a vertical double-baseline structure solves the problem of low power-line matching accuracy under different shooting angles by adding an auxiliary camera. The trinocular images provide disparity information in both the horizontal and vertical directions, so the system can adaptively select the disparity in the suitable direction for stereo matching, greatly improving the accuracy and stability of power-line matching under different camera attitudes;
(2) The invention provides a cooperative correction of the trinocular images using an improved Bouguet algorithm, after which the left eye image is unchanged, the epipolar lines of the left and right eye images are parallel to the image horizontal axis, and the epipolar lines of the left and upper eye images are parallel to the image vertical axis;
(3) The method optimises the primary correction result using trinocular-image feature points obtained with the SURF algorithm, quickly yielding a high-precision trinocular image correction and improving the accuracy of the subsequent power-line stereo matching. It suits a variety of aerial shooting attitudes and can accurately correct trinocular images captured with low calibration precision.
Drawings
FIG. 1 is a flow chart of an algorithm of a three-dimensional reconstruction method of a power line based on trinocular vision;
FIG. 2 is a schematic view of a model of a trinocular camera according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of trinocular image matching according to an embodiment of the present invention.
Detailed Description
The following describes in detail a specific embodiment of a power line three-dimensional reconstruction method based on trinocular vision with reference to the accompanying drawings.
As shown in fig. 1, the specific operation flow of the power line three-dimensional reconstruction method of the present invention is as follows:
1. Fig. 2 is a schematic diagram of the trinocular camera model, which consists of a vertical double-baseline trinocular camera module made up of three cameras of the same specification and a level gauge; the level gauge is mounted on the pan-tilt and measures the roll angle of the cameras in real time during shooting. First, shoot the target transmission line with the trinocular camera and pass the image information {I_left, I_right, I_up} and the roll angle θ_z at capture to the information-processing module;
2. Based on the calibration parameters of the trinocular camera, correct {I_left, I_right} and {I_left, I_up} respectively with the improved Bouguet algorithm, obtaining the corrected trinocular images {I'_left, I'_right, I'_up} with I'_left = I_left; after correction the epipolar lines of I'_left, I'_right are parallel to the image horizontal axis and those of I'_left, I'_up are parallel to the image vertical axis. The specific steps are as follows:
21) Obtain the calibration parameters of the trinocular camera by Zhang's calibration method, and use the Bouguet algorithm to construct the rotation matrices R_l, R_r of I_left, I_right for a primary horizontal correction of {I_left, I_right}. The specific steps are as follows:
a. First split the rotation matrix R_lr between {I_left, I_right} into composite half-rotation matrices r_l, r_r for the left and right cameras, where
r_l = R_lr^(1/2),  r_r = R_lr^(-1/2)
b. Construct from the translation vector T_lr between {I_left, I_right} a rotation matrix R_rect that makes the baseline parallel to the imaging plane:
R_rect = [e_1 e_2 e_3]^T    (1)
where e_1 = T_lr/||T_lr|| is the epipole direction, aligned with the translation vector T_lr; e_2 = [−T_y, T_x, 0]^T / sqrt(T_x^2 + T_y^2), with T_x, T_y, T_z the translation components along the x, y and z axes, is a vector in the image-plane direction; and e_3 = e_1 × e_2 is the vector perpendicular to the plane of e_1 and e_2.
c. Obtain the overall rotation matrices R_l, R_r of the left and right cameras from Eq. (2):
R_l = R_rect · r_l,  R_r = R_rect · r_r    (2)
Multiplying the left and right camera coordinate systems O_l-xyz, O_r-xyz by their respective overall rotation matrices R_l, R_r makes the principal optical axes of the two cameras parallel and the image planes parallel to the baseline; the rotated left and right camera coordinate systems are O'_l-xyz, O'_r-xyz.
22) Rotate O'_l-xyz and O'_r-xyz simultaneously about their respective optical centres by R_l^(-1), obtaining new coordinate systems O''_l-xyz, O''_r-xyz, so that O''_l-xyz coincides with O_l-xyz. After the rotation a row-aligned image pair {I'_left, I'_right} is obtained, with I'_left = I_left.
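The half-rotation split and baseline alignment above can be sketched in Python. This is an illustrative sketch, not the patent's implementation: the function and variable names are mine, and the rotation-vector halving is the conventional way the Bouguet split r_l = R_lr^(1/2), r_r = R_lr^(-1/2) is realised.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def bouguet_rectify_rotations(R_lr, T_lr):
    """Sketch of the primary correction: split R_lr into half-rotations,
    then build R_rect so the baseline becomes parallel to the image rows."""
    # Half-rotations via rotation-vector halving: r_l = R_lr^(1/2), r_r = R_lr^(-1/2)
    rvec = Rotation.from_matrix(R_lr).as_rotvec()
    r_l = Rotation.from_rotvec(rvec / 2.0).as_matrix()
    r_r = Rotation.from_rotvec(-rvec / 2.0).as_matrix()
    # R_rect = [e1 e2 e3]^T built from the translation (baseline) vector
    T = np.asarray(T_lr, dtype=float)
    e1 = T / np.linalg.norm(T)                                # epipole direction
    e2 = np.array([-T[1], T[0], 0.0]) / np.hypot(T[0], T[1])  # in image plane, orthogonal to e1
    e3 = np.cross(e1, e2)                                     # perpendicular to e1 and e2
    R_rect = np.vstack([e1, e2, e3])
    # Overall rotations applied to each camera, Eq. (2)
    return R_rect @ r_l, R_rect @ r_r
```

For an already-aligned pair (R_lr = I, baseline along x) both returned rotations reduce to the identity, i.e. the images are left untouched.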
23) Repeat step 21) for {I_left, I_up}, obtaining the overall rotation matrices R_l2, R_u of the left eye and upper eye cameras; the corrected coordinate systems of the left and top cameras are O'_l2-xyz, O'_u-xyz;
24) Repeat step 22): rotate O'_l2-xyz and O'_u-xyz simultaneously about their respective optical centres by R_l2^(-1), obtaining new coordinate systems O''_l2-xyz, O''_u-xyz, so that O''_l2-xyz coincides with O_l-xyz. After the rotation a column-aligned image pair {I'_left, I'_up} is obtained, with I'_left = I_left.
3. After the primary correction the trinocular images {I_left, I'_right, I'_up} are obtained. Let the principal point coordinates of the top camera be (u_up, v_up) and those of the right camera be (u_right, v_right). Adding offsets (Δu, Δv) to the respective principal point coordinates, i.e. translating the top camera and the right camera, completes the secondary correction of the two cameras. After correction the trinocular images {I_1, I_2, I_3} are obtained, in which the epipolar lines of I_1, I_2 are parallel to the image horizontal axis, the epipolar lines of I_1, I_3 are parallel to the image vertical axis, and the horizontal disparity of I_1, I_2 equals the vertical disparity of I_1, I_3. The specific steps are as follows:
31) Acquire the feature-point coordinates of the trinocular images I_left, I'_right, I'_up with the SURF algorithm, and define the feature points of the trinocular images as:
P_os1 = {(x_i, y_i) | i = 1, …, m_p},  P_os2 = {(x_j, y_j) | j = 1, …, n_p},  P_os3 = {(x_q, y_q) | q = 1, …, z_p}
where P_os1, P_os2 and P_os3 are the feature-point parameters of I_left, I'_right and I'_up respectively; m_p, n_p and z_p are the total numbers of feature points in I_left, I'_right and I'_up; and (x_i, y_i), (x_j, y_j) and (x_q, y_q) are the coordinates of the i-th, j-th and q-th feature points of the three images;
32) Compute the Euclidean distances between the feature-point parameters P_os1 of I_left and P_os2 of I'_right, take the points with minimum Euclidean distance as coarse matching points, sort the coarse matching points by ascending Euclidean distance, delete outliers, and keep the first k_p matching points, defined as Match_lr. In the same way, compute the Euclidean distances between P_os1 of I_left and P_os3 of I'_up, sort them, and keep the first k_p matching points, defined as Match_lu. According to the left eye feature points common to Match_lr and Match_lu, redefine the matching points of the trinocular images as {(p_i^left, p_i^right, p_i^up) | i = 1, …, n}, where n is the number of matching points;
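The coarse matching of step 32), nearest neighbours by Euclidean distance with the k_p smallest-distance pairs kept, can be sketched as follows. This is a simplified illustration with hypothetical names (real SURF descriptors are 64- or 128-dimensional vectors, and the outlier-deletion step is omitted):

```python
import numpy as np

def coarse_match(desc_a, desc_b, k):
    """Nearest-neighbour coarse matching by Euclidean distance.

    desc_a: (m, d) descriptors from the left image,
    desc_b: (n, d) descriptors from the other image.
    Returns up to k (i, j) index pairs sorted by ascending distance."""
    a = np.asarray(desc_a, float)
    b = np.asarray(desc_b, float)
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # (m, n) distance table
    nn = dists.argmin(axis=1)                     # best partner for each left feature
    best = dists[np.arange(len(a)), nn]
    keep = np.argsort(best)[:k]                   # ascending distance, top k kept
    return [(int(i), int(nn[i])) for i in keep]
```

Running it on the left/right and left/up descriptor sets gives the two candidate sets (here called Match_lr and Match_lu), whose common left-image indices form the n trinocular matching points.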
33) Errors introduced by calibration may leave the epipolar lines of the left and right eye images after the primary correction not perfectly parallel to the horizontal axis. Since after correction the vertical coordinates of matched points in I_left and I'_right should be equal, the vertical offset of the right image is taken as the mean vertical disparity of the matched points:
Δv_right = (1/n) Σ_{i=1}^{n} (y_i^left − y_i^right)
With (u_right, v_right + Δv_right) as the new principal point coordinates, repeat steps 23)–24) to correct the right camera again, obtaining the corrected right eye image I_2;
34) For the lateral offset of the top camera, since after correction the horizontal coordinates of matched points in I_left and I'_up should be equal:
Δu_up = (1/n) Σ_{i=1}^{n} (x_i^left − x_i^up)
For the longitudinal offset, since after correction the vertical disparity between I_left and I'_up should equal the horizontal disparity between I_left and I'_right:
Δv_up = (1/n) Σ_{i=1}^{n} [(y_i^left − y_i^up) − (x_i^left − x_i^right)]
With (u_up + Δu_up, v_up + Δv_up) as the new principal point coordinates, repeat steps 23)–24) to correct the top camera again, obtaining the corrected upper eye image I_3. The final corrected trinocular images are {I_1, I_2, I_3}, where I_1 is the corrected left eye image, I_2 the corrected right eye image and I_3 the corrected upper eye image.
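The three mean-disparity offsets of steps 33) and 34) can be computed directly from the matched feature coordinates. The sketch below is illustrative only; the array and symbol names are assumptions, not the patent's notation:

```python
import numpy as np

def secondary_offsets(pts_left, pts_right, pts_up):
    """Principal-point offsets for the secondary correction.

    pts_*: (n, 2) arrays of matched (x, y) feature coordinates after the
    primary correction. Returns (dv_right, du_up, dv_up)."""
    L = np.asarray(pts_left, float)
    R = np.asarray(pts_right, float)
    U = np.asarray(pts_up, float)
    # Step 33): rows of left/right should align -> mean vertical disparity
    dv_right = np.mean(L[:, 1] - R[:, 1])
    # Step 34): columns of left/up should align -> mean horizontal disparity
    du_up = np.mean(L[:, 0] - U[:, 0])
    # Step 34): vertical disparity to the upper image should equal the
    # horizontal disparity to the right image
    dv_up = np.mean((L[:, 1] - U[:, 1]) - (L[:, 0] - R[:, 0]))
    return dv_right, du_up, dv_up
```

Each offset is then added to the corresponding principal-point coordinate before the rectification of steps 23)–24) is rerun.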
4. Processing the trinocular images costs twice as much as processing a binocular image, so to reduce computation time, whether the right eye or the upper eye image is processed during power-line extraction depends on the camera shooting angle θ_z; the matched pair of power-line images Match_l is determined from θ_z as follows. If θ_z indicates that the power lines run nearly horizontally in the image, take the upper eye image and the left eye image as the matched pair of lines, i.e. Match_l = {I_1, I_3}; if it indicates that they run obliquely, take the right eye image and the left eye image as the matched pair, i.e. Match_l = {I_1, I_2}.
5. Extract and fit the power lines in Match_l; taking one image of the matched pair as an example, the specific steps of power-line extraction are as follows:
51) Extract all the power lines in Match_l based on the power-line features of the aerial image. The specific steps are as follows:
a. Convert the image to a grey-scale image I_gray and apply Gaussian filtering, then histogram equalisation, to obtain the preprocessed image I_gray2;
b. Obtain the edge map I_edge with the edge-drawing (ED) edge detection algorithm, in which background such as sky has grey value 0 and the power lines have grey value 255;
c. On the basis of I_edge, convert the single-pixel-wide line features from raster form into a two-dimensional vector V_1 with a vector tracking algorithm and delete short features less than 20 pixels long; V_1 can then be expressed as in Eq. (5), with every nonzero point (x_j, y_j) of I_edge assigned to a distinct vector group v_i:
V_1 = {v_i | v_i = {(x_j, y_j)}, i = 1, 2, …}    (5)
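Steps a–b above (preprocess, then a binary edge map with background 0 and line pixels 255) can be sketched with a Sobel gradient threshold standing in for the ED edge-drawing detector; this is an illustrative simplification, not the detector the text names, and the grey-scale/Gaussian/histogram preprocessing is omitted:

```python
import numpy as np

def sobel_edge_map(gray, thresh=100.0):
    """Binary edge map: background 0, edge pixels 255 (Sobel gradient
    magnitude threshold as a stand-in for the ED edge detector)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    g = np.pad(np.asarray(gray, float), 1, mode="edge")
    h, w = np.asarray(gray).shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):                 # correlate with the two Sobel kernels
        for j in range(3):
            win = g[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    mag = np.hypot(gx, gy)
    return np.where(mag > thresh, 255, 0).astype(np.uint8)
```

A thin bright line against sky produces two-sided gradient responses, which the vector tracking of step c then thins and groups into the vector sets v_i.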
Then remove from V_1 the over-bent features whose camber, computed according to Eq. (6), is greater than 0.27, where the camber of a vector group v_i measures how strongly the traced feature deviates from a straight line.
d. Aggregate the line features by the line-segment end-point projection method: if two segments are collinear, connect them into one two-dimensional vector. Iterating this over the v_i yields the single-pixel-wide vector V_2.
52) Suppose the fitting equation of a segment v_i in V_2 is y = Σ_{k=0}^{n} a_k x^k. Fit it by the least-squares method to obtain the fitting parameters a_k (k = 0, 1, …, n). Sorting the power lines by a_0 from small to large, all the power lines in the image can be written as L_i, where i denotes the sorted sequence number;
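Step 52) is ordinary polynomial least squares followed by a sort on the constant term a_0. A sketch (the polynomial degree and all names are assumptions for illustration):

```python
import numpy as np

def fit_and_sort_lines(lines, degree=2):
    """Fit y = a0 + a1*x + ... + an*x^n to each traced line by least squares
    and sort the lines by a0 ascending (the ordering key used in the text)."""
    fitted = []
    for pts in lines:
        pts = np.asarray(pts, float)
        # np.polyfit returns the highest-degree coefficient first; reverse
        # so coeffs[0] is the constant term a0.
        coeffs = np.polyfit(pts[:, 0], pts[:, 1], degree)[::-1]
        fitted.append((coeffs, pts))
    fitted.sort(key=lambda item: item[0][0])
    return fitted
```

Sorting both images' line sets by a_0 is what lets lines with the same sorted index be treated as homonymous in step 53).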
53) Repeat steps 51)–52) to extract and fit the power lines in the other image of Match_l, obtaining L_i^1 (i = 1, …, N_1) and L_j^2 (j = 1, …, N_2), where N_1 and N_2 are the numbers of power lines in the two images; the power lines with the same index i = j are homonymous (same-name) power lines;
6. Match the power lines in Match_l using the epipolar constraint and reconstruct the three-dimensional vector of the power lines of the left image. The specific steps are:
61) As shown in Fig. 3(a), when Match_l = {I_1, I_2}, the power lines run obliquely in the image while the epipolar lines are parallel to the image horizontal axis, so the intersection of an epipolar line with a power line is unique. Compute the intersections of each homonymous epipolar line with the homonymous power lines to obtain the homonymous image-point pairs (p_i^1, p_i^2) on the power line; the disparity value of each pair is d_i = x_i^1 − x_i^2;
62) As shown in Fig. 3(b), when Match_l = {I_1, I_3}, the power lines run horizontally in the image while the epipolar lines are parallel to the image vertical axis, so the intersection of an epipolar line with a power line is again unique. Compute the intersections of each homonymous epipolar line with the homonymous power lines to obtain the homonymous image-point pairs (p_i^1, p_i^3); the disparity value of each pair is d_i = y_i^1 − y_i^3.
The three-dimensional vector V_3 of the power line is then obtained from the disparity information. For each image point (x, y) with disparity d, Eq. (8) gives the camera coordinates
[X, Y, Z]^T = (B/d) · [x − c_x, y − c_y, f]^T    (8)
where f is the focal length, B the baseline length of the matched pair, and (c_x, c_y) the principal point of the left camera.
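The final back-projection from disparity to camera coordinates follows the standard rectified-stereo relations; the sketch below is an assumed concrete form (symbols f, B, c_x, c_y as defined above), not necessarily the exact expression of Eq. (8) in the original:

```python
import numpy as np

def disparity_to_3d(points, disparities, f, B, cx, cy):
    """Rectified-stereo triangulation: depth Z = f*B/d, then back-project
    the left-image pixel (x, y) through the left-camera intrinsics."""
    p = np.asarray(points, float)
    d = np.asarray(disparities, float)
    Z = f * B / d                       # depth from disparity
    X = (p[:, 0] - cx) * Z / f          # lateral position
    Y = (p[:, 1] - cy) * Z / f          # vertical position
    return np.column_stack([X, Y, Z])
```

Applying this to every homonymous image-point pair along a power line yields the sampled three-dimensional vector V_3 of that line.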
finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present invention and not for limiting the same. It will be understood by those skilled in the art that various modifications and equivalents may be made to the embodiments of the invention as described herein, and such modifications and variations are intended to be within the scope of the claims appended hereto.

Claims (7)

1. A three-dimensional reconstruction method for power lines based on trinocular vision, characterised by comprising the following steps:
Step (1): shoot an image of the target power transmission line with the trinocular camera, obtaining the trinocular images and the roll angle θ_z at the moment of capture;
Step (2): apply a primary correction to {I_left, I_right} and {I_left, I_up} respectively with the Bouguet algorithm;
Step (3): complete the secondary correction of the top and right cameras by translating them, obtaining the corrected trinocular images {I_1, I_2, I_3}, in which the epipolar lines of I_1, I_2 are parallel to the image horizontal axis, the epipolar lines of I_1, I_3 are parallel to the image vertical axis, and the horizontal disparity of I_1, I_2 equals the vertical disparity of I_1, I_3;
Step (4): determine the matched pair of power-line images Match_l from the camera attitude θ_z;
Step (5): extract and fit the power lines in Match_l;
Step (6): match the power lines in Match_l using the epipolar constraint and reconstruct the three-dimensional vector of the power lines in the left image.
2. The method as claimed in claim 1, characterised in that the trinocular camera module has a vertical double-baseline structure composed of three cameras with the same parameters, and a level gauge is arranged on the pan-tilt for measuring the roll angle of the cameras in real time during shooting.
3. The method according to claim 1, characterised in that step (2) specifically comprises the following steps:
21) obtain the calibration parameters of the trinocular camera by Zhang's calibration method, and use the Bouguet algorithm to construct the rotation matrices R_l, R_r of I_left, I_right for a primary horizontal correction of {I_left, I_right};
22) rotate O'_l-xyz and O'_r-xyz simultaneously about their respective optical centres by R_l^(-1), obtaining new coordinate systems O''_l-xyz, O''_r-xyz, so that O''_l-xyz coincides with O_l-xyz; after the rotation a row-aligned image pair {I'_left, I'_right} is obtained, with I'_left = I_left;
23) repeat step 21) for {I_left, I_up}, obtaining the overall rotation matrices R_l2, R_u of the left eye and upper eye cameras; the corrected coordinate systems of the left and top cameras are O'_l2-xyz, O'_u-xyz;
24) repeat step 22): rotate O'_l2-xyz and O'_u-xyz simultaneously about their respective optical centres by R_l2^(-1), obtaining new coordinate systems O''_l2-xyz, O''_u-xyz, so that O''_l2-xyz coincides with O_l-xyz; after the rotation a column-aligned image pair {I'_left, I'_up} is obtained, with I'_left = I_left.
4. The method according to claim 3, characterised in that step 21) specifically comprises the following steps:
a. first split the rotation matrix R_lr between {I_left, I_right} into composite half-rotation matrices r_l, r_r for the left and right cameras, where r_l = R_lr^(1/2) and r_r = R_lr^(-1/2);
b. construct from the translation vector T_lr between {I_left, I_right} a rotation matrix R_rect that makes the baseline parallel to the imaging plane:
R_rect = [e_1 e_2 e_3]^T
where e_1 = T_lr/||T_lr|| is the epipole direction, aligned with the translation vector T_lr; e_2 = [−T_y, T_x, 0]^T / sqrt(T_x^2 + T_y^2), with T_x, T_y, T_z the translation components along the x, y and z axes, is a vector in the image-plane direction; and e_3 = e_1 × e_2 is the vector perpendicular to the plane of e_1 and e_2;
c. obtain the overall rotation matrices of the left and right cameras as R_l = R_rect · r_l and R_r = R_rect · r_r; multiplying the left and right camera coordinate systems O_l-xyz, O_r-xyz by R_l, R_r makes the principal optical axes of the two cameras parallel and the image planes parallel to the baseline; the rotated left and right camera coordinate systems are O'_l-xyz, O'_r-xyz.
5. The method according to claim 1, characterised in that step (3) comprises the following steps:
31) acquire the feature-point coordinates of the trinocular images I_left, I'_right, I'_up with the SURF algorithm, and define the feature points of the trinocular images as P_os1 = {(x_i, y_i) | i = 1, …, m_p}, P_os2 = {(x_j, y_j) | j = 1, …, n_p} and P_os3 = {(x_q, y_q) | q = 1, …, z_p}, where P_os1, P_os2 and P_os3 are the feature-point parameters of I_left, I'_right and I'_up respectively, m_p, n_p and z_p are the total numbers of feature points in I_left, I'_right and I'_up, and (x_i, y_i), (x_j, y_j), (x_q, y_q) are the coordinates of the i-th, j-th and q-th feature points of the three images;
32) compute the Euclidean distances between the feature-point parameters P_os1 of I_left and P_os2 of I'_right, take the points with minimum Euclidean distance as coarse matching points, sort the coarse matching points by ascending Euclidean distance, delete outliers, and keep the first k_p matching points, defined as Match_lr; in the same way, compute the Euclidean distances between P_os1 of I_left and P_os3 of I'_up, sort them, and keep the first k_p matching points, defined as Match_lu; according to the left eye feature points common to Match_lr and Match_lu, redefine the matching points of the trinocular images as {(p_i^left, p_i^right, p_i^up) | i = 1, …, n}, where n is the number of matching points;
33) errors introduced by calibration may leave the epipolar lines of the left and right eye images after the primary correction not perfectly parallel to the horizontal axis; since after correction the vertical coordinates of matched points in I_left and I'_right should be equal, the vertical offset of the right image is Δv_right = (1/n) Σ_{i=1}^{n} (y_i^left − y_i^right); with (u_right, v_right + Δv_right) as the new principal point coordinates, repeat steps 23)–24) to correct the right camera again, obtaining the corrected right eye image I_2;
34) for the lateral offset of the top camera, since after correction the horizontal coordinates of matched points in I_left and I'_up should be equal, Δu_up = (1/n) Σ_{i=1}^{n} (x_i^left − x_i^up); for the longitudinal offset, since after correction the vertical disparity between I_left and I'_up should equal the horizontal disparity between I_left and I'_right, Δv_up = (1/n) Σ_{i=1}^{n} [(y_i^left − y_i^up) − (x_i^left − x_i^right)]; with (u_up + Δu_up, v_up + Δv_up) as the new principal point coordinates, repeat steps 23)–24) to correct the top camera again, obtaining the corrected upper eye image I_3; the final corrected trinocular images are {I_1, I_2, I_3}, where I_1 is the corrected left eye image, I_2 the corrected right eye image and I_3 the corrected upper eye image.
6. The method according to claim 1, characterised in that step (4) comprises the following specific content: if the roll angle θ_z indicates that the power lines run nearly horizontally in the image, take the upper eye image and the left eye image as the matched pair of lines, i.e. Match_l = {I_1, I_3}; if it indicates that they run obliquely, take the right eye image and the left eye image as the matched pair, i.e. Match_l = {I_1, I_2}.
7. The three-dimensional reconstruction method for power lines based on trinocular vision as claimed in claim 1, wherein the step (6) comprises the following steps:
61 When is in contact with
Figure FDA00037842597900000314
At this time
Figure FDA00037842597900000315
The electric lines run obliquely in the figure, and the epipolar lines are parallel to the horizontal axis of the image, so that the intersection points of the epipolar lines and the electric lines are unique; calculating outObtaining the homonymous image point pair on the power line at the intersection point of the homonymous epipolar line and the homonymous power line
Figure FDA0003784259790000041
The corresponding disparity value of the same-name image point pair is
Figure FDA0003784259790000042
62 When
Figure FDA0003784259790000043
At this time
Figure FDA0003784259790000044
the power line runs horizontally in the image, and the epipolar lines are parallel to the longitudinal axis of the image, so the intersection of each epipolar line with the power line is unique; computing the intersection of the homonymous epipolar line with the homonymous power line yields the homonymous image point pairs on the power line
Figure FDA0003784259790000045
and the disparity value corresponding to each homonymous image point pair is
Figure FDA0003784259790000046
The three-dimensional vector V_3 of the power line is then obtained from the disparity information; its expression is:
Figure FDA0003784259790000047
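A hedged sketch of steps 61)-62) and the final back-projection, assuming an ideal rectified pinhole model with focal lengths (fx, fy), principal point (cx, cy) and baseline B, and modelling the power line in the image as v = k*u + b; all names here are illustrative rather than the patent's notation:

```python
def intersect_epipolar(k, b, v):
    """Intersection of the power line v = k*u + b with the horizontal
    epipolar line at row v (case 61), requires k != 0): returns the
    column coordinate u of the homonymous image point."""
    return (v - b) / k

def triangulate(u, v, d, fx, fy, cx, cy, baseline):
    """Standard disparity back-projection for a rectified pair:
    Z = fx * B / d, X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    Z = fx * baseline / d
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return X, Y, Z

# Sampling many rows v along the line and stacking the triangulated
# points gives the 3-D polyline (the vector V_3) of the power line.
u = intersect_epipolar(2.0, -200.0, 400.0)   # u = 300.0
X, Y, Z = triangulate(u, 400.0, 10.0, 1000.0, 1000.0, 640.0, 360.0, 0.5)
```

Case 62) is the transpose of the same computation: the epipolar line fixes the column u, the line model is solved for v, and the disparity is measured along the vertical baseline.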
CN202210937648.6A 2022-08-05 2022-08-05 Three-dimensional reconstruction method for power line based on trinocular vision Pending CN115375762A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210937648.6A CN115375762A (en) 2022-08-05 2022-08-05 Three-dimensional reconstruction method for power line based on trinocular vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210937648.6A CN115375762A (en) 2022-08-05 2022-08-05 Three-dimensional reconstruction method for power line based on trinocular vision

Publications (1)

Publication Number Publication Date
CN115375762A true CN115375762A (en) 2022-11-22

Family

ID=84063483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210937648.6A Pending CN115375762A (en) 2022-08-05 2022-08-05 Three-dimensional reconstruction method for power line based on trinocular vision

Country Status (1)

Country Link
CN (1) CN115375762A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503570A (en) * 2023-06-29 2023-07-28 聚时科技(深圳)有限公司 Three-dimensional reconstruction method and related device for image
CN116503570B (en) * 2023-06-29 2023-11-24 聚时科技(深圳)有限公司 Three-dimensional reconstruction method and related device for image

Similar Documents

Publication Publication Date Title
CN112767391B (en) Power grid line part defect positioning method integrating three-dimensional point cloud and two-dimensional image
CN107194991B (en) Three-dimensional global visual monitoring system construction method based on skeleton point local dynamic update
CN103278138B (en) Method for measuring three-dimensional position and posture of thin component with complex structure
CN104484648B (en) Robot variable visual angle obstacle detection method based on outline identification
CN108520537B (en) Binocular depth acquisition method based on luminosity parallax
CN109580649B (en) Engineering structure surface crack identification and projection correction method and system
CN109903227A (en) Full-view image joining method based on camera geometry site
CN109470149B (en) Method and device for measuring position and posture of pipeline
CN111223133A (en) Registration method of heterogeneous images
CN115345822A (en) Automatic three-dimensional detection method for surface structure light of aviation complex part
CN104537707A (en) Image space type stereo vision on-line movement real-time measurement system
CN111091076B (en) Tunnel limit data measuring method based on stereoscopic vision
CN113971768A (en) Unmanned aerial vehicle-based three-dimensional dynamic detection method for power transmission line illegal building
CN114841944B (en) Tailing dam surface deformation inspection method based on rail-mounted robot
CN113298947A (en) Multi-source data fusion-based three-dimensional modeling method medium and system for transformer substation
CN113902809A (en) Method for jointly calibrating infrared camera and laser radar
CN110992463B (en) Three-dimensional reconstruction method and system for sag of transmission conductor based on three-eye vision
CN110084743A (en) Image mosaic and localization method based on more air strips starting track constraint
CN114998448A (en) Method for calibrating multi-constraint binocular fisheye camera and positioning space point
CN115375762A (en) Three-dimensional reconstruction method for power line based on trinocular vision
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN117173601B (en) Photovoltaic power station array hot spot identification method and system
CN113409242A (en) Intelligent monitoring method for point cloud of rail intersection bow net
CN114998532B (en) Three-dimensional image visual transmission optimization method based on digital image reconstruction
CN110487254B (en) Rapid underwater target size measuring method for ROV

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination