CN112767391B - Power grid line part defect positioning method integrating three-dimensional point cloud and two-dimensional image

Power grid line part defect positioning method integrating three-dimensional point cloud and two-dimensional image

Info

Publication number
CN112767391B
CN112767391B (application CN202110211115.5A)
Authority
CN
China
Prior art keywords
coordinate system
image
dimensional
camera
point cloud
Prior art date
Legal status
Active
Application number
CN202110211115.5A
Other languages
Chinese (zh)
Other versions
CN112767391A (en
Inventor
王仁书
吴文斌
郑宗安
冯尚龙
谢朝辉
许军
徐鹏飞
王晓杰
方超颖
林鸿伟
林力辉
吴晓杰
Current Assignee
Electric Power Research Institute of State Grid Fujian Electric Power Co Ltd
State Grid Fujian Electric Power Co Ltd
Original Assignee
Electric Power Research Institute of State Grid Fujian Electric Power Co Ltd
State Grid Fujian Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by Electric Power Research Institute of State Grid Fujian Electric Power Co Ltd, State Grid Fujian Electric Power Co Ltd filed Critical Electric Power Research Institute of State Grid Fujian Electric Power Co Ltd
Priority to CN202110211115.5A priority Critical patent/CN112767391B/en
Publication of CN112767391A publication Critical patent/CN112767391A/en
Application granted granted Critical
Publication of CN112767391B publication Critical patent/CN112767391B/en


Classifications

    • G PHYSICS; G06 COMPUTING; CALCULATING OR COUNTING
    • G06T 7/0004: Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06T 7/85: Analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration); stereo camera calibration
    • G06T 2207/10028: Image acquisition modality; range image; depth image; 3D point clouds
    • G06T 2207/20081: Special algorithmic details; training; learning
    • G06T 2207/30108: Subject of image; industrial image inspection
    • G06N 3/045: Neural networks; architecture; combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • Y04S 10/52: Smart grids; outage or fault management, e.g. fault detection or location

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a power grid line part defect positioning method fusing three-dimensional point cloud and two-dimensional images, which comprises the following steps. Step S1: calibrate the camera parameters of the unmanned aerial vehicle so that the pixel coordinate system of the camera and the world coordinate system can be converted into each other. Step S2: extract the shooting point location information from a visible light image F acquired during unmanned aerial vehicle aerial inspection of the power grid line, and obtain three-dimensional point cloud data of the inspection target in image F according to that information. Step S3: obtain the center coordinates of the defective part using a deep learning model. Step S4: apply projection transformation to the three-dimensional point cloud data to obtain a two-dimensional image F'. Step S5: fuse F and F' to obtain the coordinate position of the defective part in image F'. Step S6: transform the coordinate position of the defective part in image F' to a position in the three-dimensional world coordinate system. The method can accurately record information on defective parts of the power grid line and improves the efficiency of subsequent image-information processing in machine patrol operations.

Description

Power grid line part defect positioning method fusing three-dimensional point cloud and two-dimensional image
Technical Field
The invention relates to the technical field of power grid line inspection, in particular to a power grid line part defect positioning method fusing three-dimensional point cloud and two-dimensional images.
Background
Unmanned aerial vehicles carrying visible-light imaging devices are increasingly the norm for fine-grained inspection of power grid lines, and current analysis of line component defects is mainly based on the line images the UAV captures. Two problems, however, arise in UAV photography. First, deviations in actual operation (changes in the shooting target area, angle and azimuth) make the position of the target component in the image uncertain. Second, a captured image contains many components, and after defect analysis the two-dimensional image information alone cannot establish an accurate correspondence between a defect in the image and the actual component position. As a result, current line defect records describe component defects mainly as free text, which cannot give the exact location where a defect occurred; descriptions of the same defective part also vary, or are even ambiguous, across different tower types and newly applied components. This makes line defect analysis difficult, so a method that can accurately locate line defects is needed.
Disclosure of Invention
The invention provides a rapid and accurate power grid line part defect positioning method fusing three-dimensional point cloud and two-dimensional images. Through three-dimensional coordinate transformation, image feature identification and matching, and related methods, it accurately records information on the defective parts of a power grid line, improves the efficiency of subsequent image-information processing in power grid line machine patrol operations, and promotes the mining and application of massive inspection image data.
The invention adopts the following technical scheme.
A power grid line part defect positioning method fusing three-dimensional point cloud and two-dimensional images is used for unmanned aerial vehicle aerial photography inspection, and comprises the following steps;
step S1: calibrating the camera parameters of the camera carried by the unmanned aerial vehicle, so that the pixel coordinate system of the camera and the world coordinate system can be converted into each other;
step S2: extracting shooting point location information corresponding to the image from a visible light image F acquired by the unmanned aerial vehicle for aerial photography and routing inspection of the power grid line, and acquiring three-dimensional point cloud data of a routing inspection target in the visible light image F according to the information;
step S3: intelligently identifying a defective component in the inspection target by using a deep learning model to obtain a center coordinate of the defective component;
step S4: performing projection transformation on the three-dimensional point cloud data, processing its three-dimensional image information to obtain its coordinates in a coordinate system with the camera as origin, and then performing the transformation from the world coordinate system to the image coordinate system according to the camera calibration parameters to obtain a two-dimensional image F';
step S5: performing fusion processing; to ensure accurate matching of the defective parts in the inspection target, confirmation is carried out in two modes, feature matching and position matching: first, feature matching is performed on the visible light image F and the two-dimensional image F' as wholes to obtain the position offset of the overall target between the two images; then the coordinate position of the defective part in image F' is computed from the center coordinates of the defective part identified in image F and the offset;
step S6: performing inverse coordinate transformation; the coordinate position of the defective part in image F' is transformed to its position in the three-dimensional world coordinate system, the correspondence between the part state and the actual position is established, and the result is recorded.
The inspection target in step S2 is a tower in the power grid line; on the basis of the visible light image information, step S2 extracts the three-dimensional point cloud of the tower corresponding to the inspection target.
In step S3, after the deep learning model intelligently identifies a defective component in the inspection target, a defect target frame and its center position coordinates are output in the visible light image F to mark the defective component.
The shooting point location information in step S2 includes the UAV shooting position and the camera shooting orientation; the UAV shooting position comprises the UAV altitude and the GPS coordinates of the UAV shooting point;
in step S4, a three-dimensional point cloud projection conversion process is performed on the target tower in the visible light image F, and the method includes the following steps;
step S41: according to the shooting position and the camera shooting direction of the unmanned aerial vehicle and the three-dimensional point cloud data of the shooting target tower, carrying out coordinate system transformation on the three-dimensional point cloud data of the tower;
step S42: and further carrying out coordinate system transformation on the three-dimensional point cloud data by combining a transformation matrix obtained by calibrating the camera to obtain a two-dimensional image under a pixel coordinate system.
In the step S5, the fusion process between the visible light image F and the two-dimensional image F' is performed, and the method includes the following steps:
step S51: extracting the tower outline in the visible light image, carrying out feature matching on the tower shape in the visible light image F and the two-dimensional image F', and simultaneously calculating the deviation of the target positions in the two images;
step S52: further, the position coordinates of the defective part in F' are calculated based on the obtained deviation and the target frame center coordinates obtained in step S3.
In step S6, after the coordinate position of the defective component in image F' is transformed to a position in the three-dimensional world coordinate system, the correspondence between the state of the defective component and its actual position is established, and the result is recorded in a data structure whose fields include the serial number, the actual position of the component, the name of the line where the component is located, the tower number, the time, the component name, and the state.
In step S3, defect recognition is carried out using an intelligent recognition model based on YOLOv5 deep learning; the intelligent recognition model comprises an attention module, an up-sampling module, CSP1_i modules, CSP2_i modules and an SPP module, where a CSP1_i module comprises a CBL module and i residual connection units, a CSP2_i module comprises i CBL modules, and the SPP module comprises three max-pooling layers for deep mining of image features;
When the intelligent recognition model performs defect recognition, the image to be recognized is input into the detection model; processing by the convolutional neural network yields the defective part in the visible light image and the vertex coordinates of its rectangular target frame. The position of the defective part is represented by its center point coordinates, which are computed from the vertex coordinates of the target frame.
In step S2, coordinate transformation is applied to the target tower, using the transformation matrix between the pixel coordinate system and the world coordinate system from step S1 together with the information extracted from the visible light image. Specifically, the coordinate transformation comprises the transformation from the spatial rectangular geodetic coordinate system to the UAV geographic coordinate system, followed by the transformation from the UAV geographic coordinate system to the camera coordinate system;
Let the tower be a point t1 in the spatial rectangular geodetic coordinate system XYZ, with coordinates (t_x1, t_y1, t_z1) and corresponding latitude B, longitude L and height H. Let c1 = (c_x1, c_y1, c_z1) be the coordinates of the camera center in the spatial rectangular geodetic coordinate system and c2 = (c_x2, c_y2, c_z2) the coordinates of the image center in the same system, where f = |c2 - c1| is the focal length of the camera. Under horizontal shooting by the UAV, the pitch angle and roll angle can be set to 0 and only the gimbal yaw angle ω is considered. In the UAV geographic coordinate system X′Y′Z′, c1 is the origin, c1-Z′ points to the center of the earth, c1-Y′ points true east, and c1-X′ points true north. In the camera coordinate system X″Y″Z″, c1 is the origin, c1-Z″ points to the center of the earth, c1-Y″ points along the camera shooting direction, and c1-X″ and c1-Y″ lie in the same plane and are perpendicular to each other;
The transformation from the spatial rectangular geodetic coordinate system to the UAV geographic coordinate system uses the formula

$$
\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix}
=
\begin{bmatrix}
-\sin B\cos L & -\sin B\sin L & \cos B \\
-\sin L & \cos L & 0 \\
-\cos B\cos L & -\cos B\sin L & -\sin B
\end{bmatrix}
\begin{bmatrix} X - c_{x1} \\ Y - c_{y1} \\ Z - c_{z1} \end{bmatrix},
\qquad
\begin{cases}
c_{x1} = (N+H)\cos B\cos L \\
c_{y1} = (N+H)\cos B\sin L \\
c_{z1} = \left[N(1-e^{2})+H\right]\sin B
\end{cases}
$$

where e is the first eccentricity of the earth ellipsoid, N is the radius of curvature of the prime vertical, B is the latitude, L the longitude and H the height;
The transformation from the UAV geographic coordinate system to the camera coordinate system uses the formula

[Equation image in the original: a rotation from the UAV geographic frame into the camera frame, composed of elementary rotations through the roll angle θ, yaw angle ω and pitch angle ψ, together with the camera mounting angles α and β.]
where α and β are the angles between the camera and the UAV in the horizontal and vertical directions, which can be set to 0 when the camera shooting direction coincides with the UAV heading; θ is the roll angle, ω the yaw angle and ψ the pitch angle. When the fuselage is level, θ = ψ = 0, i.e.

$$
\begin{bmatrix} X'' \\ Y'' \\ Z'' \end{bmatrix}
=
\begin{bmatrix}
\cos\omega & \sin\omega & 0 \\
-\sin\omega & \cos\omega & 0 \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix}
$$
The camera coordinate system is converted to the pixel coordinate system according to the camera calibration parameters from step S1, using the formula

$$
z_{c}\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix}
f & 0 & u_{0} \\
0 & f & v_{0} \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} X'' \\ Y'' \\ Z'' \end{bmatrix}
$$

where f is the focal length of the camera and (u_0, v_0) is the reference origin of the pixel coordinate system;
After the above coordinate transformations, the corresponding coordinates of the target tower in the two-dimensional image F' are obtained as

$$
z_{c}\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix}
=
\begin{bmatrix}
f & 0 & u_{0} \\
0 & f & v_{0} \\
0 & 0 & 1
\end{bmatrix}
R_{\mathrm{cam}}\, R_{\mathrm{geo}}
\left(
\begin{bmatrix} t_{x1} \\ t_{y1} \\ t_{z1} \end{bmatrix}
-
\begin{bmatrix} c_{x1} \\ c_{y1} \\ c_{z1} \end{bmatrix}
\right)
$$

where R_geo is the rotation from the spatial rectangular geodetic system to the UAV geographic system, R_cam the rotation from the UAV geographic system to the camera system, (t_x1, t_y1, t_z1) are the coordinates of the actual tower, and (u', v') are the coordinates of the tower in the pixel coordinate system.
In step S5, the Hough transform is used to extract the main target features in the visible light image F, which are then matched against image F' as follows:
Step A1: perform grayscale conversion and noise filtering on the visible light image F. Noise filtering uses mean filtering, given by

$$
\bar{f}(u,v) = \frac{1}{n}\sum_{(p,q)\in S} f(p,q)
$$

where pixel point (p, q) is a point in the original image, S is the neighborhood of that point, n is the number of points in the neighborhood, and \bar{f}(u, v) is the filtered image.
Step A2: extract the contour features of the main target. Edges are extracted with the Canny algorithm and then mapped into Hough space by the Hough transform; by setting the minimum number of curve intersections required to form a line, thresholds such as the minimum number of points on a line are imposed and interfering lines are filtered out. Contour feature matching is then performed.
Step A3: resize the input image F' to the same size as image F, then achieve superposition matching of the features by translation, using the formula
$$
D(\Delta x, \Delta y) = \sum_{(x,y)} \left| E_{F}(x,y) - E_{F'}(x+\Delta x,\; y+\Delta y) \right|
$$

where E_F and E_F' are the extracted contour feature maps of F and F'. The translation vector with the highest degree of coincidence, i.e., the smallest D value, is selected by traversal optimization as the target offset value ΔM(Δx, Δy) between the two images;
Step A4: from the center coordinates (u_ci, v_ci) of the target frame Fni obtained by intelligent recognition in the visible light image F, and the offset ΔM, the position coordinates of the target center in F' are obtained according to

$$
(u'_{ci},\; v'_{ci}) = (u_{ci} + \Delta x,\; v_{ci} + \Delta y).
$$
In step S6, the target coordinates are inverse-transformed according to the transformation matrices used in the forward coordinate transformation, converting the pixel coordinate system back to the world coordinate system by

$$
\begin{bmatrix} t_{x1} \\ t_{y1} \\ t_{z1} \end{bmatrix}
=
R_{\mathrm{geo}}^{-1}\, R_{\mathrm{cam}}^{-1}
\begin{bmatrix}
f & 0 & u_{0} \\
0 & f & v_{0} \\
0 & 0 & 1
\end{bmatrix}^{-1}
z_{c}\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix}
+
\begin{bmatrix} c_{x1} \\ c_{y1} \\ c_{z1} \end{bmatrix}
$$

with R_geo and R_cam as above.
Compared with the prior art, the invention has the following beneficial effects:
(1) By fusing the three-dimensional point cloud and two-dimensional image information of power grid line components, the invention improves the accuracy of component working-condition and position information, avoids the ambiguity and uncertainty of textual descriptions, and provides a basis for further research on line defect causes, defect development trends, panoramic defect display, and the like.
(2) The invention deeply mines and fuses the two-dimensional visible light images and three-dimensional point cloud information used in power grid line inspection, improving the utilization of power grid inspection data.
(3) The invention comprehensively manages and retrieves information such as UAV shooting point locations, camera parameters, tower three-dimensional point clouds and tower visible light images, which benefits the intensive management of massive power grid line data.
(4) The invention stores the inspection data of inspection targets in a data structure under a unified world coordinate system, which facilitates unified management and maintenance of the power network.
Drawings
The invention is described in further detail below with reference to the accompanying drawings:
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic diagram of a visible light image recognition model according to the present invention;
FIG. 3 is a schematic diagram of the coordinate transformation of the present invention;
fig. 4 is a schematic diagram of the feature matching process of the present invention.
Detailed Description
As shown in the figures, the power grid line part defect positioning method fusing three-dimensional point cloud and two-dimensional images is used for unmanned aerial vehicle aerial inspection and comprises the following steps;
step S1: calibrating the camera parameters of the camera carried by the unmanned aerial vehicle, so that the pixel coordinate system of the camera and the world coordinate system can be converted into each other;
step S2: extracting shooting point location information corresponding to the image from a visible light image F acquired by the unmanned aerial vehicle for aerial photography and routing inspection of the power grid line, and acquiring three-dimensional point cloud data of a routing inspection target in the visible light image F according to the information;
step S3: intelligently identifying a defective component in the inspection target by using a deep learning model to obtain a central coordinate of the defective component;
step S4: performing projection transformation on the three-dimensional point cloud data, processing its three-dimensional image information to obtain its coordinates in a coordinate system with the camera as origin, and then performing the transformation from the world coordinate system to the image coordinate system according to the camera calibration parameters to obtain a two-dimensional image F';
step S5: performing fusion processing; to ensure accurate matching of the defective parts in the inspection target, confirmation is carried out in two modes, feature matching and position matching: first, feature matching is performed on the visible light image F and the two-dimensional image F' as wholes to obtain the position offset of the overall target between the two images; then the coordinate position of the defective part in image F' is computed from the center coordinates of the defective part identified in image F and the offset;
step S6: performing inverse coordinate transformation; the coordinate position of the defective part in image F' is transformed to its position in the three-dimensional world coordinate system, the correspondence between the part state and the actual position is established, and the result is recorded.
The inspection target in step S2 is a tower in the power grid line; on the basis of the visible light image information, step S2 extracts the three-dimensional point cloud of the tower corresponding to the inspection target.
In step S3, after the deep learning model intelligently identifies a defective component in the inspection target, a defect target frame and its center position coordinates are output in the visible light image F to mark the defective component.
The shooting point location information in step S2 includes the UAV shooting position and the camera shooting orientation; the UAV shooting position comprises the UAV altitude and the GPS coordinates of the UAV shooting point;
in step S4, performing three-dimensional point cloud projection transformation on the target tower in the visible light image F, wherein the method includes the following steps;
step S41: according to the shooting position of the unmanned aerial vehicle and the shooting direction of the camera, and in combination with the three-dimensional point cloud data of the shooting target tower, coordinate system transformation is carried out on the three-dimensional point cloud data of the tower;
step S42: and further carrying out coordinate system transformation on the three-dimensional point cloud data by combining a transformation matrix obtained by calibrating the camera to obtain a two-dimensional image under a pixel coordinate system.
In the step S5, the fusion process between the visible light image F and the two-dimensional image F' is performed, and the method includes the following steps:
step S51: extracting the tower outline in the visible light image, carrying out feature matching on the tower shape in the visible light image F and the two-dimensional image F', and simultaneously calculating the deviation of the target positions in the two images;
step S52: further, the position coordinates of the defective part in F' are calculated based on the obtained deviation and the target frame center coordinates obtained in step S3.
In step S6, after the coordinate position of the defective component in image F' is transformed to a position in the three-dimensional world coordinate system, the correspondence between the state of the defective component and its actual position is established, and the result is recorded in a data structure whose fields include the serial number, the actual position of the component, the name of the line where the component is located, the tower number, the time, the component name, and the state.
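For illustration, a minimal Python sketch of such a record follows; the class and field names are hypothetical choices mirroring the fields listed above, since the patent does not prescribe a concrete implementation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DefectRecord:
    """One grid-line defect component record; field names are illustrative."""
    serial_number: int
    actual_position: tuple       # (X, Y, Z) in the world coordinate system
    line_name: str
    tower_number: str
    time: datetime
    component_name: str
    state: str

# Example with made-up values:
record = DefectRecord(
    serial_number=1,
    actual_position=(-1567234.2, 5382911.7, 2735002.4),
    line_name="example 220 kV line",
    tower_number="N37",
    time=datetime(2021, 2, 25, 10, 30),
    component_name="insulator string",
    state="defective",
)
```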
In step S3, defect recognition is carried out using an intelligent recognition model based on YOLOv5 deep learning; the intelligent recognition model comprises an attention module, an up-sampling module, CSP1_i modules, CSP2_i modules and an SPP module, where a CSP1_i module comprises a CBL module and i residual connection units, a CSP2_i module comprises i CBL modules, and the SPP module comprises three max-pooling layers for deep mining of image features;
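The SPP construct named above is a standard building block; the following PyTorch sketch shows its pooling core, assuming the conventional YOLOv5-style kernel sizes (5, 9, 13), which the patent does not specify.

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    """Spatial Pyramid Pooling core: three parallel max-pooling layers whose
    outputs are concatenated with the input to mine multi-scale features."""
    def __init__(self, pool_sizes=(5, 9, 13)):
        super().__init__()
        # stride 1 with symmetric padding keeps the spatial size unchanged
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
            for k in pool_sizes
        )

    def forward(self, x):
        return torch.cat([x] + [pool(x) for pool in self.pools], dim=1)

x = torch.randn(1, 256, 20, 20)
print(SPP()(x).shape)  # torch.Size([1, 1024, 20, 20]): channels grow 4x
```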
When the intelligent recognition model performs defect recognition, the image to be recognized is input into the detection model; processing by the convolutional neural network yields the defective part in the visible light image and the vertex coordinates of its rectangular target frame. The position of the defective part is represented by its center point coordinates, which are computed from the vertex coordinates of the target frame.
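Recovering the center point from the target frame vertices is a one-line computation; a sketch, assuming the detector returns the frame as two opposite corner vertices (x1, y1) and (x2, y2):

```python
def box_center(x1: float, y1: float, x2: float, y2: float):
    """Center point (u_c, v_c) of a rectangular target frame given two
    opposite vertex coordinates in pixel units."""
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

# A frame spanning (100, 200) to (180, 260) has its center at (140.0, 230.0):
print(box_center(100, 200, 180, 260))
```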
In step S2, coordinate transformation is applied to the target tower, using the transformation matrix between the pixel coordinate system and the world coordinate system from step S1 together with the information extracted from the visible light image. Specifically, the coordinate transformation comprises the transformation from the spatial rectangular geodetic coordinate system to the UAV geographic coordinate system, followed by the transformation from the UAV geographic coordinate system to the camera coordinate system;
Let the tower be a point t1 in the spatial rectangular geodetic coordinate system XYZ, with coordinates (t_x1, t_y1, t_z1) and corresponding latitude B, longitude L and height H. Let c1 = (c_x1, c_y1, c_z1) be the coordinates of the camera center in the spatial rectangular geodetic coordinate system and c2 = (c_x2, c_y2, c_z2) the coordinates of the image center in the same system, where f = |c2 - c1| is the focal length of the camera. Under horizontal shooting by the UAV, the pitch angle and roll angle can be set to 0 and only the gimbal yaw angle ω is considered. In the UAV geographic coordinate system X′Y′Z′, c1 is the origin, c1-Z′ points to the center of the earth, c1-Y′ points true east, and c1-X′ points true north. In the camera coordinate system X″Y″Z″, c1 is the origin, c1-Z″ points to the center of the earth, c1-Y″ points along the camera shooting direction, and c1-X″ and c1-Y″ lie in the same plane and are perpendicular to each other;
The transformation from the spatial rectangular geodetic coordinate system to the UAV geographic coordinate system uses the formula

$$
\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix}
=
\begin{bmatrix}
-\sin B\cos L & -\sin B\sin L & \cos B \\
-\sin L & \cos L & 0 \\
-\cos B\cos L & -\cos B\sin L & -\sin B
\end{bmatrix}
\begin{bmatrix} X - c_{x1} \\ Y - c_{y1} \\ Z - c_{z1} \end{bmatrix},
\qquad
\begin{cases}
c_{x1} = (N+H)\cos B\cos L \\
c_{y1} = (N+H)\cos B\sin L \\
c_{z1} = \left[N(1-e^{2})+H\right]\sin B
\end{cases}
$$

where e is the first eccentricity of the earth ellipsoid, N is the radius of curvature of the prime vertical, B is the latitude, L the longitude and H the height;
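A sketch of the geodetic-to-rectangular part of this conversion in Python, assuming the WGS-84 ellipsoid parameters (the patent does not name the ellipsoid):

```python
import math

# WGS-84 ellipsoid (an assumption; the patent does not fix the ellipsoid)
A_SEMI_MAJOR = 6378137.0     # semi-major axis, metres
E2 = 6.69437999014e-3        # first eccentricity squared, e^2

def geodetic_to_ecef(lat_b_deg: float, lon_l_deg: float, h: float):
    """Convert latitude B, longitude L (degrees) and height H (metres) into
    spatial rectangular geodetic (earth-centred) coordinates X, Y, Z."""
    b = math.radians(lat_b_deg)
    l = math.radians(lon_l_deg)
    # N: radius of curvature of the prime vertical at latitude B
    n = A_SEMI_MAJOR / math.sqrt(1.0 - E2 * math.sin(b) ** 2)
    x = (n + h) * math.cos(b) * math.cos(l)
    y = (n + h) * math.cos(b) * math.sin(l)
    z = (n * (1.0 - E2) + h) * math.sin(b)
    return x, y, z

print(geodetic_to_ecef(26.08, 119.30, 85.0))  # a hypothetical shooting point
```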
The transformation from the UAV geographic coordinate system to the camera coordinate system uses the formula

[Equation image in the original: a rotation from the UAV geographic frame into the camera frame, composed of elementary rotations through the roll angle θ, yaw angle ω and pitch angle ψ, together with the camera mounting angles α and β.]
where α and β are the angles between the camera and the UAV in the horizontal and vertical directions, which can be set to 0 when the camera shooting direction coincides with the UAV heading; θ is the roll angle, ω the yaw angle and ψ the pitch angle. When the fuselage is level, θ = ψ = 0, i.e.

$$
\begin{bmatrix} X'' \\ Y'' \\ Z'' \end{bmatrix}
=
\begin{bmatrix}
\cos\omega & \sin\omega & 0 \\
-\sin\omega & \cos\omega & 0 \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix}
$$
The camera coordinate system is converted to the pixel coordinate system according to the camera calibration parameters from step S1, using the formula

$$
z_{c}\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix}
f & 0 & u_{0} \\
0 & f & v_{0} \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} X'' \\ Y'' \\ Z'' \end{bmatrix}
$$

where f is the focal length of the camera and (u_0, v_0) is the reference origin of the pixel coordinate system;
After the above coordinate transformations, the corresponding coordinates of the target tower in the two-dimensional image F' are obtained as

$$
z_{c}\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix}
=
\begin{bmatrix}
f & 0 & u_{0} \\
0 & f & v_{0} \\
0 & 0 & 1
\end{bmatrix}
R_{\mathrm{cam}}\, R_{\mathrm{geo}}
\left(
\begin{bmatrix} t_{x1} \\ t_{y1} \\ t_{z1} \end{bmatrix}
-
\begin{bmatrix} c_{x1} \\ c_{y1} \\ c_{z1} \end{bmatrix}
\right)
$$

where R_geo is the rotation from the spatial rectangular geodetic system to the UAV geographic system, R_cam the rotation from the UAV geographic system to the camera system, (t_x1, t_y1, t_z1) are the coordinates of the actual tower, and (u', v') are the coordinates of the tower in the pixel coordinate system.
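The whole forward chain can be sketched in a few lines of Python with NumPy. The rotation matrices and axis conventions below are assumptions consistent with the frames described above (X′ north, Y′ east, Z′ down; Y″ along the shooting direction); the patent gives its matrices only as images.

```python
import numpy as np

def ecef_to_ned(lat_b: float, lon_l: float) -> np.ndarray:
    """Rotation taking ECEF offsets into the UAV geographic (north-east-down)
    frame at latitude B, longitude L (radians); one common convention."""
    sb, cb = np.sin(lat_b), np.cos(lat_b)
    sl, cl = np.sin(lon_l), np.cos(lon_l)
    return np.array([
        [-sb * cl, -sb * sl,  cb],   # X' -> true north
        [     -sl,       cl, 0.0],   # Y' -> true east
        [-cb * cl, -cb * sl, -sb],   # Z' -> toward the earth centre
    ])

def yaw_rotation(omega: float) -> np.ndarray:
    """Rotation about the down axis by gimbal yaw omega (pitch = roll = 0)."""
    c, s = np.cos(omega), np.sin(omega)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def project_to_pixels(t_ecef, c_ecef, lat_b, lon_l, omega, f, u0, v0):
    """Project a tower point t (ECEF) into pixel coordinates (u', v')."""
    p = yaw_rotation(omega) @ ecef_to_ned(lat_b, lon_l) @ (
        np.asarray(t_ecef, float) - np.asarray(c_ecef, float))
    depth = p[1]                 # Y'' is the assumed optical axis
    return u0 + f * p[0] / depth, v0 + f * p[2] / depth
```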
In step S5, the Hough transform is used to extract the main target features in the visible light image F, which are then matched against image F' as follows:
Step A1: perform grayscale conversion and noise filtering on the visible light image F. Noise filtering uses mean filtering, given by

$$
\bar{f}(u,v) = \frac{1}{n}\sum_{(p,q)\in S} f(p,q)
$$

where pixel point (p, q) is a point in the original image, S is the neighborhood of that point, n is the number of points in the neighborhood, and \bar{f}(u, v) is the filtered image.
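A direct NumPy sketch of this mean filter (a k x k neighbourhood S with n = k*k points; edge pixels are handled by replication, which the patent leaves unspecified):

```python
import numpy as np

def mean_filter(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Each output pixel is the average of the n = k*k points in its
    neighbourhood S, i.e. the formula above."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for du in range(k):
        for dv in range(k):
            out += padded[du:du + img.shape[0], dv:dv + img.shape[1]]
    return out / (k * k)

noisy = np.random.rand(8, 8)
print(mean_filter(noisy, 3).shape)  # (8, 8)
```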
Step A2: extract the contour features of the main target. Edges are extracted with the Canny algorithm and then mapped into Hough space by the Hough transform; by setting the minimum number of curve intersections required to form a line, thresholds such as the minimum number of points on a line are imposed and interfering lines are filtered out. Contour feature matching is then performed.
Step A3: resize the input image F' to the same size as image F, then achieve superposition matching of the features by translation, using the formula
$$
D(\Delta x, \Delta y) = \sum_{(x,y)} \left| E_{F}(x,y) - E_{F'}(x+\Delta x,\; y+\Delta y) \right|
$$

where E_F and E_F' are the extracted contour feature maps of F and F'. The translation vector with the highest degree of coincidence, i.e., the smallest D value, is selected by traversal optimization as the target offset value ΔM(Δx, Δy) between the two images;
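The traversal optimization can be sketched as an exhaustive search over integer translations; the absolute-difference form of D is an assumption, as the patent's formula is shown only as an image.

```python
import numpy as np

def best_offset(edges_f: np.ndarray, edges_fp: np.ndarray, max_shift: int = 20):
    """Return the translation Delta M = (dx, dy) of the edge map of F' that
    minimises the mismatch D against the edge map of F."""
    best, best_d = (0, 0), np.inf
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            shifted = np.roll(edges_fp, shift=(dy, dx), axis=(0, 1))
            d = np.abs(edges_f.astype(float) - shifted).sum()
            if d < best_d:
                best_d, best = d, (dx, dy)
    return best, best_d

# Synthetic check: an edge map shifted by (+3, +5) is re-aligned by (-3, -5).
e = np.zeros((64, 64)); e[20:40, 30] = 1
print(best_offset(e, np.roll(e, shift=(5, 3), axis=(0, 1))))  # ((-3, -5), 0.0)
```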
Step A4: from the center coordinates (u_ci, v_ci) of the target frame Fni obtained by intelligent recognition in the visible light image F, and the offset ΔM, the position coordinates of the target center in F' are obtained according to

$$
(u'_{ci},\; v'_{ci}) = (u_{ci} + \Delta x,\; v_{ci} + \Delta y).
$$
In step S6, the target coordinates are inverse-transformed according to the transformation matrices used in the forward coordinate transformation, converting the pixel coordinate system back to the world coordinate system by

$$
\begin{bmatrix} t_{x1} \\ t_{y1} \\ t_{z1} \end{bmatrix}
=
R_{\mathrm{geo}}^{-1}\, R_{\mathrm{cam}}^{-1}
\begin{bmatrix}
f & 0 & u_{0} \\
0 & f & v_{0} \\
0 & 0 & 1
\end{bmatrix}^{-1}
z_{c}\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix}
+
\begin{bmatrix} c_{x1} \\ c_{y1} \\ c_{z1} \end{bmatrix}
$$

with R_geo and R_cam as above.
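Given a depth along the optical axis (for example taken from the point cloud), the inverse mapping can be sketched as follows, under the same assumed conventions as the projection sketch above; rotations invert by transposition.

```python
import numpy as np

def pixel_to_ecef(u, v, depth, c_ecef, r_geo, r_cam, f, u0, v0):
    """Back-project pixel (u', v') with a known optical-axis depth to spatial
    rectangular geodetic coordinates; r_geo and r_cam are the forward
    rotation matrices of the coordinate transformation."""
    # camera-frame point, with Y'' as the assumed optical axis
    p_cam = np.array([(u - u0) * depth / f, depth, (v - v0) * depth / f])
    # undo the two rotations, then restore the camera-centre translation
    return r_geo.T @ (r_cam.T @ p_cam) + np.asarray(c_ecef, float)
```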
After the coordinate calculation is completed, a defect record can be generated by combining the information extracted and generated in each step; the record includes the time, tower number, line name, actual position of the defective part, part name and part state, as shown in the following table.
Grid line defect component record
[Table image in the original; its columns are: serial number | actual position of part | line name | tower number | time | part name | part state.]
The above description is only a preferred embodiment of the present invention; all equivalent changes and modifications made within the scope of the claims of the present invention shall be covered by the present invention.

Claims (7)

1. A power grid line part defect positioning method fusing three-dimensional point cloud and two-dimensional images is used for unmanned aerial vehicle aerial photography inspection and is characterized in that: the positioning method comprises the following steps;
step S1: calibrating the camera parameters of the camera carried by the unmanned aerial vehicle, so that the pixel coordinate system of the camera and the spatial rectangular geodetic coordinate system can be converted into each other;
step S2: extracting shooting point location information corresponding to the image from a visible light image F acquired by the unmanned aerial vehicle for aerial photography and routing inspection of the power grid line, and acquiring three-dimensional point cloud data of a routing inspection target in the visible light image F according to the information;
step S3: intelligently identifying a defective component in a routing inspection target to obtain a center coordinate of the defective component;
step S4: performing projection transformation on the three-dimensional point cloud data, processing its three-dimensional image information to obtain its coordinates in a coordinate system with the camera as origin, and then performing the transformation from the spatial rectangular geodetic coordinate system to the image coordinate system according to the camera calibration parameters to obtain a two-dimensional image F';
step S5: firstly performing feature matching on the visible light image F and the two-dimensional image F' as wholes to obtain the position offset of the overall target between the two images; then computing the coordinate position of the defective part in image F' from the center coordinates of the defective part identified in image F and the offset;
step S6: transforming the coordinate position of the defective part in image F' to its position under the spatial rectangular geodetic coordinate system, and establishing the correspondence between the part state and the actual position;
the shooting location information in the step S2 includes the shooting position of the unmanned aerial vehicle and the shooting orientation of the camera; the unmanned aerial vehicle shooting position comprises the height of the unmanned aerial vehicle and the GPS coordinate of the unmanned aerial vehicle shooting point;
in step S4, a three-dimensional point cloud projection conversion process is performed on the target tower in the visible light image F, and the method includes the following steps;
step S41: according to the shooting position of the unmanned aerial vehicle and the shooting direction of the camera, and in combination with the three-dimensional point cloud data of the shooting target tower, coordinate system transformation is carried out on the three-dimensional point cloud data of the tower;
step S42: further performing coordinate system transformation on the three-dimensional point cloud data by combining a transformation matrix obtained by camera calibration to obtain a two-dimensional image under a pixel coordinate system;
in the step S3, defect recognition is carried out by adopting an intelligent recognition model based on YoloV5 deep learning; the intelligent recognition model comprises an attention module, an up-sampling module, a CSP1_ i module, a CSP2_ i module and an SPP module, wherein the CSP1_ i module comprises a CBL module and i residual error connecting units; CSP2_ i module includes i CBL modules; the SPP module comprises three maximum pooling layers and is used for deeply mining the image characteristics;
when the intelligent recognition model performs defect recognition, the image to be recognized is input into the detection model; processing by the convolutional neural network yields the defective component in the visible light image and the vertex coordinates of its rectangular target frame; the position of the defective component is represented by its center point coordinates, which are obtained by calculation from the vertex coordinates of the target frame;
the feature matching in step S5 is performed as follows:
step A1: performing grayscale conversion and noise filtering on the visible light image F, where noise filtering uses mean filtering, given by

$$
\bar{f}(u,v) = \frac{1}{n}\sum_{(p,q)\in S} f(p,q)
$$

where pixel point (p, q) is a point in the original image, S is the neighborhood of that point, n is the number of points in the neighborhood, and \bar{f}(u, v) is the filtered image;
step A2: performing edge extraction, then mapping into Hough space by the Hough transform; by setting the minimum number of curve intersections required to form a line, a threshold on the minimum number of points on a line is formed and interfering lines are filtered out;
step A3: resizing the input image F' to the same size as image F, and then achieving superposition matching of the features by translation, using the formula

$$
D(\Delta x, \Delta y) = \sum_{(x,y)} \left| E_{F}(x,y) - E_{F'}(x+\Delta x,\; y+\Delta y) \right|
$$

where E_F and E_F' are the extracted contour feature maps of F and F'; the translation vector with the highest degree of coincidence, i.e., the smallest D value, is selected as the target offset value ΔM(Δx, Δy) between the two images;
step A4: from the center coordinates (u_ci, v_ci) of the target frame Fni obtained by intelligent recognition in the visible light image F, and the offset ΔM, the position coordinates of the target center in F' are obtained according to

$$
(u'_{ci},\; v'_{ci}) = (u_{ci} + \Delta x,\; v_{ci} + \Delta y).
$$
2. The method for locating the defects of the power grid line component by fusing the three-dimensional point cloud and the two-dimensional image according to claim 1, characterized in that: the inspection target in step S2 is a tower in the power grid line, and step S2 extracts the three-dimensional point cloud of the tower corresponding to the inspection target based on the visible light image information.
3. The method for locating the defects of the power grid line component by fusing the three-dimensional point cloud and the two-dimensional image according to claim 1, characterized in that: in step S3, after the deep learning model intelligently identifies the defective component in the inspection target, a defect target frame and its center position coordinates are output in the visible light image F to mark the defective component.
4. The method for positioning the defects of the power grid line component by fusing the three-dimensional point cloud and the two-dimensional image according to claim 1, wherein the method comprises the following steps: in the step S5, the method for matching the characteristics of the visible light image F and the two-dimensional image F' includes the steps of:
step S51: extracting the tower outline in the visible light image and the tower shape in the two-dimensional image F' for feature matching, and simultaneously calculating the deviation of the target positions in the two images;
step S52: further, the position coordinates of the defective part in F' are calculated based on the obtained deviation and the target frame center coordinates obtained in step S3.
5. The method for positioning the defects of the power grid line component by fusing the three-dimensional point cloud and the two-dimensional image according to claim 1, characterized in that: in step S6, after the coordinate position of the defective part in image F' is transformed to a position in the spatial rectangular geodetic coordinate system, the correspondence between the state of the defective part and its actual position is established, and the result is recorded in a data structure whose fields include the serial number, the actual position of the part, the name of the line where the part is located, the tower number, the time, the part name, and the state.
6. The method for positioning the defects of the power grid line component by fusing the three-dimensional point cloud and the two-dimensional image according to claim 1, wherein the method comprises the following steps: the specific method in step S4 is as follows: the coordinate transformation comprises transformation from a space rectangular geodetic coordinate system to a unmanned aerial vehicle geographic coordinate system and also comprises transformation from the unmanned aerial vehicle geographic coordinate system to a camera coordinate system;
the tower is set as a point t1 under the spatial rectangular geodetic coordinate system XYZ, with coordinates (t_x1, t_y1, t_z1) and corresponding latitude B, longitude L and height H; c1 = (c_x1, c_y1, c_z1) is the coordinate of the camera center under the spatial rectangular geodetic coordinate system, and c2 = (c_x2, c_y2, c_z2) is the coordinate of the image center under the same system, where f = |c2 - c1| is the focal length of the camera; under horizontal shooting by the unmanned aerial vehicle, the pitch angle and roll angle are set to 0 and the gimbal yaw angle is ω; in the UAV geographic coordinate system X′Y′Z′, c1 is the origin, c1-Z′ points to the center of the earth, c1-Y′ points true east, and c1-X′ points true north; in the camera coordinate system X″Y″Z″, c1 is the origin, c1-Z″ points to the center of the earth, c1-Y″ points along the camera shooting direction, and c1-X″ and c1-Y″ lie in the same plane and are perpendicular to each other;
the transformation from the spatial rectangular geodetic coordinate system to the UAV geographic coordinate system uses the formula

$$
\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix}
=
\begin{bmatrix}
-\sin B\cos L & -\sin B\sin L & \cos B \\
-\sin L & \cos L & 0 \\
-\cos B\cos L & -\cos B\sin L & -\sin B
\end{bmatrix}
\begin{bmatrix} X - c_{x1} \\ Y - c_{y1} \\ Z - c_{z1} \end{bmatrix},
\qquad
\begin{cases}
c_{x1} = (N+H)\cos B\cos L \\
c_{y1} = (N+H)\cos B\sin L \\
c_{z1} = \left[N(1-e^{2})+H\right]\sin B
\end{cases}
$$

where e is the first eccentricity of the earth ellipsoid, N is the radius of curvature of the prime vertical, B is the latitude, L the longitude and H the height;
the transformation from the UAV geographic coordinate system to the camera coordinate system uses the formula

[Equation image in the original: a rotation from the UAV geographic frame into the camera frame, composed of elementary rotations through the roll angle θ, yaw angle ω and pitch angle ψ, together with the camera mounting angles α and β.]
where α and β are the angles between the camera and the unmanned aerial vehicle in the horizontal and vertical directions, which can be set to 0 when the camera shooting direction coincides with the UAV heading; θ is the roll angle, ω the yaw angle and ψ the pitch angle; when the fuselage is level, θ = ψ = 0, i.e.

$$
\begin{bmatrix} X'' \\ Y'' \\ Z'' \end{bmatrix}
=
\begin{bmatrix}
\cos\omega & \sin\omega & 0 \\
-\sin\omega & \cos\omega & 0 \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} X' \\ Y' \\ Z' \end{bmatrix}
$$
the camera coordinate system is converted to the pixel coordinate system according to the camera calibration parameters from step S1, using the formula

$$
z_{c}\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix}
f & 0 & u_{0} \\
0 & f & v_{0} \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} X'' \\ Y'' \\ Z'' \end{bmatrix}
$$

where f is the focal length of the camera and (u_0, v_0) is the reference origin of the pixel coordinate system;
after the above coordinate transformations, the corresponding coordinates of the target tower in the two-dimensional image F' are obtained as

$$
z_{c}\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix}
=
\begin{bmatrix}
f & 0 & u_{0} \\
0 & f & v_{0} \\
0 & 0 & 1
\end{bmatrix}
R_{\mathrm{cam}}\, R_{\mathrm{geo}}
\left(
\begin{bmatrix} t_{x1} \\ t_{y1} \\ t_{z1} \end{bmatrix}
-
\begin{bmatrix} c_{x1} \\ c_{y1} \\ c_{z1} \end{bmatrix}
\right)
$$

where R_geo is the rotation from the spatial rectangular geodetic system to the UAV geographic system, R_cam the rotation from the UAV geographic system to the camera system, (t_x1, t_y1, t_z1) are the coordinates of the actual tower, and (u', v') are the coordinates of the tower in the pixel coordinate system.
7. The method for positioning the defects of the power grid line component by fusing the three-dimensional point cloud and the two-dimensional image according to claim 5, characterized in that: in step S6, the target coordinates are inverse-transformed according to the transformation matrices used in the forward coordinate transformation, converting the pixel coordinate system back to the spatial rectangular geodetic coordinate system by

$$
\begin{bmatrix} t_{x1} \\ t_{y1} \\ t_{z1} \end{bmatrix}
=
R_{\mathrm{geo}}^{-1}\, R_{\mathrm{cam}}^{-1}
\begin{bmatrix}
f & 0 & u_{0} \\
0 & f & v_{0} \\
0 & 0 & 1
\end{bmatrix}^{-1}
z_{c}\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix}
+
\begin{bmatrix} c_{x1} \\ c_{y1} \\ c_{z1} \end{bmatrix}
$$

where R_geo and R_cam denote the rotations of the forward transformation.
CN202110211115.5A 2021-02-25 2021-02-25 Power grid line part defect positioning method integrating three-dimensional point cloud and two-dimensional image Active CN112767391B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110211115.5A CN112767391B (en) 2021-02-25 2021-02-25 Power grid line part defect positioning method integrating three-dimensional point cloud and two-dimensional image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110211115.5A CN112767391B (en) 2021-02-25 2021-02-25 Power grid line part defect positioning method integrating three-dimensional point cloud and two-dimensional image

Publications (2)

Publication Number Publication Date
CN112767391A CN112767391A (en) 2021-05-07
CN112767391B true CN112767391B (en) 2022-09-06

Family

ID=75704205

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110211115.5A Active CN112767391B (en) 2021-02-25 2021-02-25 Power grid line part defect positioning method integrating three-dimensional point cloud and two-dimensional image

Country Status (1)

Country Link
CN (1) CN112767391B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113014824B (en) * 2021-05-11 2021-09-24 北京远度互联科技有限公司 Video picture processing method and device and electronic equipment
CN113359815A (en) * 2021-05-19 2021-09-07 上海电机学院 Fan blade unmanned aerial vehicle autonomous obstacle avoidance inspection method and system based on RTK positioning
CN113239842A (en) * 2021-05-25 2021-08-10 三门峡崤云信息服务股份有限公司 Image recognition-based swan detection method and device
CN113191336B (en) * 2021-06-04 2022-01-14 绍兴建元电力集团有限公司 Electric power hidden danger identification method and system based on image identification
CN113591876B (en) * 2021-06-25 2023-08-08 东莞市鑫泰仪器仪表有限公司 Three-dimensional full-acoustic anomaly detection imaging method and device
CN113624133A (en) * 2021-08-05 2021-11-09 合肥阳光智维科技有限公司 Fault positioning method and device and electronic equipment
CN113813170B (en) * 2021-08-30 2023-11-24 中科尚易健康科技(北京)有限公司 Method for converting target points among cameras of multi-camera physiotherapy system
CN113516660B (en) * 2021-09-15 2021-12-07 江苏中车数字科技有限公司 Visual positioning and defect detection method and device suitable for train
CN113837124B (en) * 2021-09-28 2023-12-05 中国有色金属长沙勘察设计研究院有限公司 Automatic extraction method for geotechnical cloth inspection route of sludge discharging warehouse
CN113601536B (en) * 2021-10-11 2022-03-18 国网智能科技股份有限公司 Distribution network vehicle-mounted intelligent inspection robot system and method
WO2023108210A1 (en) * 2021-12-14 2023-06-22 Geobotica Survey Pty Ltd Infrastructure safety inspection system
CN114494806A (en) * 2021-12-17 2022-05-13 湖南国天电子科技有限公司 Target identification method, system, device and medium based on multivariate information fusion
CN114355378B (en) * 2022-03-08 2022-06-07 天津云圣智能科技有限责任公司 Autonomous navigation method and device for unmanned aerial vehicle, unmanned aerial vehicle and storage medium
CN114779679A (en) * 2022-03-23 2022-07-22 北京英智数联科技有限公司 Augmented reality inspection system and method
CN114862957B (en) * 2022-07-08 2022-09-27 西南交通大学 Subway car bottom positioning method based on 3D laser radar
CN115511807B (en) * 2022-09-16 2023-07-28 北京远舢智能科技有限公司 Method and device for determining position and depth of groove
CN115240093B (en) * 2022-09-22 2022-12-23 山东大学 Automatic power transmission channel inspection method based on visible light and laser radar point cloud fusion
CN115965579B (en) * 2022-11-14 2023-09-22 中国电力科学研究院有限公司 Substation inspection three-dimensional defect identification and positioning method and system
CN117495712A (en) * 2024-01-02 2024-02-02 天津天汽模志通车身科技有限公司 Method, system and equipment for enhancing generated data of vehicle body part quality model

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105333861A (en) * 2015-12-02 2016-02-17 中国测绘科学研究院 Pole and tower skew detection method and device based on laser-point cloud
CN106856003A (en) * 2016-12-31 2017-06-16 南京理工大学 The expansion bearing calibration of shaft-like workpiece side surface defects detection image
CN108389256A (en) * 2017-11-23 2018-08-10 千寻位置网络有限公司 Two three-dimensional interactive unmanned plane electric force pole tower inspection householder methods
CN109945853A (en) * 2019-03-26 2019-06-28 西安因诺航空科技有限公司 A kind of geographical coordinate positioning system and method based on 3D point cloud Aerial Images
CN110703800A (en) * 2019-10-29 2020-01-17 国网江苏省电力有限公司泰州供电分公司 Unmanned aerial vehicle-based intelligent identification method and system for electric power facilities
CN110727288A (en) * 2019-11-13 2020-01-24 昆明能讯科技有限责任公司 Point cloud-based accurate three-dimensional route planning method for power inspection

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140340427A1 (en) * 2012-01-18 2014-11-20 Logos Technologies Llc Method, device, and system for computing a spherical projection image based on two-dimensional images
US9558559B2 (en) * 2013-04-05 2017-01-31 Nokia Technologies Oy Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105333861A (en) * 2015-12-02 2016-02-17 中国测绘科学研究院 Pole and tower skew detection method and device based on laser-point cloud
CN106856003A (en) * 2016-12-31 2017-06-16 南京理工大学 The expansion bearing calibration of shaft-like workpiece side surface defects detection image
CN108389256A (en) * 2017-11-23 2018-08-10 千寻位置网络有限公司 Two three-dimensional interactive unmanned plane electric force pole tower inspection householder methods
CN109945853A (en) * 2019-03-26 2019-06-28 西安因诺航空科技有限公司 A kind of geographical coordinate positioning system and method based on 3D point cloud Aerial Images
CN110703800A (en) * 2019-10-29 2020-01-17 国网江苏省电力有限公司泰州供电分公司 Unmanned aerial vehicle-based intelligent identification method and system for electric power facilities
CN110727288A (en) * 2019-11-13 2020-01-24 昆明能讯科技有限责任公司 Point cloud-based accurate three-dimensional route planning method for power inspection

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Intelligent analysis of tree-barrier hazards along transmission lines based on UAV visible-light point clouds; Zeng Chen et al.; Electric Power Equipment Management; 2020-03-25 (No. 03); full text *
Safety inspection and intelligent diagnosis of power lines based on UAV multi-sensor data acquisition; Peng Xiangyang et al.; High Voltage Engineering; 2015-01-31 (No. 01); full text *
Research on multi-payload UAV inspection methods for transmission lines; Chen Keyu et al.; Electric Power Big Data; 2020-02-21 (No. 02); full text *

Also Published As

Publication number Publication date
CN112767391A (en) 2021-05-07

Similar Documents

Publication Publication Date Title
CN112767391B (en) Power grid line part defect positioning method integrating three-dimensional point cloud and two-dimensional image
Cheng et al. 3D building model reconstruction from multi-view aerial imagery and lidar data
WO2022078240A1 (en) Camera precise positioning method applied to electronic map, and processing terminal
CN106529538A (en) Method and device for positioning aircraft
CN115439424A (en) Intelligent detection method for aerial video image of unmanned aerial vehicle
CN113192646B (en) Target detection model construction method and device for monitoring distance between different targets
CN113192193A (en) High-voltage transmission line corridor three-dimensional reconstruction method based on Cesium three-dimensional earth frame
CN114004977A (en) Aerial photography data target positioning method and system based on deep learning
CN105550994A (en) Satellite image based unmanned aerial vehicle image rapid and approximate splicing method
EP4068210A1 (en) System and method for automated estimation of 3d orientation of a physical asset
CN113642463B (en) Heaven and earth multi-view alignment method for video monitoring and remote sensing images
CN116503705B (en) Fusion method of digital city multi-source data
CN115330594A (en) Target rapid identification and calibration method based on unmanned aerial vehicle oblique photography 3D model
TW202225730A (en) High-efficiency LiDAR object detection method based on deep learning through direct processing of 3D point data to obtain a concise and fast 3D feature to solve the shortcomings of complexity and time-consuming of the current voxel network model
CN116563377A (en) Mars rock measurement method based on hemispherical projection model
CN111462310B (en) Bolt defect space positioning method based on multi-view geometry
CN112767459A (en) Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion
CN116524382A (en) Bridge swivel closure accuracy inspection method system and equipment
CN115375762A (en) Three-dimensional reconstruction method for power line based on trinocular vision
Li et al. Low-cost 3D building modeling via image processing
Zhang et al. A vision-centric approach for static map element annotation
CN112766068A (en) Vehicle detection method and system based on gridding labeling
Tang et al. Automatic geo‐localization framework without GNSS data
CN217918421U (en) Intelligent identification monitoring devices suitable for ecological environment destruction problem of remote sensing image
KR102249380B1 (en) System for generating spatial information of CCTV device using reference image information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant