CN114998245A - Method for detecting galloping of power transmission line based on binocular distance measurement and image segmentation

Method for detecting galloping of power transmission line based on binocular distance measurement and image segmentation

Info

Publication number
CN114998245A
CN114998245A (application CN202210590232.1A)
Authority
CN
China
Prior art keywords
binocular, image, camera, galloping, convolution
Legal status
Pending
Application number
CN202210590232.1A
Other languages
Chinese (zh)
Inventor
刘东波
李正波
张永
吴纯泉
Current Assignee
Shanghai Beiken Intelligent Technology Co ltd
Original Assignee
Shanghai Beiken Intelligent Technology Co ltd
Application filed by Shanghai Beiken Intelligent Technology Co ltd
Priority to CN202210590232.1A
Publication of CN114998245A


Classifications

    • G06T 7/0004 Industrial image inspection
    • G01R 31/085 Locating faults in power transmission or distribution lines, e.g. overhead
    • G01R 31/088 Locating faults in cables, transmission lines, or networks: aspects of digital computing
    • G06N 3/08 Neural networks: learning methods
    • G06T 7/11 Region-based segmentation
    • G06V 10/26 Segmentation of patterns in the image field; detection of occlusion
    • G06V 10/44 Local feature extraction (edges, contours, corners); connectivity analysis
    • G06V 10/764 Image or video recognition using classification, e.g. of video objects
    • G06V 10/82 Image or video recognition using neural networks
    • Y04S 10/50 Systems or methods supporting power network operation or management


Abstract

The invention provides a method for detecting galloping of a power transmission line based on binocular distance measurement and image segmentation, relating to the technical field of image recognition. The method performs inference and recognition with a deep learning image segmentation model and transmits the recognition result to a binocular ranging system, which uses the result to decide whether a distance needs to be calculated. The recognition result and the calculated obstacle distance information are then sent to a mobile phone APP end and the equipment terminal. When a picture is viewed on the APP end, the outlines and distance information of the conductor and obstacles can be marked in different colors and compared with the original image, and whether the conductor gallops, and how violently, can be judged from the differences in the distances to fixed obstacles at different moments in the same scene.

Description

Method for detecting galloping of power transmission line based on binocular distance measurement and image segmentation
Technical Field
The invention relates to the technical field of image recognition, in particular to a method for detecting galloping of a power transmission line based on binocular distance measurement and image segmentation.
Background
The construction of the power transmission network in China is expanding rapidly and on a large scale, and accidents caused by galloping of power transmission lines have gradually become prominent. In light cases galloping causes flashover and tripping; in severe cases it damages metal clamps, breaks conductor strands and wires, loosens and drops tower bolts, and can even topple towers, easily causing large-area power failure of the transmission network and, directly or indirectly, huge economic losses to social life and production. However, the traditional mode of monitoring line galloping by field patrol inevitably consumes a large amount of manpower and material resources; more importantly, faults or hidden dangers cannot be found in time, so handling is delayed and the fault range may even expand.
With the development of computer technology and optoelectronic technology, machine vision technology has emerged. Binocular distance measurement uses two cameras to shoot the same scene, producing image parallax; an object ranging model built from this parallax then enables real-time calculation of scene distances.
Image segmentation, the process of dividing an image into several regions with similar properties, is an important research direction in computer vision. Related techniques such as scene object segmentation, human foreground/background segmentation, and three-dimensional reconstruction are widely applied in industries such as autonomous driving, augmented reality, and security monitoring. Deep-learning-based image segmentation operates at the pixel level; its segmentation accuracy is very high in external-damage-prevention monitoring of power transmission lines, and it is suitable for multi-target detection and segmentation in outdoor transmission line scenes.
Existing galloping detection techniques include sensor-based detection, satellite positioning detection, and the like. Sensor measurement accuracy is easily affected by temperature, observation jitter error is large, and anti-interference capability is weak, so sensors used to detect galloping of high-voltage overhead transmission lines suffer from large errors and poor ability to predict accident risk. Existing satellite-positioning methods for detecting the conductor galloping state also have many problems, including large positioning height error, low positioning precision, large calculation error, and long algorithm feedback time. In short, the existing galloping detection techniques for high-voltage overhead transmission lines all have shortcomings.
Disclosure of Invention
To solve the problems in the background art, the invention provides a method for detecting galloping of a power transmission line based on binocular distance measurement and image segmentation. The method performs inference and recognition with a deep learning image segmentation model and transmits the recognition result to a binocular ranging system, which uses the result to decide whether a distance needs to be calculated. The recognition result and the calculated obstacle distance information are then sent to a mobile phone APP end and the equipment terminal. When a picture is viewed on the APP end, the outlines and distance information of the conductor and obstacles can be marked in different colors and compared with the original image, and whether the conductor gallops, and how violently, can be judged from the differences in the distances to fixed obstacles at different moments in the same scene. The technical scheme of the invention is as follows:
In the method for detecting galloping of a power transmission line based on binocular distance measurement and image segmentation, working-condition image information such as conductor galloping, vibration, and windage yaw is collected in real time by an online monitoring device installed on the power transmission line, and three-dimensional operating attitude information of the line is constructed. The detection process uses a system comprising five modules: a hardware equipment end, AI image recognition, a binocular ranging system, a back-end service, and an APP end. The detection method comprises the following steps:
s1, installing image acquisition terminal equipment on the power transmission line, and calibrating an accurate binocular camera;
s2, the terminal equipment acquires the binocular picture and forwards the binocular picture to the AI server, and after the AI server acquires the picture, the AI server deduces whether a dangerous target exists in the picture and the position and the category information of the dangerous target through an image segmentation technology;
s3, the binocular ranging system acquires the position information of the dangerous target in the picture provided by the AI server, calculates the distance between the dangerous target and the lead and the distance between the dangerous target and the equipment according to a formula, and simultaneously can construct the three-dimensional attitude information of the dangerous target, the lead and the equipment;
s4, the early warning system utilizes the target category provided by the image segmentation technology and the danger level provided by the binocular distance measurement technology; meanwhile, the waving intensity of the wire is calculated according to the change condition in a period of time.
Further, the image segmentation technique in S2 uses the DeepLab V3+ semantic segmentation model, which is composed of an Encoder and a Decoder. The DCNN and ASPP parts of DeepLab are regarded as the Encoder; the part that fuses the high-level semantic features output by the ASPP with the low-level high-resolution information in the DCNN and upsamples the result to the original image size is regarded as the Decoder, and the upsampling mode used in it is bilinear interpolation.
Further, in S2, the convolution of the acquired image is calculated by hole (atrous) convolution: for each position i on the convolution output feature y and the corresponding convolution kernel w, the hole convolution of the input x is computed as follows:

y[i] = Σ_k x[i + r·k] · w[k],  k = 0, 1, ..., kernel_size - 1

where r is the hole rate, representing the sampling step of the convolution kernel over the input x of the convolution operation; k indexes the convolution kernel parameters, e.g. for a kernel of size 3, k = 0, 1, 2; kernel_size is the convolution kernel size.
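As a concrete check of the formula above, the following minimal NumPy sketch (an illustration, not part of the patent) implements the 1-D hole convolution and shows how the hole rate r widens the sampling of the input:

```python
import numpy as np

def atrous_conv1d(x, w, r):
    """1-D hole (atrous) convolution per the formula above:
    y[i] = sum_k x[i + r*k] * w[k]; r = 1 reduces to standard convolution."""
    K = len(w)
    span = r * (K - 1) + 1                     # effective kernel extent
    out_len = len(x) - span + 1
    return np.array([sum(x[i + r * k] * w[k] for k in range(K))
                     for i in range(out_len)])

x = np.arange(8, dtype=float)                  # toy input signal
w = np.array([1.0, 0.0, -1.0])                 # kernel size 3, so k = 0, 1, 2

y1 = atrous_conv1d(x, w, r=1)                  # standard convolution
y2 = atrous_conv1d(x, w, r=2)                  # samples every 2nd input value
```

With r = 2 the same 3-tap kernel spans 5 input samples, which is exactly the receptive-field enlargement the hole rate provides.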
Further, the distance between the measured point and the camera in the binocular ranging system in S3 is calculated as follows: z is the distance from the measured point P to the cameras; O_l and O_r are the optical centers of the left and right cameras; b is the distance between the two optical centers (the baseline); P_l and P_r are the imaging points of P on the left and right images; f is the camera focal length; x_l and x_r are the distances from the imaging points to the image midpoints. According to the triangle similarity theorem, formula (1) can be obtained:

(b - (x_l - x_r)) / b = (z - f) / z    (1)

Simplifying formula (1) and replacing (x_l - x_r) with d gives formula (2), the principle formula of binocular distance measurement:

z = f · b / d    (2)

where d is called the parallax (disparity); when the baseline length b, the camera focal length f, and the image-point parallax d are known, the actual distance from the measured point to the camera can be calculated by formula (2).
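Formula (2) can be sketched in a few lines of Python; the focal length, baseline, and pixel coordinates below are illustrative values, not taken from the patent:

```python
def stereo_depth(f_px, baseline_m, xl_px, xr_px):
    """Binocular ranging by formula (2): z = f * b / d, with d = x_l - x_r.

    f_px: focal length in pixels; baseline_m: distance b between the two
    optical centres; xl_px, xr_px: matched image-point x-coordinates.
    """
    d = xl_px - xr_px                # parallax (disparity)
    if d <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / d

# Illustrative values: f = 1200 px, b = 0.12 m, matched points at
# x_l = 500 px and x_r = 452 px, i.e. a disparity of 48 px
z = stereo_depth(1200.0, 0.12, 500.0, 452.0)   # about 3.0 m
```

Note the inverse relation: halving the disparity doubles the estimated distance, which is why calibration accuracy matters most for far-away targets.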
Further, the camera data calibrated in S1 include the camera's own intrinsic parameters and the extrinsic parameters describing the relative positions of the two cameras.
The invention has the beneficial effects that:
the method for detecting the galloping of the power transmission line based on the binocular distance measurement and image segmentation technology calculates the position and the category of a dangerous target through the image segmentation technology; calculating the actual distance between the dangerous target and the wire and the equipment by using a binocular ranging technology; calculating the galloping state and intensity of the lead, and monitoring the safety of the power transmission line; meanwhile, inspection personnel can obtain alarm reminding information in real time to master the field situation, and the monitoring efficiency is greatly improved.
Drawings
Fig. 1 is a flow chart of the working principle of the detection method of galloping of power transmission lines based on binocular ranging and image segmentation of the present invention.
FIG. 2 is a schematic diagram of the network structure of DeepLab V3+ in the embodiment of the present invention.
FIG. 3 is a schematic diagram of a hole convolution in an embodiment of the present invention.
Fig. 4 is a schematic diagram of a depth convolution in an embodiment of the present invention.
FIG. 5 is a diagram of the overall structure of the image semantic segmentation network Xception in an embodiment of the present invention.
Fig. 6 is a graph of the results of testing on the PASCAL VOC 2012 data set in an embodiment of the present invention.
FIG. 7 is a model diagram of the spatial relationship between the binocular left and right cameras and the measured point p in the embodiment of the present invention.
Fig. 8 is a schematic diagram of a binocular ranging system in an embodiment of the present invention.
Fig. 9 is a diagram illustrating a result of detecting an on-site tree obstacle in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
At present, for power transmission lines under strong wind conditions, icing conditions, and threats from over-tall trees, operators mostly rely on manual on-site confirmation of hidden danger points, organized routine and special inspections, and daily monitoring, which increases labor intensity and management difficulty. Cause investigation and accident evaluation after an accident lack supporting data; reconstructing events is difficult, slow, and costly. The detection method provided by the invention performs inference and recognition with a deep learning image segmentation model and transmits the recognition result to a binocular ranging system, which uses the result to decide whether a distance needs to be calculated. The recognition result and the calculated obstacle distance information are then sent to a mobile phone APP end and the equipment terminal. When a picture is viewed on the APP end, the outlines and distance information of the conductor and obstacles can be marked in different colors and compared with the original image, and whether the conductor gallops, and to what degree, can be judged from the differences in the distances to fixed obstacles at different moments in the same scene. The technical scheme of the invention is as follows:
referring to fig. 1, a detection method for power transmission line galloping based on binocular distance measurement and image segmentation is characterized in that working condition image information such as conductor galloping, vibration and windage yaw is collected in real time based on an online monitoring device installed on a power transmission line, three-dimensional operation attitude information of the power transmission line is constructed, a detection system comprising a hardware device end, an AI image recognition module, a binocular distance measurement system, a rear end service module and an APP end module is applied in a detection process, and the detection method comprises the following steps:
s1, installing image acquisition terminal equipment on the power transmission line, and calibrating an accurate binocular camera;
s2, the terminal equipment acquires the binocular picture and forwards the binocular picture to the AI server, and after the AI server acquires the picture, the AI server deduces whether a dangerous target exists in the picture and the position and the category information of the dangerous target through an image segmentation technology;
s3, the binocular ranging system acquires the position information of the dangerous target in the picture provided by the AI server, calculates the distance between the dangerous target and the lead and the distance between the dangerous target and the equipment according to a formula, and simultaneously can construct the three-dimensional attitude information of the dangerous target, the lead and the equipment;
s4, the early warning system utilizes the target category provided by the image segmentation technology and the danger level provided by the binocular distance measurement technology; meanwhile, the intensity of the galloping of the lead is calculated according to the change condition in a period of time.
The working mechanisms under different working conditions can be roughly classified into four types:
A. When the line load increases, the conductor temperature rises, and the inclination angle and sag change, the monitoring device automatically adjusts the acquisition period, starts the ranging function, and evaluates the safety distance of external-damage or hidden danger points;
B. When the line encounters strong wind and the frequency and amplitude of conductor galloping increase markedly, the monitoring device gradually shortens the acquisition period and, if necessary, acquires and records the galloping frequency and amplitude in real time; it simultaneously starts the image acquisition and binocular ranging functions, evaluates the safety distance of hidden danger points, and raises on-site and remote alarms when a set threshold is exceeded.
C. Under strong wind conditions, even if the line does not gallop, the windage yaw increases and can trigger the abnormal-conductor-movement mechanism, which works similarly to the mechanism in the galloping state.
D. Under icing conditions, icing changes the windward profile of the conductor and increases windage yaw, and galloping occurs with a certain probability, so the front-end device may start the abnormal-movement mechanism. The image acquisition period is adjusted according to changes in sag and galloping amplitude, the binocular ranging function is started, and the safety distance is judged.
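The four working mechanisms above can be sketched as a simple dispatch function; every numeric default here is a hypothetical assumption, not a value from the patent:

```python
def monitor_step(condition, galloping_amplitude_m, safety_distance_m,
                 base_period_s=600, alarm_threshold_m=5.0):
    """Illustrative dispatch of working mechanisms A-D.

    Returns (acquisition_period_s, ranging_enabled, alarm)."""
    if condition == "load_increase":                 # mechanism A
        return base_period_s // 2, True, False
    if condition in ("strong_wind", "icing"):        # mechanisms B, C, D
        # shorten the acquisition period as galloping amplitude grows
        period = max(60, int(base_period_s / (1 + galloping_amplitude_m)))
        alarm = safety_distance_m < alarm_threshold_m
        return period, True, alarm
    return base_period_s, False, False               # normal operation

state = monitor_step("strong_wind", galloping_amplitude_m=2.0,
                     safety_distance_m=3.0)          # (200, True, True)
```

The key behavior shared by mechanisms B, C, and D is the same: a shorter acquisition period, ranging enabled, and an alarm when the safety distance drops below the threshold.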
Referring to fig. 3 and 4, the image segmentation technique in S2 uses the DeepLab V3+ semantic segmentation model, composed of an Encoder and a Decoder: the DCNN and ASPP parts of DeepLab are regarded as the Encoder, and the part that fuses the high-level semantic features output by the ASPP with the low-level high-resolution information in the DCNN and upsamples the result to the original image size is regarded as the Decoder; the upsampling mode used is bilinear interpolation. Relative to transposed-convolution upsampling, bilinear interpolation can give better results at lower computation and memory overhead.
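The bilinear interpolation upsampling used by the Decoder can be sketched directly in NumPy (align-corners convention; this is an illustrative reimplementation, since real systems would call a framework routine):

```python
import numpy as np

def bilinear_upsample(img, out_h, out_w):
    """Bilinear interpolation resize of a 2-D map (align-corners convention)."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)       # fractional source rows
    xs = np.linspace(0, in_w - 1, out_w)       # fractional source columns
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

small = np.array([[0.0, 2.0],
                  [4.0, 6.0]])
big = bilinear_upsample(small, 3, 3)   # interpolated midpoints are averages
```

Unlike transposed convolution, this operation has no learned parameters, which is the source of the computation and memory saving mentioned above.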
Further, in S2 the convolution of the acquired image is calculated by hole convolution (Atrous Conv), a tool that effectively controls the resolution of the feature map output by the deep neural network and adjusts the receptive field of the convolution kernel to capture multi-scale information; hole convolution is an extension of standard convolution: for each position i on the convolution output feature y and the corresponding convolution kernel w, the hole convolution of the input x is computed as follows:

y[i] = Σ_k x[i + r·k] · w[k],  k = 0, 1, ..., kernel_size - 1

where r is the hole rate, representing the sampling step of the convolution kernel over the input x of the convolution operation; k indexes the convolution kernel parameters, e.g. for a kernel of size 3, k = 0, 1, 2; kernel_size is the convolution kernel size.
Referring to fig. 3, standard convolution is hole convolution with a hole rate of 1; the receptive field of the convolution kernel changes as the hole rate changes.
Depthwise separable convolution splits a standard convolution into a depthwise convolution plus a 1 × 1 convolution, greatly reducing computational complexity. Specifically, the depthwise convolution performs a convolution operation on each channel of the input feature independently, and a 1 × 1 convolution then fuses the depthwise outputs across channels; together they replace a standard convolution operation, first fusing spatial information and then fusing information between different channels. In fig. 4, (a) is the depthwise convolution, performing a convolution operation for each channel separately; (b) is the aforementioned 1 × 1 convolution used to fuse information between channels. (a) and (b) together constitute a depthwise separable convolution.
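The saving from splitting a standard convolution into depthwise plus 1 × 1 parts can be illustrated by counting multiplications; the feature-map size and channel counts below are arbitrary example values:

```python
def conv_costs(h, w, c_in, c_out, k=3):
    """Multiplication counts for one layer: standard k x k convolution
    versus depthwise (k x k per channel) plus pointwise (1 x 1) fusion."""
    standard = h * w * c_in * c_out * k * k
    depthwise = h * w * c_in * k * k       # step (a): per-channel spatial conv
    pointwise = h * w * c_in * c_out       # step (b): 1 x 1 cross-channel fusion
    return standard, depthwise + pointwise

std, sep = conv_costs(64, 64, 256, 256)    # arbitrary example sizes
ratio = std / sep                          # close to the k*k = 9-fold bound
```

For large channel counts the ratio approaches k², here roughly 8.7×, which is the "greatly reducing computational complexity" claimed above.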
DeepLabV3 as Encoder: DeepLabV3 uses hole convolution to extract features at arbitrary resolution from a deep neural network. The output stride denotes the ratio of the spatial resolution of the model input image to that of the output feature map (before global pooling or fully connected layers). For a classification task, the spatial resolution of the final feature map is usually 1/32 of the model input image, so the output stride is 32. For semantic segmentation, denser features can be extracted by removing the stride of the last one or two modules of the network and correspondingly using hole convolution (for example, hole rates of 2 and 4 in the last two modules to reach an output stride of 8), reducing the output stride of the whole model to 8 or 16. In addition, DeepLabV3 adds an atrous spatial pyramid pooling module (ASPP) with image-level features, which acquires multi-scale convolution features with different hole rates. The last feature map output before the logits block of the original DeepLabV3 is used as the output of the Encoder part in the Encoder-Decoder. Note that the feature map output by the Encoder contains 256 channels and rich semantic information. Depending on computational power, hole convolution can be used to extract features at any resolution of the input.
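The output-stride bookkeeping described above is simply the product of per-stage strides; a toy calculation (the five-stage layout is an assumption, not specified in the patent):

```python
def output_stride(stage_strides):
    """Output stride = product of the per-stage strides of the backbone."""
    s = 1
    for stride in stage_strides:
        s *= stride
    return s

# A classification backbone with five stride-2 stages:
os_cls = output_stride([2, 2, 2, 2, 2])       # 32
# Segmentation variant: the last two stages use stride 1 together with hole
# rates 2 and 4, keeping the receptive field while yielding denser features:
os_seg = output_stride([2, 2, 2, 1, 1])       # 8
```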
DeepLabV3's feature output, used as the Encoder, usually has an output stride of 16. In previous research work, this feature map was upsampled 16× by bilinear interpolation to restore the model input size, which can be regarded as a naive Decoder module. However, such a simple Decoder module may not recover object segmentation details well. A simple but efficient Decoder module is therefore proposed, as shown in the overall structure diagram of DeepLabV3+ in fig. 2: the Encoder output features are first upsampled 4× by bilinear interpolation, then concatenated along the channel dimension with a same-sized low-level (shallow) feature from the backbone in the Encoder (e.g., the output of the Conv2 module of ResNet-101). Before concatenation, the low-level feature is passed through a 1 × 1 convolution to reduce its number of channels, because low-level features usually contain many channels (e.g., 256 or 512); otherwise their influence could outweigh the semantically rich Encoder output (only 256 channels in this model) and make training more difficult. After concatenating the Encoder output features and the low-level features, several 3 × 3 convolutions refine the result, followed by another 4× bilinear interpolation upsampling. Experiments show that the best trade-off between speed and accuracy is achieved with an Encoder output stride of 16; with an output stride of 8 the model improves slightly, at a correspondingly higher computational cost.
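The shape bookkeeping of this Decoder path can be sketched with NumPy arrays; the nearest-neighbour `upsample` and random-weight `conv1x1` below are stand-ins for the real bilinear upsampling and learned convolutions, and all sizes are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes (channels-last): a 512 x 512 input, Encoder output stride 16
encoder_out = rng.standard_normal((32, 32, 256))    # ASPP output
low_level   = rng.standard_normal((128, 128, 256))  # backbone Conv2, stride 4

def upsample(x, s):
    """Nearest-neighbour stand-in for the Decoder's bilinear upsampling."""
    return x.repeat(s, axis=0).repeat(s, axis=1)

def conv1x1(x, c_out, seed=1):
    """1 x 1 convolution = a per-pixel linear map across channels."""
    w = np.random.default_rng(seed).standard_normal((x.shape[-1], c_out))
    return x @ w

up4     = upsample(encoder_out, 4)             # stride 16 -> stride 4
low_red = conv1x1(low_level, 48)               # shrink low-level channels
fused   = np.concatenate([up4, low_red], -1)   # channel-dimension splice
refined = conv1x1(fused, 256)                  # stand-in for 3x3 refinement
logits  = upsample(refined, 4)                 # stride 4 -> full resolution
```

The channel reduction (256 → 48 here, an assumed value) is what keeps the low-level branch from dominating the 256-channel Encoder output after concatenation.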
Referring to fig. 5 and 6, the Xception model is adopted for the semantic segmentation task, with the following changes: (1) for faster computation and efficient memory use, the entry flow network structure of Xception is not modified; (2) the max pooling operations are replaced by depthwise separable convolutions with stride, so that hole separable convolution can be applied to extract features at any resolution of the input (alternatively, max pooling with a hole rate can replace the original pooling operation); (3) extra batch normalization and ReLU operations are added after each 3 × 3 depthwise convolution, similar to the MobileNet design.
Referring to fig. 7 and 8, further, the distance between the measured point and the camera in the binocular ranging system in S3 is calculated as follows: z is the distance from the measured point P to the camera, and the optical centers of the left camera and the right camera respectively, b is the distance between the two optical centers, and the imaging points of the measured point P on the left image and the right image respectively, f is the focal length of the camera, and the distance between the imaging points and the middle point of the images; according to the triangle similarity theorem, formula (1) can be obtained:
(b − (x_l − x_r)) / b = (Z − f) / Z    (1)
Simplifying formula (1) and denoting the difference of the image-point coordinates by d gives formula (2), the principle formula of binocular range finding:
Z = f · b / d    (2)
where d is called the parallax (disparity). When the baseline length, the focal length of the camera and the image-point parallax are known, the actual distance from the measured point to the camera can be calculated by formula (2).
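Formula (2) reduces to a few lines of code. The numeric values below (focal length in pixels, baseline length, pixel coordinates) are made-up examples for illustration, not parameters from this system:

```python
def binocular_distance(f_px, b_m, x_left, x_right):
    """Depth from formula (2): Z = f * b / d, with d = x_left - x_right.
    f_px: focal length in pixels; b_m: baseline in metres;
    x_left / x_right: horizontal image coordinates of the same point
    in the left and right images."""
    d = x_left - x_right  # parallax (disparity), formula (2)'s d
    if d <= 0:
        raise ValueError("non-positive disparity: point cannot be ranged")
    return f_px * b_m / d

# e.g. f = 1400 px, baseline 0.12 m, disparity 652 - 628 = 24 px
print(binocular_distance(1400, 0.12, 652, 628))  # 7.0 (metres)
```

Note that depth is inversely proportional to disparity, so ranging precision degrades for distant targets, which is one reason accurate calibration matters.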
To guarantee the ranging precision of the binocular ranging system, calibration is a crucial step: the ranging accuracy of the system can be ensured only if the calibration is accurate. Images of the measured target are acquired by the camera and a camera model is established, which gives the correspondence between the measured target and the image; analyzing the acquired images then allows three-dimensional reconstruction to be completed. The most important part of three-dimensional reconstruction is establishing the calibrated camera model; once the model is established, the model parameters of the camera can be obtained by analyzing the model, completing the calibration. Camera calibration determines both the intrinsic parameters of the camera itself and the extrinsic parameters describing the relative camera positions. Commonly used calibration techniques include camera self-calibration methods, traditional camera calibration methods, and active vision camera calibration methods.
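The camera model that calibration estimates can be sketched as the standard pinhole projection: intrinsic parameters (matrix K) map camera coordinates to pixels, and extrinsic parameters (R, t) place the camera in the world. The numbers below are illustrative assumptions, not calibrated values from this system:

```python
import numpy as np

def project(P_world, K, R, t):
    """Project a 3-D world point through a calibrated pinhole camera.
    K: 3x3 intrinsic matrix (focal lengths, principal point);
    R, t: extrinsic rotation and translation (world -> camera frame)."""
    P_cam = R @ P_world + t  # extrinsic transform into the camera frame
    p = K @ P_cam            # intrinsic projection
    return p[:2] / p[2]      # perspective divide -> pixel coordinates

K = np.array([[1400.,    0., 640.],   # fx,  0, cx
              [   0., 1400., 360.],   #  0, fy, cy
              [   0.,    0.,   1.]])
R, t = np.eye(3), np.zeros(3)         # camera at the world origin

print(project(np.array([0.5, 0.25, 7.0]), K, R, t))  # [740. 410.]
```

Calibration is the inverse problem: given many such image points of a known target, solve for K (intrinsics) and R, t (extrinsics) so the model reproduces the observations.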
Referring to fig. 1 and 9, the system comprises five modules: the hardware device side, AI image recognition, the binocular ranging system, the back-end service, and the APP side. The device side uses multiple cameras to capture images from several viewing angles. Once a harmful target enters the monitoring range and is photographed, the images are transmitted to the back-end server over a wireless SIM card. After receiving the image data, the back end calls the AI image recognition service through its API, and a deep-learning image segmentation model performs inference. The recognition result is passed to the binocular ranging system, which decides from it whether a distance needs to be calculated. Finally, the recognition result and the computed obstacle distance information are sent to the mobile-phone APP and the device side. On the APP, the user can view the pictures with the contours and distance information of the conductor and the obstacles marked in different colors and compared with the original images. From the differences in the distance to a fixed obstacle in the same scene at different moments, the system can judge whether the conductor is galloping and how strong the galloping is; from the categories of the segmented objects, it can judge whether a moving object poses a danger to the conductor and the degree of that danger. Patrol personnel thus obtain alarm information in real time and grasp the field situation, greatly improving monitoring efficiency.
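The galloping judgment just described — comparing ranging results for the same fixed obstacle at successive moments — can be sketched as follows. The 0.5 m threshold and the sample distance series are illustrative assumptions only, not values taken from this disclosure:

```python
def galloping_intensity(distances, threshold_m=0.5):
    """distances: ranging results (metres) to the same fixed obstacle in
    the same scene at successive moments. The swing amplitude is taken as
    the peak-to-peak spread of the series; a spread above `threshold_m`
    is flagged as galloping. Threshold is an assumed illustrative value."""
    amplitude = max(distances) - min(distances)
    return amplitude, amplitude > threshold_m

# a steady conductor vs. one swinging by well over the assumed threshold
print(galloping_intensity([7.02, 7.00, 6.98, 7.01]))  # small amplitude, no alarm
print(galloping_intensity([7.8, 6.1, 7.6, 6.3]))      # large amplitude, alarm
```

In the deployed system this check would run on the back end over a sliding window of ranging results, and an alarm would be pushed to the APP side when the flag is raised.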

Claims (5)

1. A method for detecting galloping of a power transmission line based on binocular distance measurement and image segmentation, characterized in that an on-line monitoring device installed on the transmission line collects working-condition image information such as conductor galloping, vibration and windage yaw in real time to construct three-dimensional operating attitude information of the transmission line, and that the detection process uses a detection system comprising five modules: a hardware device side, AI image recognition, a binocular ranging system, a back-end service module and an APP side; the detection method comprises the following steps:
S1, installing image acquisition terminal equipment on the power transmission line and accurately calibrating the binocular camera;
S2, the terminal equipment acquires binocular pictures and forwards them to the AI server; after obtaining a picture, the AI server infers through an image segmentation technology whether a dangerous target exists in the picture, together with its position and category information;
S3, the binocular ranging system acquires the position information of the dangerous target in the picture provided by the AI server, calculates the distance between the dangerous target and the conductor and the distance between the dangerous target and the equipment according to a formula, and can simultaneously construct three-dimensional attitude information of the dangerous target, the conductor and the equipment;
S4, the early warning system judges the danger level using the target category provided by the image segmentation technology and the distance provided by the binocular ranging technology, and meanwhile calculates the galloping intensity of the conductor according to the change of that distance over a period of time.
2. The method for detecting galloping of power transmission lines based on binocular ranging and image segmentation according to claim 1, wherein the image segmentation technology in S2 uses the DeepLabV3+ semantic segmentation model, which consists of an Encoder and a Decoder: the DCNN and ASPP parts of DeepLab are regarded as the Encoder; the part that fuses the high-level semantic features output by ASPP with the low-level high-resolution information in the DCNN and upsamples the result to the size of the original image is regarded as the Decoder; and the upsampling mode used is bilinear interpolation.
3. The method for detecting galloping of power transmission lines based on binocular ranging and image segmentation according to claim 2, wherein the convolution of the collected image in S2 is computed by atrous (hole) convolution: for each position i of the convolution output feature y and the corresponding convolution kernel w, the atrous convolution over the input x is computed as follows:
y[i] = Σ_{k=0}^{kernel_size−1} x[i + r·k] · w[k]
where r is the atrous (hole) rate, representing the sampling step of the convolution kernel over the input x; k indexes the convolution kernel parameters, e.g. for a kernel size of 3, k = 0, 1, 2; and kernel_size denotes the convolution kernel size.
4. The method for detecting galloping of power transmission lines based on binocular ranging and image segmentation according to claim 1, wherein the distance between the measured point and the camera in the binocular ranging system in S3 is calculated as follows: Z is the distance from the measured point P to the camera baseline; O_l and O_r are the optical centers of the left and right cameras respectively; b is the distance between the two optical centers (the baseline); P_l and P_r are the imaging points of the measured point P on the left and right images respectively; f is the focal length of the camera; and x_l and x_r are the distances from the imaging points to the respective image centers. According to the triangle similarity theorem, formula (1) can be obtained:
(b − (x_l − x_r)) / b = (Z − f) / Z    (1)
Simplifying formula (1) and denoting the difference of the image-point coordinates by d gives formula (2), the principle formula of binocular distance measurement:
Z = f · b / d    (2)
where d is called the parallax (disparity); when the baseline length, the focal length of the camera and the image-point parallax are known, the actual distance from the measured point to the camera can be calculated by formula (2).
5. The binocular range finding and image segmentation based detection method for galloping of power transmission lines as claimed in claim 4, wherein the camera data calibrated in S1 includes intrinsic parameters of itself and extrinsic parameters of relative camera positions.
CN202210590232.1A 2022-05-26 2022-05-26 Method for detecting galloping of power transmission line based on binocular distance measurement and image segmentation Pending CN114998245A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210590232.1A CN114998245A (en) 2022-05-26 2022-05-26 Method for detecting galloping of power transmission line based on binocular distance measurement and image segmentation


Publications (1)

Publication Number Publication Date
CN114998245A true CN114998245A (en) 2022-09-02

Family

ID=83029343

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210590232.1A Pending CN114998245A (en) 2022-05-26 2022-05-26 Method for detecting galloping of power transmission line based on binocular distance measurement and image segmentation

Country Status (1)

Country Link
CN (1) CN114998245A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115620496A (en) * 2022-09-30 2023-01-17 北京国电通网络技术有限公司 Fault alarm method, device, equipment and medium applied to power transmission line
CN115620496B (en) * 2022-09-30 2024-04-12 北京国电通网络技术有限公司 Fault alarm method, device, equipment and medium applied to power transmission line
CN116046076A (en) * 2023-03-09 2023-05-02 合肥工业大学 Online detection system for power transmission line galloping based on machine vision technology

Similar Documents

Publication Publication Date Title
CN114998245A (en) Method for detecting galloping of power transmission line based on binocular distance measurement and image segmentation
CN108109385B (en) System and method for identifying and judging dangerous behaviors of power transmission line anti-external damage vehicle
CN114419825B (en) High-speed rail perimeter intrusion monitoring device and method based on millimeter wave radar and camera
Yang et al. Deep learning‐based bolt loosening detection for wind turbine towers
CN113469278B (en) Strong weather target identification method based on deep convolutional neural network
CN115034324B (en) Multi-sensor fusion perception efficiency enhancement method
CN112528979A (en) Transformer substation inspection robot obstacle distinguishing method and system
CN115620239B (en) Point cloud and video combined power transmission line online monitoring method and system
CN115809986A (en) Multi-sensor fusion type intelligent external damage detection method for power transmission corridor
CN114267155A (en) Geological disaster monitoring and early warning system based on video recognition technology
CN113947555A (en) Infrared and visible light fused visual system and method based on deep neural network
CN115797408A (en) Target tracking method and device fusing multi-view image and three-dimensional point cloud
CN105516661B (en) Principal and subordinate's target monitoring method that fisheye camera is combined with ptz camera
CN111582069B (en) Track obstacle zero sample classification method and device for air-based monitoring platform
CN115995058A (en) Power transmission channel safety on-line monitoring method based on artificial intelligence
CN115984672A (en) Method and device for detecting small target in high-definition image based on deep learning
CN111898671B (en) Target identification method and system based on fusion of laser imager and color camera codes
CN112926415A (en) Pedestrian avoiding system and pedestrian monitoring method
CN115083209B (en) Vehicle-road cooperation method and system based on visual positioning
CN114779794B (en) Street obstacle identification method based on unmanned patrol vehicle system in typhoon scene
CN116129553A (en) Fusion sensing method and system based on multi-source vehicle-mounted equipment
CN115909285A (en) Radar and video signal fused vehicle tracking method
CN115984768A (en) Multi-target pedestrian real-time detection positioning method based on fixed monocular camera
CN114552601A (en) Binocular vision power transmission line oscillation monitoring and three-dimensional reconstruction method
CN114912536A (en) Target identification method based on radar and double photoelectricity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination