CN113409252A - Obstacle detection method for overhead transmission line inspection robot - Google Patents


Info

Publication number
CN113409252A
CN113409252A
Authority
CN
China
Prior art keywords
obstacle
robot
ground wire
image
obstacles
Prior art date
Legal status
Granted
Application number
CN202110601511.9A
Other languages
Chinese (zh)
Other versions
CN113409252B
Inventor
蒋轩
张斌
黄国方
薛栋良
侯建国
温祥青
Current Assignee
NARI Group Corp
Nari Technology Co Ltd
State Grid Electric Power Research Institute
Original Assignee
NARI Group Corp
Nari Technology Co Ltd
State Grid Electric Power Research Institute
Priority date
Filing date
Publication date
Application filed by NARI Group Corp, Nari Technology Co Ltd, State Grid Electric Power Research Institute
Priority claimed from CN202110601511.9A
Publication of CN113409252A
Application granted
Publication of CN113409252B
Status: Active

Classifications

    • G06T 7/0004 Industrial image inspection
    • G06F 18/23 Pattern recognition: clustering techniques
    • G06F 18/25 Pattern recognition: fusion techniques
    • G06N 3/045 Neural networks: combinations of networks
    • G06N 3/08 Neural networks: learning methods
    • G06T 7/11 Region-based segmentation
    • G06T 7/13 Edge detection
    • G06T 7/194 Foreground-background segmentation
    • G06T 2207/10016 Image acquisition modality: video; image sequence
    • Y04S 10/50 Systems or methods supporting power network operation or management


Abstract

The invention discloses an obstacle detection method for an overhead transmission line inspection robot. Obstacles on the ground wire along which the robot travels are detected in real time by a video acquisition sensor, combining a visual feature analysis algorithm with a deep neural network model: the presence of an obstacle on the ground wire ahead of the robot is first pre-judged from contour features, a convolutional-neural-network target detection model then determines the obstacle's type and position, and finally the distance to the obstacle is measured, so that the robot can autonomously cross the obstacle with the appropriate obstacle-crossing action. The obstacle pre-judgment module and its contour-based judgment algorithm detect abrupt changes in the ground wire edge with high accuracy, are robust to abnormal obstacles of unknown type, and avoid running deep-neural-network inference continuously, which markedly reduces computing resource consumption and improves obstacle recognition efficiency.

Description

Obstacle detection method for overhead transmission line inspection robot
Technical Field
The invention relates to the technical field of machine vision, and in particular to an obstacle detection method for an overhead transmission line inspection robot.
Background
As power grids become more intelligent, intelligent inspection of transmission lines has become an indispensable part of grid operation. Overhead transmission line inspection robots play an important role here: they can replace manual inspection, are unaffected by weather and environment, and can operate around the clock with long endurance. For continuous autonomous inspection, the robot must recognize obstacles on the line ahead in advance and execute the corresponding obstacle-crossing action according to their type and position. Conventional obstacles on the ground wire include line hardware such as vibration dampers and wire clamps; reliable obstacle detection is vital to the robot's safe operation, since a wrong judgment creates a serious safety hazard. In addition, abnormal obstacles such as plastic films, kites and balloons must be identified accurately so that the robot can be warned to decelerate and stop in real time.
Current line obstacle detection methods fall into two families: traditional image processing based on feature extraction followed by classification, and deep learning based on convolutional neural networks. Feature-based methods include structure-constraint methods built on the geometric characteristics of obstacles, and two-stage detectors such as an Adaboost cascade classifier on Haar-like features or an SVM classifier on HOG features. Deep-learning methods mostly train a classical target detection model (YOLO, SSD, Faster R-CNN and the like) on a manually labeled line obstacle dataset. Because the color and texture features of target obstacles are weak and obstacles of the same class vary widely in shape, feature-extraction methods have low recognition accuracy and struggle with abnormal obstacles of variable shape; deep-learning models can only be trained on a dataset covering known obstacle types, have low robustness to abnormal obstacles of unknown type, and are computationally heavy, making real-time operation hard to guarantee.
Document 1, "A high-voltage line obstacle identification method, device and inspection robot" (Chinese patent application No. 201811133352.9), discloses a high-voltage line obstacle identification method that recognizes the type and position of an obstacle from its geometric primitive features and line structure information. The method only suits inspection images with a simple background, and can identify line hardware but not abnormal obstacles that may be present on the ground wire.
Document 2, "Inspection robot obstacle crossing method, system, storage medium and computing device" (Chinese patent application No. 201911234036.5), discloses an obstacle identification and detection method based on a lightweight YOLOv3 model. It trains a convolutional-neural-network detection model on a dataset expanded with a generative adversarial network. Although lightweight, the model still has a large parameter count, depends on a high-performance computing platform, and consumes considerable power over long runs; it is also strongly affected by the richness of the training set, so robustness is hard to guarantee.
Disclosure of Invention
Purpose of the invention: the invention provides an obstacle detection method for an overhead transmission line inspection robot that combines a visual feature analysis algorithm with a deep neural network model, addressing the prior art's problems of limited recognizable obstacle types, inability to identify abnormal obstacles, complex network models, long computation times and low robustness.
Technical scheme: an obstacle detection method for an overhead transmission line inspection robot comprises the following steps:
collecting video of the ground wire along which the robot travels;
pre-judging from the video, based on the contour features of the ground wire, whether an obstacle is present on it;
if an obstacle is present, inputting the image pre-judged to contain an obstacle into a trained convolutional neural network model to obtain the obstacle type, the obstacle target frame position and the target frame confidence;
and calculating the distance between the obstacle and the robot from the obstacle position, then outputting the obstacle type and this distance as the detection result.
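The four steps above form a gating pipeline: a cheap contour pre-judgment runs on every frame, and the neural network runs only when it fires. A minimal sketch of that control flow, where every function name and dict field below is a hypothetical placeholder rather than the patent's implementation:

```python
def detect_obstacle(frame, depth_pixels):
    """Gating pipeline: cheap pre-judgment first, CNN only when needed."""
    if not prejudge_obstacle(frame):           # contour-based pre-judgment
        return None                            # most frames exit here cheaply
    det = cnn_detect(frame)                    # type, box, confidence
    if det is None:                            # nothing the model knows about
        return {"alarm": "unknown obstacle, emergency stop"}
    dist = estimate_distance(det["box"], depth_pixels)
    return {"type": det["type"], "distance_m": dist}

# Stub stages standing in for the real modules:
def prejudge_obstacle(frame):
    return frame.get("contour_mutation", False)

def cnn_detect(frame):
    return frame.get("detection")

def estimate_distance(box, depth_pixels):
    return min(depth_pixels)
```

With these stubs, an obstacle-free frame costs one dictionary lookup, which is the efficiency argument the abstract makes.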
In a further embodiment, pre-judging whether an obstacle is present on the ground wire comprises:
locating the region of the image containing the ground wire; segmenting that region and extracting the ground wire contour;
if the ground wire contour shows an abrupt change, judging that an obstacle is present; likewise, if the contour is incomplete, judging that an obstacle is present.
In a further embodiment, locating the region of the image containing the ground wire comprises: extracting a single original frame containing the ground wire from the video; applying Gaussian filtering and grayscale conversion to the extracted frame to obtain a gray image; performing edge detection on the gray image to obtain an edge image; extracting straight lines from the edge image with a probabilistic Hough transform line detection algorithm; and clustering all detected lines by slope to obtain an aggregated group of parallel lines that locates the ground wire region.
In a further embodiment, segmenting the region where the ground wire is located comprises:
segmenting foreground from background in the gray image with the OTSU maximum between-class variance method. With the included angle α between the video acquisition sensor's optical axis and the vertical equal to 45 degrees, the sensor points upward toward the sky at 45 degrees; to prevent birds, aircraft and other airborne interferences in the captured picture from being classified as foreground, background pixels are further filtered with a pixel-depth threshold in the depth image after binarization, producing a binary image that keeps only the foreground. A morphological opening is then applied to the binary image to separate possibly adhering regions and remove residual noise.
In a further embodiment, judging an abrupt change in the ground wire contour comprises:
generating the convex hull of the ground wire region contour and detecting its convexity defects; if a convexity defect exceeds a set threshold, the contour is judged to have changed abruptly. Because an obstacle on the ground wire is attached to the wire, the hull of an unobstructed wire is convex, while an obstacle produces obvious convexity defects in the hull detection result.
In addition, when the ground wire is covered by a white plastic film or a transparent object, its shape may be incomplete, appearing as two separated segments, and the ground wire region cannot be segmented. Morphological analysis is therefore applied to the extracted contour, and if the ground wire is incomplete an obstacle is judged to be present.
In a further embodiment, the convolutional neural network model comprises grouped convolution layers, fusion convolution modules and batch normalization layers in alternation; there are 8 grouped convolution layers and 3 fusion convolution modules, and each fusion convolution module contains 4 grouped convolution layers.
In a further embodiment, the convolutional neural network model is trained as follows:
replacing the model's last two grouped convolution layers with a global average pooling layer, a fully connected layer and a softmax layer to form a classification model; training this classification model on a public dataset and, after training, freezing the feature extraction layers' parameters as the initial parameters of the model's backbone;
then fine-tuning the model on the labeled line obstacle dataset.
In a further embodiment, once an obstacle is judged to exist in the direction of motion, the captured image is input into the convolutional neural network model. From an N-channel feature map input, the fusion convolution module produces two feature map branches of N/2 channels each through a 1×1 convolution kernel; one branch applies one layer of 3×3 grouped convolution, the other two layers; the branch outputs are concatenated back into an N-channel feature map. In this implementation the image is scaled to 384×256 pixels and divided into a 6×4 grid, with each cell responsible for detecting two obstacles, so the neural-network obstacle detection model outputs a 6×4×22 matrix in which each cell predicts 2 targets; each target is represented by 11 parameters: target frame confidence, center point coordinates, width and height, and the probabilities of belonging to each of 6 classes.
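The 6×4×22 output layout can be decoded cell by cell. A sketch in plain Python, where the per-target value ordering (confidence, center, size, six class probabilities) and the English class names are assumptions consistent with the description rather than quantities fixed by it:

```python
CLASSES = ["damper", "clamp", "bridge_fitting", "balloon", "plastic_bag", "kite"]

def decode_cell(cell_vec):
    """Split one grid cell's 22 values into two 11-value target predictions.
    Assumed layout per target: [conf, cx, cy, w, h, p0..p5]."""
    dets = []
    for t in range(2):
        v = cell_vec[t * 11:(t + 1) * 11]
        conf, cx, cy, w, h = v[:5]
        probs = v[5:]
        cls = max(range(6), key=lambda c: probs[c])  # argmax over class probs
        dets.append({"conf": conf, "box": (cx, cy, w, h),
                     "class": CLASSES[cls], "p": probs[cls]})
    return dets

def decode_output(grid, conf_thresh=0.5):
    """grid: 6x4 nested list of 22-value cells -> detections above threshold."""
    out = []
    for row in grid:
        for cell in row:
            out.extend(d for d in decode_cell(cell) if d["conf"] >= conf_thresh)
    return out
```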
In a further embodiment, the obstacle types comprise 3 conventional line fittings (vibration dampers, wire clamps and bridge fittings) and 3 known types of abnormal obstacle (balloons, plastic bags and kites).
The loss function of the obstacle detection model is:

$$L = \delta_{obj}\sum_{i}\sum_{j}\mathbb{1}_{ij}^{obj}\Big[\sigma_x^2+\sigma_y^2+\big(\sqrt{w_i}-\sqrt{\hat w_i}\big)^2+\big(\sqrt{h_i}-\sqrt{\hat h_i}\big)^2+\big(CF_i-\widehat{CF}_i\big)^2+\sum_{c\in classes}\big(p_i(c)-\hat p_i(c)\big)^2\Big]+\delta_{noobj}\sum_{i}\sum_{j}\mathbb{1}_{ij}^{noobj}\big(CF_i-\widehat{CF}_i\big)^2$$

where $\mathbb{1}_{ij}^{obj}$ indicates whether a target is present in the $j$-th candidate box of the $i$-th grid cell (and $\mathbb{1}_{ij}^{noobj}$ the opposite); $\sigma_x$ and $\sigma_y$ are the coordinate errors of the predicted target frame's center point; $w_i$ is the predicted target frame width and $\hat w_i$ the actual width; $h_i$ is the predicted height and $\hat h_i$ the actual height; $CF_i$ is the predicted target frame confidence and $\widehat{CF}_i$ the actual target frame confidence; $p_i(c)$ is the predicted probability that the target belongs to class $c$ and $\hat p_i(c)$ the actual probability; $classes$ is the set of all target classes; and $\delta_{obj}$ and $\delta_{noobj}$ are the penalty factors applied when a target is and is not present, respectively.
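A minimal numeric sketch of one grid cell's contribution to a YOLO-style loss built from the symbols defined above. The square roots on width and height and the weights d_obj=5.0, d_noobj=0.5 follow the common YOLO convention and are assumptions here, not values stated in the text:

```python
import math

def cell_loss(pred, actual, d_obj=5.0, d_noobj=0.5):
    """Squared-error terms for one predicted box vs. its ground truth.
    pred/actual: dicts with cx, cy, w, h, conf and 6 class probs.
    d_obj/d_noobj and the sqrt on w,h are assumed (YOLO convention)."""
    if actual is None:  # no target in this cell: only confidence is penalised
        return d_noobj * pred["conf"] ** 2
    loc = (pred["cx"] - actual["cx"]) ** 2 + (pred["cy"] - actual["cy"]) ** 2
    size = (math.sqrt(pred["w"]) - math.sqrt(actual["w"])) ** 2 \
         + (math.sqrt(pred["h"]) - math.sqrt(actual["h"])) ** 2
    conf = (pred["conf"] - actual["conf"]) ** 2
    cls = sum((p - q) ** 2 for p, q in zip(pred["probs"], actual["probs"]))
    return d_obj * (loc + size + conf + cls)
```

A perfect prediction contributes zero loss; an empty cell is penalised only through its confidence output.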
In a further embodiment, estimating the distance between the obstacle and the robot comprises:
taking the frame containing the obstacle in the image as the target frame, and clustering the pixels inside it by pixel depth to achieve pixel-level segmentation of the obstacle region;
removing the pixels belonging to the ground wire region, then taking the minimum of the remaining pixel depths as the distance from the obstacle to the video acquisition sensor;
and converting that distance into the distance from the obstacle to the robot's front walking wheels.
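The last two ranging steps can be sketched directly. Here pixels arrive pre-labeled as ground wire or not (the depth-clustering step is omitted), and the fixed sensor-to-wheel offset of 0.35 m is a made-up mounting parameter for illustration:

```python
def obstacle_distance(box_pixels, sensor_to_wheel=0.35):
    """box_pixels: list of (depth_m, is_ground_wire) pairs inside the target
    frame. Ground-wire pixels are discarded, the minimum remaining depth is
    the sensor-to-obstacle distance, and a fixed mounting offset (assumed
    value) converts it to the distance from the front walking wheel."""
    obstacle = [d for d, is_wire in box_pixels if not is_wire]
    if not obstacle:
        return None  # box contained only ground-wire pixels
    sensor_dist = min(obstacle)
    return max(sensor_dist - sensor_to_wheel, 0.0)
```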
Beneficial effects: compared with the prior art, the invention has the following advantages:
(1) the obstacle pre-judgment module efficiently rules out obstacle-free scenes, avoiding continuous deep-neural-network inference, markedly reducing computing resource consumption and improving obstacle recognition efficiency;
(2) the obstacle judgment algorithm based on ground wire contour features detects abrupt changes in the ground wire edge with high accuracy and is robust in detecting abnormal obstacles of unknown type;
(3) the obstacle detection model consists of a small number of grouped convolution layers and fusion convolution modules, which preserves detection accuracy while effectively reducing the network's parameter count and improving real-time performance.
Drawings
Fig. 1 is a complete flow chart of the obstacle detection method of the present invention.
Fig. 2 is a flowchart of an algorithm for determining the presence or absence of an obstacle according to the present invention.
Fig. 3 is a diagram of an obstacle detection neural network model of the present invention.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
For continuous autonomous inspection, the robot must recognize obstacles on the line ahead in advance and execute the corresponding obstacle-crossing action according to their type and position. Conventional obstacles on the ground wire include line hardware such as vibration dampers and wire clamps, and reliable obstacle detection is crucial to the robot's safe operation: a wrong judgment creates a serious safety hazard. The applicant found that existing inspection robots using visual recognition can identify only a limited range of obstacle types, and that their judgment and computation are too slow, so robustness to abnormal obstacles of unknown type is low and real-time performance is hard to guarantee.
The workflow of the obstacle detection method for an overhead transmission line inspection robot, shown in Fig. 1, is as follows:
step 1: the left and right sides fixed mounting video acquisition sensor at robot body, and the contained angle of video acquisition sensor optical axis and vertical direction is alpha, video acquisition sensor shoots and gathers the video unanimous with robot direction of motion, sends to and has or not the barrier to judge the module in advance and be used for the barrier to detect.
Step 2: pre-judge from the video collected in step 1 whether an obstacle is present: locate the region containing the ground wire, extract the contour of that region, and judge whether an obstacle is on the wire by detecting contour anomalies and checking contour completeness.
Step 3: once an obstacle is judged to exist in the direction of motion, scale the image judged to contain an obstacle to the specified size and input it into the trained convolutional neural network model, which predicts the obstacle's type, its target frame position and the target frame confidence.
Step 4: range and warn on the obstacle using the type, position and target frame confidence predicted by the detection model: calculate the distance between the obstacle and the sensor and return the class and distance to the robot control node; if an obstacle of unknown type appears, return a warning signal prompting the robot to brake immediately.
In practice, line hardware such as vibration dampers and wire clamps exists only near the tower head, so for most of its travel the robot's path on the ground wire is unobstructed. If the pre-judgment module does not detect an obstacle ahead, the detection flow for the current frame ends and the next frame is processed; since most frames contain no obstacle, they are quickly ruled out in the pre-judgment module, significantly reducing computing resource consumption.
Obstacles on the line include known line fittings (vibration dampers, wire clamps, bridge fittings), known abnormal obstacles (kites, balloons, plastic films) and abnormal obstacles of unknown type (abnormal protrusions or depressions). If the pre-judgment module finds an obstacle on the ground wire ahead, the current frame is fed to the obstacle detection module, whose neural network predicts the obstacle type, frame position and target frame confidence. If conventional line hardware is detected, the frame position is passed to the ranging module, the obstacle type and its distance to the robot are output, and the robot crosses the obstacle autonomously with the action appropriate to that hardware type. If the obstacle is a known abnormal type, the ranging module runs and then the type, distance and alarm information are output; the robot decelerates, stops at a specified distance from the obstacle, captures a high-definition image of it, returns the image to the field worker's terminal device, and is controlled manually thereafter. If no known obstacle type is detected, alarm information is output directly, instructing the robot to brake immediately, and an image is returned to the field worker for manual control.
As shown in fig. 2, the pre-judgment module works by extracting the ground wire contour and analyzing its edge morphology. The pre-judgment method of step 2 is further described with reference to fig. 2:
step 2.1: extracting a single-frame original image according to an original video acquired by a video acquisition sensor; the center of the extracted single-frame original image comprises a core area of a section of ground wire; because the video acquisition sensor and the robot body are fixedly connected, the angle alpha between the optical axis in the video acquisition sensor and the vertical direction is a fixed included angle range, and the relative position of the robot body and the ground wire where the robot body walks is also fixed, so that the relative position of the ground wire is always near the central axis of the whole image in a single-frame image acquired by the video acquisition sensor; even if the robot swings within a certain angle range, the relative position of the ground wire is still within a small controllable range from left to right of the central axis of the image; therefore, cutting out 416 × 416 core areas from the 1920 × 1080 pixel original image as an input of a module for judging whether the obstacles exist or not; the extraction of the core region effectively reduces the size of the image to be processed and retains more image details than direct scaling; meanwhile, the distance range of the identification area is limited, the influence of the remote line image on the current state judgment is reduced, and meanwhile, the model calculation time is reduced.
Step 2.2: apply Gaussian filtering to the extracted image for smoothing, then convert the smoothed image to grayscale.
Step 2.3: extract image edges with an edge detection algorithm; the invention uses the Canny edge detector.
Step 2.4: locate the position and region of the ground wire. Localization is based on parallel line group detection: the ground wire's edges form two long, nearly parallel straight lines close to each other, clearly distinct from other tower parts and the background, and this serves as the main basis for judging the wire's position. Straight lines in the edge map are detected by the progressive probabilistic Hough transform, then all detected lines are clustered by slope; the resulting aggregated group of parallel lines locates the ground wire.
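The slope-clustering half of step 2.4 can be sketched without any vision library. Segments here are plain (x1, y1, x2, y2) tuples; in practice they would come from `cv2.HoughLinesP` on the Canny edge map. The greedy single-pass grouping and the 5-degree tolerance are simplifying assumptions (angle wraparound near 0°/180° is ignored):

```python
import math

def cluster_by_slope(segments, angle_tol_deg=5.0):
    """Greedily group line segments (x1, y1, x2, y2) by orientation and
    return the largest group of near-parallel lines, taken as the
    candidate ground-wire edges."""
    groups = []  # each entry: [representative_angle_deg, [segments]]
    for seg in segments:
        x1, y1, x2, y2 = seg
        ang = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
        for g in groups:
            if abs(g[0] - ang) < angle_tol_deg:
                g[1].append(seg)
                break
        else:
            groups.append([ang, [seg]])
    return max(groups, key=lambda g: len(g[1]))[1] if groups else []
```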
Step 2.5: segment the region where the located ground wire lies. Preferably, this further comprises:
segmenting foreground from background in the gray image with the OTSU maximum between-class variance method. With the included angle α between the sensor's optical axis and the vertical at 45 degrees, the sensor points upward toward the sky at 45 degrees; to prevent birds, aircraft and other airborne interferences from being classified as foreground, background pixels are further filtered with a pixel-depth threshold in the depth image after binarization, producing a binary image that keeps only the foreground. A morphological opening then separates possibly adhering regions and removes residual noise. Finally the contour of each region is extracted, and the ground wire region is determined from the wire position confirmed in step 2.4.
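OTSU thresholding plus the depth filter of step 2.5 can be sketched in plain Python on flat pixel lists (in practice `cv2.threshold` with `THRESH_OTSU` would be used). The "dark side of the threshold = foreground wire" polarity is an assumption for a wire seen against bright sky:

```python
def otsu_threshold(pixels):
    """OTSU's threshold on 8-bit gray values: pick t maximising the
    between-class variance w0 * w1 * (mu0 - mu1)^2."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    w0, sum0 = 0, 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def foreground_mask(gray, depth, depth_max):
    """Keep pixels on the dark side of the OTSU threshold AND closer than
    depth_max, filtering out bright sky and far airborne objects."""
    t = otsu_threshold(gray)
    return [g <= t and d <= depth_max for g, d in zip(gray, depth)]
```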
Step 2.6: extracting the contour of the segmented ground wire region. Detecting whether the ground wire contour has an obvious abrupt change further comprises:
firstly generating the convex hull of the ground wire region contour, and then detecting convexity defects, where a convexity defect exceeding a threshold constitutes an obvious abrupt change of the contour. Because the overall outline of a bare ground wire is convex, an obstacle attached to the ground wire causes the convex hull detection result to contain obvious convexity defects.
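A simplified sketch of this convexity-defect test: it builds the convex hull with Andrew's monotone chain and reports the deepest distance from a contour point to the hull boundary. The patent likely relies on a library routine (e.g. OpenCV's convexity defects), so this self-contained version is an illustrative stand-in:

```python
import numpy as np

def _cross(o, a, b):
    """z-component of (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; points is an (N, 2) array-like."""
    pts = sorted(map(tuple, points))
    def half_hull(sequence):
        chain = []
        for p in sequence:
            while len(chain) >= 2 and _cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain
    lower, upper = half_hull(pts), half_hull(reversed(pts))
    return np.array(lower[:-1] + upper[:-1], dtype=float)

def max_convexity_defect(contour):
    """Depth of the deepest convexity defect: the largest distance from a
    contour point to the nearest convex-hull edge. A bare, smooth wire
    outline gives ~0; an attached obstacle notches it and the depth jumps."""
    hull = convex_hull(contour)
    depth = 0.0
    for p in np.asarray(contour, dtype=float):
        dists = []
        for a, b in zip(hull, np.roll(hull, -1, axis=0)):
            ab, ap = b - a, p - a
            # distance from p to the segment a-b
            t = np.clip(np.dot(ap, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
            dists.append(np.linalg.norm(ap - t * ab))
        depth = max(depth, min(dists))
    return depth
```

Comparing the returned depth against a threshold then implements the "convexity defect exceeding a threshold" rule.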
Step 2.7: detecting, from the extracted ground wire contour, whether the contour has an obvious abrupt change, and if so, judging that an obstacle exists. In addition, when the ground wire is covered by a white plastic film or a transparent object, the ground wire shape may appear incomplete, presenting as two separated segments, and the ground wire region cannot be segmented as a whole.
Step 2.8: if no obvious contour mutation is detected, performing morphological analysis on the extracted contour of the ground wire region, and if the ground wire form is incomplete, judging that an obstacle exists.
The convolutional neural network model of the invention comprises grouped convolution layers, fusion convolution modules and batch normalization layers, combined alternately; there are 8 grouped convolution layers and 3 fusion convolution modules, and each fusion convolution module contains 4 grouped convolution layers. All convolution layers in the network use grouped convolution in place of conventional convolution; grouped convolution effectively reduces the parameter count, and also reduces redundant convolution kernels and prevents the model from overfitting. The specific structure of the fusion convolution module is shown in the fusion convolution module part of fig. 3: from a feature map input of N channels, the module outputs two feature map branches of N/2 channels each through a 1 × 1 convolution kernel; one branch uses one layer of 3 × 3 grouped convolution, and the other uses two layers of 3 × 3 grouped convolution; the outputs of the two branches are merged to generate a feature map of N channels. The fusion convolution module improves the feature extraction capability of the model by fusing feature layers of different depths, while the branch structure also reduces the parameter count of the model.
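The parameter saving from grouped convolution can be checked with simple arithmetic; the sketch below compares weight counts for a standard and a grouped 3 × 3 convolution (the channel and group numbers are illustrative, not taken from the patent):

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a k x k convolution layer (biases omitted).
    Grouped convolution splits the channels into `groups` independent
    convolutions, dividing the weight count by `groups`."""
    assert c_in % groups == 0 and c_out % groups == 0
    return groups * (c_in // groups) * (c_out // groups) * k * k

# A standard vs. a 4-group 3x3 convolution mapping 64 -> 64 channels:
standard = conv_params(64, 64, 3)            # 64 * 64 * 9 weights
grouped = conv_params(64, 64, 3, groups=4)   # one quarter as many
```

The same arithmetic explains the fusion module's branch design: two N/2-channel branches carry fewer weights than one full N-channel path of the same depth.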
Preferably, after judging that an obstacle lies in the moving direction, the acquired image is input into the convolutional neural network model: the image is scaled to 384 × 256 pixels and divided into 6 × 4 grid cells, with each cell responsible for detecting two obstacles. The obstacle types in the data set are 3 conventional line fittings and 3 abnormal obstacles of known type: the line fittings are the vibration damper, the wire clamp and the bridge fitting, and the abnormal obstacles include balloons, plastic bags and kites. The output of the obstacle detection model network is a matrix of size 6 × 4 × 22, each grid cell detecting 2 targets; each target output is represented by 11 parameters, comprising the target-box confidence, the center-point coordinates, the length and width, and the probabilities of belonging to each of the 6 categories.
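Decoding the 6 × 4 × 22 output described above can be sketched as follows; the ordering of the 11 per-target values (confidence first, then box geometry, then the 6 class probabilities) is an assumption for illustration:

```python
import numpy as np

def decode_detections(output, conf_thresh=0.5):
    """Parse a 6 x 4 x 22 detection head: each grid cell holds
    2 candidates x 11 numbers (confidence, cx, cy, w, h, 6 class
    probabilities). Returns (row, col, box, class_id) tuples for
    candidates above the confidence threshold."""
    rows, cols, depth = output.shape
    assert depth == 2 * 11, "expects 2 candidates of 11 values per cell"
    detections = []
    for r in range(rows):
        for c in range(cols):
            for b in range(2):
                vec = output[r, c, b * 11:(b + 1) * 11]
                conf, cx, cy, w, h = vec[:5]
                if conf >= conf_thresh:
                    cls = int(np.argmax(vec[5:]))  # best of the 6 classes
                    detections.append((r, c, (cx, cy, w, h), cls))
    return detections
```

A non-maximum-suppression pass would normally follow to merge overlapping candidates; it is omitted here for brevity.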
The loss function of the obstacle detection model is:

$$
\begin{aligned}
Loss = {} & \delta_{obj}\sum_{i}\sum_{j}\mathbb{1}_{ij}^{obj}\left(\sigma_x^2 + \sigma_y^2\right)
 + \delta_{obj}\sum_{i}\sum_{j}\mathbb{1}_{ij}^{obj}\left[\left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\
 & + \sum_{i}\sum_{j}\mathbb{1}_{ij}^{obj}\left(CF_i-\hat{CF}_i\right)^2
 + \delta_{noobj}\sum_{i}\sum_{j}\mathbb{1}_{ij}^{noobj}\left(CF_i-\hat{CF}_i\right)^2 \\
 & + \sum_{i}\mathbb{1}_{i}^{obj}\sum_{c \in classes}\left(p_i(c)-\hat{p}_i(c)\right)^2
\end{aligned}
$$

where $\mathbb{1}_{ij}^{obj}$ indicates whether a target is present in the jth candidate box of the ith grid cell, $\sigma_x$ and $\sigma_y$ denote the center-point coordinate errors of the predicted target box, $w_i$ and $\hat{w}_i$ denote the predicted and actual target-box widths, $h_i$ and $\hat{h}_i$ denote the predicted and actual target-box heights, $CF_i$ and $\hat{CF}_i$ denote the predicted and actual target-box confidences, $p_i(c)$ and $\hat{p}_i(c)$ denote the predicted and actual probabilities that the target belongs to class $c$, $classes$ denotes the set of all target classes, and $\delta_{obj}$ and $\delta_{noobj}$ denote the penalty factors applied when a target is and is not present, respectively.
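A compact NumPy rendering of such a sum-squared-error detection loss (one candidate per grid cell, with illustrative penalty weights standing in for the δobj and δnoobj factors) might look like:

```python
import numpy as np

def detection_loss(pred, truth, obj_mask, d_obj=5.0, d_noobj=0.5):
    """Sum-squared-error detection loss in the spirit described above:
    coordinate and size errors weighted by d_obj where a target exists,
    confidence errors weighted by d_noobj elsewhere, plus class-probability
    errors. pred/truth: (cells, 11) arrays [conf, cx, cy, w, h, p0..p5];
    obj_mask: (cells,) booleans. The weight values are illustrative
    assumptions, not taken from the patent."""
    obj = obj_mask.astype(float)
    coord = ((pred[:, 1:3] - truth[:, 1:3]) ** 2).sum(axis=1)
    # square-root of width/height dampens the penalty on large boxes
    size = ((np.sqrt(pred[:, 3:5]) - np.sqrt(truth[:, 3:5])) ** 2).sum(axis=1)
    conf = (pred[:, 0] - truth[:, 0]) ** 2
    cls = ((pred[:, 5:] - truth[:, 5:]) ** 2).sum(axis=1)
    return float((d_obj * obj * (coord + size)
                  + obj * conf + d_noobj * (1 - obj) * conf
                  + obj * cls).sum())
```

A perfect prediction yields zero loss, and confidence errors in empty cells are down-weighted so the many background cells do not dominate training.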
In a further embodiment, the training method of the obstacle detection model is as follows:
Step 3.1: replacing the last two grouped convolution layers of the convolutional neural network model with a global mean pooling layer, and connecting a fully connected layer and a softmax layer after it to form a classification model; training the classification model on a public data set, and after training, freezing the feature extraction layer parameters to serve as the initial parameters of the backbone network.
Step 3.2: and fine-tuning the network by using the marked line obstacle data set.
The step 4 is further as follows:
taking the box containing the obstacle in the image as the target box, and clustering the pixels inside the target box by pixel depth to achieve pixel-level segmentation of the obstacle region; removing the pixels of the region where the robot walking ground wire is located; taking the minimum of the remaining pixel depths to obtain the distance from the obstacle to the video acquisition sensor; and converting the distance from the obstacle to the video acquisition sensor into the distance from the obstacle to the robot's front walking wheels.
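A minimal sketch of this distance measurement, assuming a depth image registered to the camera and a fixed sensor-to-front-wheel offset (the 0.35 m value and the simple subtraction model are purely illustrative, not from the patent):

```python
import numpy as np

def obstacle_distance(depth_roi, wire_mask, sensor_to_wheel=0.35):
    """Distance from an obstacle to the robot's front wheel: inside the
    target box's depth patch, drop the pixels belonging to the walking
    ground wire, take the minimum remaining depth as the sensor-to-
    obstacle distance, then subtract an assumed sensor-to-front-wheel
    offset along the travel direction."""
    candidates = depth_roi[~wire_mask]        # obstacle pixels only
    sensor_dist = float(candidates.min())     # nearest obstacle point
    return max(sensor_dist - sensor_to_wheel, 0.0)
```

In practice the conversion from sensor distance to wheel distance would use the mounting geometry of the sensor rather than a single scalar offset.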
The invention adopts an obstacle-absence pre-judging module that efficiently rules out obstacle-free scenes, avoiding lengthy deep neural network inference, markedly reducing computing resource consumption and improving obstacle recognition efficiency. The obstacle-presence judgment algorithm based on ground wire contour features detects abrupt changes of the ground wire edge with high accuracy and shows good robustness in detecting abnormal obstacles of unknown type. The obstacle detection model is composed of a small number of grouped convolution layers and fusion convolution modules, which effectively reduces the parameter count of the network and improves real-time performance while maintaining detection precision.
The preferred embodiments of the present invention have been described in detail with reference to the accompanying drawings, however, the present invention is not limited to the specific details of the embodiments, and various equivalent changes can be made to the technical solution of the present invention within the technical idea of the present invention, and these equivalent changes are within the protection scope of the present invention.

Claims (10)

1. The method for detecting the obstacle of the overhead transmission line inspection robot is characterized by comprising the following steps of:
collecting video information on a walking ground wire of the robot;
according to the video information, pre-judging whether an obstacle exists on the robot walking ground wire or not based on the outline characteristics of the robot walking ground wire;
if an obstacle is pre-judged to exist on the robot walking ground wire, inputting the image pre-judged to have the obstacle into a trained convolutional neural network model so as to obtain the obstacle type, the position of an obstacle target frame and the confidence coefficient of the obstacle target frame;
and calculating the distance between the obstacle and the robot according to the position of the obstacle, and outputting the obstacle type and the distance between the obstacle and the robot as detection results.
2. The overhead transmission line inspection robot obstacle detection method according to claim 1, wherein the method for prejudging whether an obstacle exists on a robot walking ground line comprises the following steps:
positioning the area where the ground wire of the robot walking in the image is located;
performing region segmentation on the region where the robot walking ground wire is located, and extracting the outline of the robot walking ground wire;
if the contour of the robot walking ground line is suddenly changed, judging that the robot walking ground line has an obstacle; and if the contour form of the robot walking ground wire is not complete, judging that the robot walking ground wire has the obstacle.
3. The overhead transmission line inspection robot obstacle detection method according to claim 2, wherein the method of locating the area where the robot walking ground wire is located in the image comprises:
extracting a single-frame original image containing a robot walking ground wire from the video information;
performing Gaussian filtering and gray level conversion on the extracted single-frame original image to obtain a corresponding gray level image;
carrying out edge detection on the gray level image to obtain an edge image;
extracting straight lines in the edge image through a probabilistic Hough transform straight line detection algorithm;
and clustering all the detected straight lines based on the slope to obtain an aggregated parallel line group for positioning the area of the walking ground line of the robot.
4. The overhead transmission line inspection robot obstacle detection method according to claim 3, wherein the method for performing area segmentation on the area where the robot walking ground wire is located comprises the following steps: carrying out foreground and background segmentation on the gray level map by using an OTSU maximum inter-class variance method;
filtering background pixel points based on a pixel depth information threshold value to generate a binary image only keeping the foreground;
and performing morphology opening operation on the binary image to separate the adhered areas.
5. The overhead transmission line inspection robot obstacle detection method according to claim 2, wherein the method for judging the sudden change of the outline of the walking ground wire of the robot comprises the following steps:
generating a robot walking ground wire outline convex hull according to the robot walking ground wire outline;
and detecting convex defects of the convex hull of the robot walking ground wire outline, and if the convex defects exceed a set threshold value, judging that the robot walking ground wire outline has sudden change.
6. The overhead power transmission line inspection robot obstacle detection method according to claim 1, wherein the convolutional neural network model includes: the device comprises a grouping convolution layer, a fusion convolution module and a batch normalization layer, wherein the grouping convolution layer, the fusion convolution module and the batch normalization layer are alternately combined.
7. The overhead transmission line inspection robot obstacle detection method according to claim 6, wherein the convolutional neural network model includes 8 individual packet convolutional layers and 3 convolutional fused modules, each convolutional fused module containing 4 packet convolutional layers.
8. The overhead power transmission line inspection robot obstacle detection method according to claim 7, wherein the training method of the convolutional neural network model includes:
replacing the last two grouped convolution layers of the convolutional neural network model with a global mean pooling layer, a full connection layer and a softmax layer to form a classification model, training the classification model by using a public data set, and freezing the parameters of a feature extraction layer after training is finished to serve as the initial parameters of a main network of the convolutional neural network model;
and fine-tuning the convolutional neural network model by using the labeled line obstacle data set.
9. The overhead transmission line inspection robot obstacle detection method according to claim 6, wherein the fusion convolution module outputs two characteristic diagram branches with N/2 channels based on characteristic diagram input of N channels through a 1 x 1 convolution kernel; one branch uses a layer of 3 × 3 packet convolution, and the other branch uses two layers of 3 × 3 packet convolution; and the output results of the two branches are combined to generate feature maps of N channels.
10. The overhead power transmission line inspection robot obstacle detection method according to claim 1, wherein the method of measuring and calculating the distance between the obstacle and the robot includes: taking a frame containing the obstacle in the image as a target frame, and clustering pixels in the target frame based on pixel depth to realize pixel level segmentation of the obstacle region;
removing pixel points in the area where the robot walking ground wire is located, solving the minimum value of the residual pixels, and obtaining the distance from the obstacle to the video acquisition sensor; and converting the distance from the obstacle to the video acquisition sensor into the distance from the obstacle to the front walking wheels of the robot.
CN202110601511.9A 2021-05-31 2021-05-31 Obstacle detection method for overhead transmission line inspection robot Active CN113409252B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110601511.9A CN113409252B (en) 2021-05-31 2021-05-31 Obstacle detection method for overhead transmission line inspection robot

Publications (2)

Publication Number Publication Date
CN113409252A true CN113409252A (en) 2021-09-17
CN113409252B CN113409252B (en) 2022-08-26

Family

ID=77675439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110601511.9A Active CN113409252B (en) 2021-05-31 2021-05-31 Obstacle detection method for overhead transmission line inspection robot

Country Status (1)

Country Link
CN (1) CN113409252B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110298330A (en) * 2019-07-05 2019-10-01 东北大学 A kind of detection of transmission line polling robot monocular and localization method
CN111191559A (en) * 2019-12-25 2020-05-22 国网浙江省电力有限公司泰顺县供电公司 Overhead line early warning system obstacle identification method based on time convolution neural network

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114046796A (en) * 2021-11-04 2022-02-15 南京理工大学 Intelligent wheelchair autonomous walking algorithm, device and medium
CN116185079A (en) * 2023-04-28 2023-05-30 西安迈远科技有限公司 Unmanned aerial vehicle construction inspection route planning method based on self-adaptive cruising
CN116185079B (en) * 2023-04-28 2023-08-04 西安迈远科技有限公司 Unmanned aerial vehicle construction inspection route planning method based on self-adaptive cruising
CN117095411A (en) * 2023-10-16 2023-11-21 青岛文达通科技股份有限公司 Detection method and system based on image fault recognition
CN117095411B (en) * 2023-10-16 2024-01-23 青岛文达通科技股份有限公司 Detection method and system based on image fault recognition
CN117951599A (en) * 2024-01-16 2024-04-30 北京市科学技术研究院 Underground piping diagram generation method and device based on radar image
CN117951599B (en) * 2024-01-16 2024-07-23 北京市科学技术研究院 Underground piping diagram generation method and device based on radar image


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant