WO2021114508A1 - Visual navigation inspection and obstacle avoidance method for a line inspection robot - Google Patents

Visual navigation inspection and obstacle avoidance method for a line inspection robot

Info

Publication number
WO2021114508A1
WO2021114508A1 (PCT/CN2020/081422, CN2020081422W)
Authority
WO
WIPO (PCT)
Prior art keywords
inspection
image
visual navigation
robot
line
Prior art date
Application number
PCT/CN2020/081422
Other languages
English (en)
French (fr)
Inventor
李方
贾绍春
樊广棉
薛家驹
吴积贤
杨帆
Original Assignee
广东科凯达智能机器人有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东科凯达智能机器人有限公司
Priority to US17/432,131 (granted as US11958197B2)
Publication of WO2021114508A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666 Avoiding collision or forbidden zones
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/021 Optical sensing devices
    • B25J19/023 Optical sensing devices including video camera means
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1674 Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1676 Avoiding collision or forbidden zones
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/48 Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/758 Involving statistics of pixels or of feature values, e.g. histogram matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20064 Wavelet transform [DWT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 Obstacle

Definitions

  • The invention relates to the technical field of line inspection robots, and in particular to a visual navigation inspection and obstacle avoidance method for a line inspection robot.
  • The traditional transmission line inspection method is mainly manual inspection.
  • Its inspection efficiency is low and the labor intensity high; the workers often operate in the field under harsh conditions, and inspection is even more difficult for transmission line sections that cross mountains, dense forests, and large rivers.
  • Helicopter inspection is more efficient, but its economics are poor and it easily overlooks subtle damage to the transmission line.
  • The line inspection robot is a special-purpose robot for inspecting high-voltage transmission lines. It can replace manual inspection, offers high inspection efficiency and good imaging quality, and represents the inevitable convergence of robotics with the development of transmission line inspection technology.
  • The purpose of the present invention is to propose a visual navigation inspection and obstacle avoidance method for a line inspection robot that makes both the inspection and the obstacle crossing run smoothly.
  • To this end, the present invention adopts the following technical solution:
  • a visual navigation inspection and obstacle avoidance method for a line inspection robot, wherein the robot is equipped with a motion control system, a visual navigation system, and an inspection vision system, and the motion control system and the visual navigation system both establish communication connections with the inspection vision system;
  • the method includes the following steps:
  • (1) The inspection camera of the inspection vision system collects inspection images in real time; the tower type is judged and identified from the inspection images, and the conductors and their accessory structures between towers, as well as the insulator strings and tower fittings, are inspected;
  • (2) the visual navigation system captures the visual navigation image ahead of the line inspection robot in real time and preprocesses it; the features of the target object in the preprocessed image are extracted and recognized to obtain the target object type;
  • (3) the visual navigation system measures the distance between the target object and the inspection robot body by monocular ranging to achieve coarse positioning;
  • (4) the motion control system adjusts the traveling speed of the line inspection robot according to the coarse positioning distance and performs the precise positioning of collision detection at a safe speed;
  • (5) the visual navigation system sends the obstacle crossing strategy corresponding to the tower type and the target object type to the motion control system, so that the inspection robot completes the obstacle crossing.
  • In step (2), the image is preprocessed by sequentially applying grayscale conversion, restoration, denoising, enhancement, segmentation, and normalization to the captured image.
  • In step (2), the features of the target object in the preprocessed image are extracted and recognized by decomposing the character image formed after preprocessing with a statistical wavelet transform, extracting a feature vector that reflects the statistical and structural characteristics of the characters, and retrieving the target object in the template library that matches this feature vector, which yields the target object type.
  • When the target object in the preprocessed image matches none of the objects in the template library, a geometric method is used for detection and obstacle determination.
  • In step (1), when the inspection vision system detects through object recognition that the inspection robot has entered a tower inspection area, it starts specific inspection path planning for the insulator strings and tower fittings and completes the object inspection;
  • while the robot is walking, the inspection camera must be tracked and adjusted in real time so that the inspection object always stays at the center of the image.
  • The method for real-time tracking and adjustment of the inspection camera is: the inspection image is processed sequentially by image graying, image restoration, image denoising, image enhancement, inspection object detection, object contour extraction, contour geometric center detection, and center offset calculation, and the camera angle is then adjusted according to the center offset distance.
  • The tower types include straight-line towers and strain towers.
  • The conductors and insulators of a straight-line tower form an angle of approximately 90 degrees, while those of a strain tower form an angle of approximately 0 degrees.
  • In step (1), the inspection vision system recognizes the insulator string in the inspection image and sends the recognized target type to the visual navigation system, whereupon step (3) is entered.
  • The line inspection robot of the present invention carries both a visual navigation system and an inspection vision system, and information can be exchanged between the two systems: the inspection vision system performs the routine inspection, while the visual navigation system acquires in real time
  • the types of the target objects on the inspection route; coarse positioning is then performed and combined with the motion control system for precise positioning, after which the obstacle crossing strategy is selected so that the inspection robot completes the obstacle crossing.
  • FIG. 1 is a schematic flowchart of the real-time tracking and adjustment of the inspection camera of the present invention;
  • FIG. 2 is a schematic diagram of the inspection scheme of the inspection vision system of the present invention;
  • FIG. 3 is a flowchart of the insulator recognition algorithm of the invention;
  • FIG. 4 is a schematic flowchart of the cooperation between the visual navigation system and the motion control system for obstacle crossing;
  • FIG. 5 shows the line inspection robot's monocular ranging model and a simplified diagram of the geometric relationship between the camera and the wire;
  • FIG. 6 shows the visual ranging curves and image frames;
  • FIG. 7 shows the template images used to build the template library and the variation of each component of the feature vector;
  • FIG. 8 shows the matching results between the cylindrical vibration damper and the mean feature vector Fm over 64 image frames;
  • FIG. 9 shows the images at each stage of vibration damper detection;
  • FIG. 10 is a flowchart of obstacle detection by the geometric method;
  • FIG. 11 shows the results of the opening operation and image differencing;
  • FIG. 12 shows the warped strand detection results.
  • The invention provides a visual navigation inspection and obstacle avoidance method for a line inspection robot.
  • The line inspection robot is provided with a motion control system, a visual navigation system, and an inspection vision system.
  • The motion control system and the visual navigation system both establish communication connections with the inspection vision system;
  • the method includes the following steps:
  • (1) The inspection camera of the inspection vision system collects inspection images in real time; the tower type is judged and identified from the inspection images, and the conductors and their accessory structures between towers, as well as the insulator strings and tower fittings, are inspected.
  • The tower types include straight-line towers and strain towers.
  • The conductors and insulators of a straight-line tower form an angle of approximately 90 degrees, and the conductors and insulators of a strain tower form an angle of approximately 0 degrees.
  • The tower type is identified by detecting the relative positions of the conductors and insulators.
  • When the inspection vision system detects through object recognition that the inspection robot has entered a tower inspection area, it starts specific inspection path planning for the insulator strings and tower fittings and completes the object inspection; while walking, the robot must track and adjust
  • the inspection camera in real time so that the inspection object always stays at the center of the image.
  • The method for real-time tracking and adjustment of the inspection camera is as follows: the inspection image is processed in sequence by image graying, image restoration, image denoising, image enhancement, inspection object detection, object contour extraction, contour geometric center detection, and center offset calculation, and the camera angle is then adjusted according to the center offset distance.
  • The inspection scheme of the inspection vision system is shown in Figure 2.
  • While the robot walks along the ground wire it inspects the conductors and their accessory structures; inspection path planning is started by identifying the tower type, and the insulator strings, fittings, and other objects are then inspected according to the object tracking strategy of the camera of the inspection vision system.
  • In step (1), the inspection vision system recognizes the insulator string in the inspection image and sends the recognized target type to the visual navigation system, whereupon step (3) is entered.
  • The camera of the visual navigation system is fixed on the arm of the robot, so the visual navigation image presents the scene within a certain field of view ahead of the robot, while the insulators generally lie outside that field of view. Insulator recognition must therefore be performed by analyzing the images of the robot's inspection vision system.
  • The flow of the insulator recognition algorithm is shown in Figure 3.
  • The image of the inspection vision system is a video frame. The frame is first converted to grayscale, and then downsampling, edge extraction, line detection, interest region determination, elimination of lighting influence, interest region binarization, morphological processing, connected domain labeling, and feature extraction are performed in turn, and the extracted features are matched against the features in the template library.
  • (2) The visual navigation system captures the visual navigation image ahead of the line inspection robot in real time and preprocesses it; the features of the target object in the preprocessed image are extracted and recognized to obtain the target object type.
  • The image is preprocessed by sequentially applying grayscale conversion, restoration, denoising, enhancement, segmentation, and normalization to the captured image.
  • The features of the target object in the preprocessed image are extracted and recognized by decomposing the character image formed after preprocessing with a statistical wavelet transform, extracting a feature vector that reflects the statistical and structural characteristics of the characters, and retrieving the target object in the template library that matches this feature vector, which yields the target object type.
  • When the target object in the preprocessed image matches none of the objects in the template library, a geometric method is used for detection and obstacle determination.
  • (3) The visual navigation system measures the distance between the target object and the inspection robot body by monocular ranging to achieve coarse positioning.
  • A two-dimensional image is the projection of the three-dimensional world onto a two-dimensional image plane.
  • Depth information is lost in this projection and cannot be recovered from a single image alone.
  • To obtain depth information there must be a known quantity, from which the depth can be inferred.
  • The algorithm measures the distance from the point of the imaged wire nearest the lens to the lens and, combining the pinhole imaging principle with the direct geometric relationships given by the robot's dimensions, obtains the distance from the obstacle to the lens along the wire.
  • The left picture in Fig. 5 is the robot's monocular ranging model,
  • the right picture is a simplified diagram of the geometric relationship between the camera and the wire,
  • d1 is the known distance,
  • and d is the distance to be measured.
  • The pinhole imaging model gives u = f·Xc/Zc and v = f·Yc/Zc, where (u, v) are the pixel coordinates of an image point and [Xc, Yc, Zc] the three-dimensional coordinates of the corresponding point in the camera coordinate system.
  • v1 − v2 is the difference of the ordinates of the edge lines on the two sides of the wire at B. Since Zc >> Xc and Zc >> Yc, the ratio k of the ordinate differences at B and at the obstacle C yields the distance to the obstacle as d2 = k·d1 + (k − 1)f.
  • In step (4), the motion control system adjusts the traveling speed of the line inspection robot according to the coarse positioning distance and performs the precise positioning of collision detection at a safe speed;
  • (5) the visual navigation system sends the obstacle crossing strategy corresponding to the tower type and the target object type to the motion control system, so that the inspection robot completes the obstacle crossing.
  • The visual navigation system and the motion control system cooperate to cross obstacles as follows: object recognition is performed first to determine the target object type; the distance between the target object and the inspection robot body is then measured (ranging the relative position of object and robot) to coarsely position the line inspection robot; next, motion control and speed regulation are applied according to the coarse position, and the robot performs the precise positioning of collision detection at a safe speed to decide whether to trigger
  • the obstacle crossing strategy designed from the line prior model (the strategy is formulated from the constructed route prior model); finally the obstacle crossing is completed.
  • Graying of the video frame: the color image is converted to a grayscale image.
  • The video frame captured by the camera is a color image containing the three RGB components.
  • The first processing step converts the color image to a gray image, i.e. the value of every image point becomes a gray level between 0 and 255.
  • Downsampling: a straight line <ρ, α> in the original image becomes <ρ/k, α> after downsampling at an interval of k points; the angle is unchanged and the distance ρ becomes 1/k of the original, so the number of points to process is 1/k² of the original image and the processing time likewise becomes 1/k² of the original.
  • The sampling interval k used here is 2. Downsampling yields a reduced image whose area is only 1/k² of the original; the Hough transform applied to it gives the line <ρ/k, α>, and multiplying ρ/k by the sampling interval k recovers the line parameters <ρ, α> of the original image.
  • Edge extraction: the Canny algorithm is used to obtain the edges of the image, which eases the subsequent line extraction.
  • The accuracy of edge extraction directly determines how precisely the edge lines on the two sides of the wire are extracted, which has a great impact on the accuracy of the subsequent monocular ranging.
  • The algorithm currently giving the best results is the Canny algorithm.
  • Interest region determination: obstacles that affect the robot's travel must lie near the wire, so a region of a certain height on one side of the wire (40 pixels in this application) is taken as the interest region; any obstacle must then lie within the interest region, which narrows the processing range and speeds up processing.
  • Elimination of lighting influence: lighting effects are removed to keep the extracted target complete; the algorithm uses gamma correction.
  • Binarization of the interest region: the target is converted to a binary image in preparation for feature extraction.
  • The process is expressed by the formula g(x, y) = 1 if f(x, y) ≥ T and g(x, y) = 0 otherwise,
  • where T is determined by the classic Otsu algorithm.
  • Morphological processing: circular structuring elements are used to process the interest region, mainly to fill small holes, smooth edges, and remove burrs.
  • Connected domain labeling: the individual targets are segmented to ease the subsequent extraction of target features. Connected domain labeling marks every connected target in the image with the same gray level, so that the targets are distinguished by different gray levels.
  • Feature extraction: the features of each target are extracted.
  • The first four components of the Hu moment features are used here;
  • the last three components are not stable.
  • Template library: the template matching method is adopted, so a template library must be built in advance, i.e. the features of known obstacles are extracted and stored as templates (as shown in Figure 7).
  • The templates must account for factors such as scale (image size) and the various pose angles of the image under the same viewing angle. Finally the feature values of the extracted images are averaged, and the mean is used as the obstacle's feature vector (as shown in Figure 8).
  • Feature matching: the similarity between each target and the features of the known templates is compared.
  • Both the target feature and the templates use the six-component feature vector given above; the similarity between the target and template i is then a normalized correlation,
  • where n is the number of known obstacle template classes
  • and di is the similarity between the target and template i.
  • Figure 8 shows the similarity obtained when the mean of the feature values extracted from 64 images is used as the template for recognizing the cylindrical vibration damper: the similarity is high when the target position is moderate and drops when the target is too far or too near. Too far, the target is too small for the image to show enough features; too near, lighting effects leave the target incomplete.
  • The images at each stage of the detection are shown in Figure 9.
  • Figure 11(b) and Figure 11(c) are differenced to obtain Figure 11(d); after edge extraction, Figure 11(e) contains the complete fine targets, among them the warped and loose strands, together with incomplete large targets. By comparing
  • each target's initial area with its processed area the large targets can be excluded, after which the warped and loose strands are determined among the remaining fine targets by their geometric features (shown in Figure 11(e)).
  • Opening angle detection: for the edge image of each connected domain, the opening angle subtended at the point farthest from the wire by the two points closest to the wire is calculated, and the opening angle decides whether the strands are warped or loose. If the opening angle is smaller than the threshold the target is a warped strand, and if larger, a loose strand. The opening angle is calculated by the law-of-cosines formula.
  • The two points B and C are the two points of the target contour that are closest to the wire and farthest from each other.
  • A is the point on the target contour with the largest product of the distances to the two points B and C.
  • In Figure 12, the upper two pictures of the warped strand detection are one set of test results and the lower two pictures are another set;
  • the opening angle in the test picture is 4.2°.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Automation & Control Theory (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

A visual navigation inspection and obstacle avoidance method for a line inspection robot. The line inspection robot is provided with a motion control system, a visual navigation system, and an inspection vision system. The method comprises the following steps: (1) the inspection vision system judges and identifies the tower type from the inspection images and performs the inspection; (2) the visual navigation system captures visual navigation images in real time and obtains the type of the target object; (3) coarse positioning; (4) precise positioning; (5) the visual navigation system sends the obstacle crossing strategy corresponding to the tower type and the target object type to the motion control system, so that the inspection robot completes the obstacle crossing. The inspection and obstacle crossing method is real-time and efficient.

Description

Visual navigation inspection and obstacle avoidance method for a line inspection robot

Technical Field

The present invention relates to the technical field of line inspection robots, and in particular to a visual navigation inspection and obstacle avoidance method for a line inspection robot.

Background Art

Traditional transmission line inspection relies mainly on manual patrols: inspection efficiency is low, labor intensity is high, workers often operate in the field under harsh conditions, and inspection is even more difficult for line sections that cross mountains, dense forests, and large rivers. Helicopter inspection is more efficient, but its economics are poor and it easily overlooks subtle damage to the line. The line inspection robot is a special-purpose robot for inspecting high-voltage transmission lines that can replace manual patrols; it offers high inspection efficiency and good imaging quality, and represents the inevitable convergence of robotics with the development of transmission line inspection technology.

During line patrol the robot passes obstacles such as insulators, vibration dampers, and sections of wire with loose or warped strands; when an obstacle appears, the traveling speed and posture of the robot must be adjusted. Existing inspection robots carry only a line inspection vision system, which can hardly execute a good obstacle crossing strategy while completing the inspection task.

Summary of the Invention

The purpose of the present invention is to propose a visual navigation inspection and obstacle avoidance method for a line inspection robot that makes both the inspection and the obstacle crossing run smoothly.

To this end, the present invention adopts the following technical solution:

A visual navigation inspection and obstacle avoidance method for a line inspection robot, wherein the robot is equipped with a motion control system, a visual navigation system, and an inspection vision system, and the motion control system and the visual navigation system both establish communication connections with the inspection vision system;

the method comprises the following steps:

(1) The inspection camera of the inspection vision system collects inspection images in real time; the tower type is judged and identified from the inspection images, and the conductors and their accessory structures between towers, as well as the insulator strings and tower fittings, are inspected;

(2) the visual navigation system captures the visual navigation image ahead of the line inspection robot in real time and preprocesses it; the features of the target object in the preprocessed image are extracted and recognized to obtain the type of the target object;

(3) once the type of the target object is determined, the visual navigation system measures the distance between the target object and the inspection robot body by monocular ranging to achieve coarse positioning;

(4) the motion control system adjusts the traveling speed of the line inspection robot according to the coarse positioning distance and performs the precise positioning of collision detection at a safe speed;

(5) the visual navigation system sends the obstacle crossing strategy corresponding to the tower type and the type of the target object to the motion control system, so that the inspection robot completes the obstacle crossing.
Further, in step (2), the image is preprocessed by sequentially applying grayscale conversion, restoration, denoising, enhancement, segmentation, and normalization to the captured image.

Further, in step (2), the features of the target object in the preprocessed image are extracted and recognized by decomposing the character image formed after preprocessing with a statistical wavelet transform, extracting a feature vector that reflects the statistical and structural characteristics of the characters, and retrieving the target object in the template library that matches this feature vector, thereby obtaining the type of the target object.

Further, when the target object in the preprocessed image matches none of the objects in the template library, a geometric method is used for detection and obstacle determination.

Further, in step (1), when the inspection vision system detects through object recognition that the inspection robot has entered a tower inspection area, it starts specific inspection path planning for the insulator strings and tower fittings and completes the object inspection;

while the robot is walking, the inspection camera must be tracked and adjusted in real time so that the inspection object always stays at the center of the image.

Further, the inspection camera is tracked and adjusted in real time as follows: the inspection image is processed sequentially by image graying, image restoration, image denoising, image enhancement, inspection object detection, object contour extraction, contour geometric center detection, and center offset calculation, and the camera angle is then adjusted according to the center offset distance.

Further, in step (1), the tower types include straight-line towers and strain towers; the conductors and insulators of a straight-line tower form an angle of approximately 90 degrees, those of a strain tower an angle of approximately 0 degrees, and the tower type is identified by detecting the relative positions of the conductors and insulators.

Further, in step (1), the inspection vision system recognizes the insulator string in the inspection image and sends the recognized target type to the visual navigation system, whereupon step (3) is entered.

The beneficial effects of the present invention are as follows: the line inspection robot carries both a visual navigation system and an inspection vision system that can exchange information; the inspection vision system performs the routine inspection, while the visual navigation system acquires in real time the types of target objects on the inspection route, performs coarse positioning, cooperates with the motion control system for precise positioning, and then selects the obstacle crossing strategy that lets the inspection robot complete the crossing. Inspection and obstacle crossing are handled by two separate vision systems, which makes the method real-time and efficient.

Brief Description of the Drawings

Fig. 1 is a schematic flowchart of the real-time tracking and adjustment of the inspection camera of the present invention;

Fig. 2 is a schematic diagram of the inspection scheme of the inspection vision system of the present invention;

Fig. 3 is a flowchart of the insulator recognition algorithm of the invention;

Fig. 4 is a schematic flowchart of the cooperation between the visual navigation system and the motion control system for obstacle crossing;

Fig. 5 shows the line inspection robot's monocular ranging model and a simplified diagram of the geometric relationship between the camera and the wire;

Fig. 6 shows the visual ranging curves and image frames;

Fig. 7 shows the template images used to build the template library and the variation of each component of the feature vector;

Fig. 8 shows the matching results between the cylindrical vibration damper and the mean feature vector Fm over 64 image frames;

Fig. 9 shows the images at each stage of vibration damper detection;

Fig. 10 is a flowchart of obstacle detection by the geometric method;

Fig. 11 shows the results of the opening operation and image differencing;

Fig. 12 shows the warped strand detection results.
Detailed Description

The technical solution of the present invention is further explained below with reference to the drawings and specific embodiments.

The present invention provides a visual navigation inspection and obstacle avoidance method for a line inspection robot. The robot is equipped with a motion control system, a visual navigation system, and an inspection vision system, and the motion control system and the visual navigation system both establish communication connections with the inspection vision system;

the method comprises the following steps:

(1) The inspection camera of the inspection vision system collects inspection images in real time; the tower type is judged and identified from the inspection images, and the conductors and their accessory structures between towers, as well as the insulator strings and tower fittings, are inspected. The tower types include straight-line towers and strain towers: the conductors and insulators of a straight-line tower form an angle of approximately 90 degrees, those of a strain tower an angle of approximately 0 degrees, and the tower type is identified by detecting the relative positions of the conductors and insulators.

When the inspection vision system detects through object recognition that the inspection robot has entered a tower inspection area, it starts specific inspection path planning for the insulator strings and tower fittings and completes the object inspection; while the robot is walking, the inspection camera must be tracked and adjusted in real time so that the inspection object always stays at the center of the image.

As shown in Fig. 1, the inspection camera is tracked and adjusted in real time as follows: the inspection image is processed sequentially by image graying, image restoration, image denoising, image enhancement, inspection object detection, object contour extraction, contour geometric center detection, and center offset calculation, and the camera angle is then adjusted according to the center offset distance.
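A minimal sketch of the center-offset stage of this loop, assuming OpenCV: Otsu segmentation stands in for the patent's richer detection chain (graying, restoration, denoising, enhancement), and the pan/tilt gains and the largest-contour rule are illustrative assumptions.

```python
import cv2

PAN_GAIN, TILT_GAIN = 0.05, 0.05   # deg per pixel of offset; illustrative values

def tracking_correction(gray):
    """Return pan/tilt corrections that re-center the inspection object."""
    # stand-in for the detection stage: Otsu segmentation of the object
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0, 0.0
    c = max(contours, key=cv2.contourArea)   # assume the largest blob is the object
    m = cv2.moments(c)
    if m["m00"] == 0:
        return 0.0, 0.0
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # contour geometric center
    h, w = gray.shape
    dx, dy = cx - w / 2.0, cy - h / 2.0      # center offset in pixels
    return -PAN_GAIN * dx, -TILT_GAIN * dy   # angle corrections that steer back
```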
The inspection scheme of the inspection vision system is shown in Fig. 2. While walking along the ground wire, the robot inspects the conductors and their accessory structures; when the inspection vision system detects through object recognition and robot positioning that the robot has entered a tower inspection area, it starts inspection path planning by identifying the tower type, and then inspects the insulator strings, fittings, and other objects according to the camera's object tracking strategy.

Further, in step (1), the inspection vision system recognizes the insulator string in the inspection image and sends the recognized target type to the visual navigation system, whereupon step (3) is entered.

In the present invention the camera of the visual navigation system is fixed on the arm of the robot, so the visual navigation image presents the scene within a certain field of view ahead of the robot, while the insulators generally lie outside that field of view. Insulator recognition must therefore be performed by analyzing the images of the robot's inspection vision system. The flow of the insulator recognition algorithm is shown in Fig. 3: the image of the inspection vision system is a video frame; the frame is first converted to grayscale, and then downsampling, edge extraction, line detection, interest region determination, elimination of lighting influence, interest region binarization, morphological processing, connected domain labeling, and feature extraction are performed in turn, and the extracted features are matched against the features in the template library.
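A hedged end-to-end sketch of the Fig. 3 pipeline up to connected domain labeling, assuming OpenCV; the Canny limits, the Hough vote threshold, the gamma value, and the near-horizontal-wire assumption for the interest region are illustrative choices, not values from the patent.

```python
import cv2
import numpy as np

def recognize_frame(frame_bgr, k=2, band_px=40, gamma=0.6):
    """Gray, downsample, find the wire line, and label interest-region targets."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)           # gray the video frame
    h, w = gray.shape
    small = cv2.resize(gray, (w // k, h // k),
                       interpolation=cv2.INTER_NEAREST)          # downsampling
    edges = cv2.Canny(small, 50, 150)                            # edge extraction
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)            # line detection
    if lines is None:
        return None
    rho, alpha = lines[0][0]
    rho *= k                                 # map <rho/k, alpha> back to <rho, alpha>
    # interest region: a band of fixed height on one side of the wire,
    # assuming the wire is imaged near-horizontally (alpha close to 90 deg)
    y = int(rho / max(np.sin(alpha), 1e-6))
    band = gray[max(y - band_px, 0):max(y, 1), :]
    if band.size == 0:
        return None
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    band = lut[band]                                             # lighting elimination
    _, binary = cv2.threshold(band, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # morphological step
    n_labels, labels = cv2.connectedComponents(binary)           # connected domains
    return n_labels - 1, labels              # feature extraction and matching follow
```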
(2) The visual navigation system captures the visual navigation image ahead of the line inspection robot in real time and preprocesses it; the features of the target object in the preprocessed image are extracted and recognized to obtain the type of the target object.

The image is preprocessed by sequentially applying grayscale conversion, restoration, denoising, enhancement, segmentation, and normalization to the captured image. The features of the target object in the preprocessed image are then extracted and recognized by decomposing the character image formed after preprocessing with a statistical wavelet transform, extracting a feature vector that reflects the statistical and structural characteristics of the characters, and retrieving the target object in the template library that matches this feature vector, thereby obtaining the type of the target object.

When the target object in the preprocessed image matches none of the objects in the template library, a geometric method is used for detection and obstacle determination.
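A sketch of one way to realize the statistical wavelet features, assuming the PyWavelets package; the patent names the technique but not the wavelet family, decomposition level, statistics, or matching rule, so all of those are assumptions here.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_feature_vector(img, wavelet="haar", level=2):
    """Per-subband statistics of a 2-D wavelet decomposition as a feature vector."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    feats = [np.mean(coeffs[0]), np.std(coeffs[0])]       # approximation statistics
    for detail in coeffs[1:]:                             # (cH, cV, cD) per level
        for band in detail:
            feats.extend([np.mean(np.abs(band)), np.std(band)])  # structural energy
    return np.asarray(feats)

def nearest_template(feats, template_lib):
    """Nearest template by Euclidean distance; the matching rule is illustrative."""
    names = list(template_lib)
    dists = [np.linalg.norm(feats - template_lib[n]) for n in names]
    return names[int(np.argmin(dists))]
```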
(3) Once the type of the target object is determined, the visual navigation system measures the distance between the target object and the inspection robot body by monocular ranging to achieve coarse positioning.

The principle of monocular ranging is as follows: a two-dimensional image is the projection of the three-dimensional world onto a two-dimensional image plane; depth information is lost in the projection and cannot be recovered from a single image alone. To obtain depth information there must be a known quantity, from which the depth can be inferred. The algorithm measures the distance from the point of the imaged wire nearest the lens to the lens and, combining the pinhole imaging principle with the direct geometric relationships given by the robot's dimensions, obtains the distance from the obstacle to the lens along the wire.

As shown in Fig. 5, the left picture is the robot's monocular ranging model and the right picture is a simplified diagram of the geometric relationship between the camera and the wire; d1 is the known distance and d is the distance to be measured. The pinhole imaging model gives

u = f·Xc/Zc, v = f·Yc/Zc,

where (u, v) are the pixel coordinates of an image point and [Xc, Yc, Zc] are the three-dimensional coordinates of the corresponding point in the camera coordinate system. From this follow the ordinates v1 and v2 of the edge lines on the two sides of the wire at point B, whose difference v1 − v2 measures the wire's apparent thickness there. Since Zc >> Xc and Zc >> Yc,

ΔvB = v1 − v2 ≈ f·D/(f + d1),

where D is the wire diameter; similarly, at point C where the obstacle is located,

ΔvC ≈ f·D/(f + d2).

Writing k = ΔvB/ΔvC for the ratio of the ordinate differences at B and at C, this yields

d2 = k·d1 + (k − 1)·f,

which gives the distance d2 from the obstacle to the lens; d1 can be measured in advance. With d2 known, the robot's dimensions give the distance from the obstacle to the camera lens along the wire, i.e. the distance d between points A and C in the upper picture of Fig. 5:

d = √(d2² − h²),

where h is the fixed offset of the lens from the wire determined by the robot's dimensions.

To verify the effectiveness of the algorithm an experiment was carried out: the robot advances at speed v; from a given moment the distance already traveled, sR, is obtained from the speed and travel time, and the distance to the obstacle, sV, is obtained by visual ranging; their sum is a constant,

sR + sV = s0.

The robot's drive speed was 500, 700, and 900 r/min, the camera captured 25 frames per second, and the distance to the obstacle was measured every 5 frames and added to the distance already traveled. Accurate detection of the wire's edge lines is the key to the ranging accuracy. The results are shown in Fig. 6: the upward-sloping straight lines are the distance sR traveled by the robot computed from the traveling speed, one line per speed; the downward-sloping curves are the visually measured distance sV from the robot to the obstacle; and the horizontal line is their sum s0, which is constant. By marking the wire, the actual distances and the visually measured values in the following table were obtained.

No.             1     2     3     4     5     6     7     8     9
Actual (mm)    500  1000  1500  2000  2500  3000  3500  4000  4500
Measured (mm)  517   991  1541  2070  2551  2892  3384  3863  4314
Error         3.4%  0.9%  2.7%  3.5%  2.0%  3.6%  3.2%  4.1%  4.2%
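The ranging relations above condense into a short helper; units must be consistent, and the right-triangle step uses the lens-to-wire offset h fixed by the robot's geometry, which is a reading of Fig. 5 rather than a formula stated verbatim in the text.

```python
import math

def obstacle_distance(dv_b, dv_c, d1, f, h):
    """dv_b, dv_c: ordinate differences across the wire at B and at the obstacle C;
    d1: known distance to B; f: focal length; h: lens offset from the wire."""
    k = dv_b / dv_c                            # apparent-thickness ratio, k > 1
    d2 = k * d1 + (k - 1.0) * f                # lens-to-obstacle distance
    d = math.sqrt(max(d2 * d2 - h * h, 0.0))   # distance along the wire (A to C)
    return d2, d
```

Here d1 is pre-measured once for the mounted camera, so each frame only has to supply the two pixel measurements.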
In the method of the present invention, in step (4) the motion control system adjusts the traveling speed of the line inspection robot according to the coarse positioning distance and performs the precise positioning of collision detection at a safe speed;

(5) the visual navigation system sends the obstacle crossing strategy corresponding to the tower type and the type of the target object to the motion control system, so that the inspection robot completes the obstacle crossing.

As shown in Fig. 4, the visual navigation system and the motion control system cooperate to cross obstacles as follows: object recognition is performed first to determine the type of the target object; the distance between the target object and the inspection robot body is then measured (ranging the relative position of object and robot) to coarsely position the line inspection robot; next, motion control and speed regulation are applied according to the coarse position, and the robot performs the precise positioning of collision detection at a safe speed to decide whether to trigger the obstacle crossing strategy designed from the line prior model (the strategy is formulated from the constructed route prior model); finally the obstacle crossing is completed.
Taking vibration damper detection as an example, the flow is similar to insulator recognition and proceeds as follows:

1) Graying of the video frame: the color image is converted to a grayscale image. The video frames captured by the camera are color images containing the three RGB components; the first processing step converts the color image to grayscale, i.e. the value of every image point becomes a gray level between 0 and 255.

2) Downsampling: to speed up line detection the image is downsampled. Its relation to the Hough transform is that a straight line <ρ, α> in the original image becomes <ρ/k, α> after downsampling at an interval of k points: the angle is unchanged and the distance ρ becomes 1/k of the original, so the number of points to process is 1/k² of the original image and the processing time likewise falls to 1/k². The sampling interval used here is k = 2. Downsampling yields a reduced image whose area is only 1/k² of the original; applying the Hough transform to it gives the line <ρ/k, α>, and multiplying ρ/k by the sampling interval k recovers the line parameters <ρ, α> of the original image.
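A minimal sketch of the downsampled Hough step, assuming OpenCV; the Canny limits and the vote threshold are illustrative, and the strongest line returned by the transform stands in for the longest one selected in step 4) below.

```python
import cv2
import numpy as np

def wire_line_fullres(gray, k=2):
    """Detect <rho/k, alpha> on the reduced image, then rescale rho by k."""
    h, w = gray.shape
    small = cv2.resize(gray, (w // k, h // k),
                       interpolation=cv2.INTER_NEAREST)   # every k-th point: 1/k^2 pixels
    edges = cv2.Canny(small, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=60)
    if lines is None:
        return None
    rho_small, alpha = lines[0][0]            # strongest line in the small image
    return rho_small * k, alpha               # distance scaled back, angle unchanged
```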
3) Edge extraction: the Canny algorithm is used to obtain the image edges, which eases the subsequent line extraction. The accuracy of edge extraction directly determines how precisely the edge lines on the two sides of the wire are extracted, which in turn strongly affects the accuracy of the later monocular ranging; the algorithm currently giving the best results is the Canny algorithm.

4) Line extraction: the Hough algorithm yields the edge lines on the two sides of the conductor/ground wire, and the longest detected line is taken as the wire edge. Line extraction is the basis for determining the interest region and for monocular ranging.

5) Interest region determination: obstacles that affect the robot's travel must lie near the wire, so a region of a certain height on one side of the wire (40 pixels in this application) is taken as the interest region; any obstacle must then lie within the interest region, which narrows the processing range and speeds up processing.

6) Elimination of lighting influence: lighting effects are removed to keep the extracted target complete; the algorithm uses gamma correction.
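A sketch of the correction in step 6); the patent names gamma correction but no parameter, so the gamma value below is an assumption (values below 1 lift shadowed regions of the interest area).

```python
import numpy as np

def gamma_correct(roi, gamma=0.5):
    """Apply gamma correction to an 8-bit interest region via a lookup table."""
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return lut[roi]   # fancy indexing applies the curve to every pixel
```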
7) Binarization of the interest region: the target is converted to a binary image in preparation for feature extraction. The process is expressed by the formula

g(x, y) = 1 if f(x, y) ≥ T, g(x, y) = 0 otherwise,

where the threshold T is determined by the classic Otsu algorithm.
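The same rule in code, assuming OpenCV, whose built-in flag picks T by the classic Otsu criterion; the mask uses 255 in place of 1 for an 8-bit image.

```python
import cv2

def binarize_interest_region(roi_gray):
    """g(x, y) = 255 where f(x, y) >= T, else 0, with T chosen by Otsu."""
    T, binary = cv2.threshold(roi_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return T, binary   # T is the threshold Otsu actually selected
```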
8) Morphological processing: the interest region is processed with circular structuring elements, mainly to fill small holes, smooth edges, and remove burrs.

9) Connected domain labeling: the individual targets are segmented to ease the subsequent extraction of their features. Connected domain labeling marks every connected target in the image with the same gray level, so that the targets are distinguished from one another by different gray levels.
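A minimal sketch of steps 8) and 9), assuming OpenCV; the radius of the circular structuring element is an illustrative value.

```python
import cv2

def label_targets(binary, radius=3):
    """Smooth the binary interest region with a circular element, then label it."""
    d = 2 * radius + 1
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (d, d))
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # fill small holes
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_OPEN, kernel)   # remove burrs
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(cleaned)
    return n - 1, labels, stats, centroids   # label 0 is the background
```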
10) Feature extraction: the features of each target are extracted. The first four components of the Hu moment features are used (the Hu moments have seven components in total, but testing showed that the last three are very small and vary greatly, so only the first four are used), combined with the target's aspect ratio and the ratio of the centroid-to-wire distance to the target length. The feature vector can be written as

F = [φ1, φ2, φ3, φ4, l/w, d/l].

All of these features are scale- and orientation-invariant: φ1, φ2, φ3, φ4 and l/w determine the target's shape, while d/l fixes the target's position relative to the wire. The meanings of the quantities are: l/w is the damper's length-to-width ratio, d/l is the ratio of the distance from the damper's centroid to the wire edge line to the damper's length, and φ1, φ2, φ3, φ4 are the first four Hu moments.
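A sketch of this feature vector, assuming OpenCV, a near-horizontal wire imaged at row wire_y, and a rotated bounding box supplying l and w; those framing choices are assumptions of the sketch.

```python
import cv2
import numpy as np

def damper_feature_vector(mask, wire_y):
    """F = [phi1..phi4, l/w, d/l] for the largest target in a binary mask."""
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not cnts:
        return None
    c = max(cnts, key=cv2.contourArea)
    phi = cv2.HuMoments(cv2.moments(c)).flatten()[:4]   # first four Hu moments
    (cx, cy), (w, h), _ = cv2.minAreaRect(c)
    l = max(max(w, h), 1e-6)                            # target length
    wd = max(min(w, h), 1e-6)                           # target width
    d = abs(cy - wire_y)                                # centroid-to-wire distance
    return np.concatenate([phi, [l / wd, d / l]])
```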
11) Building the template library: the template matching method requires a template library built in advance, i.e. the features of known obstacles are extracted and stored as templates (as shown in Fig. 7). The templates must account for factors such as scale (image size) and the various pose angles an image can take under the same viewing angle. Finally the feature values extracted from the images are averaged, and the mean is used as the obstacle's feature vector (as shown in Fig. 8).

12) Feature matching: the similarity between each target and the known template features is compared. Let the target feature be X = [x1, x2, x3, ..., xN] and the feature of template i be Mi = [m1, m2, m3, ..., mN] (N = 6); both the target feature and the templates use the feature vector given above, i.e. a vector with six components, and the similarity between the target and template i is a normalized correlation between X and Mi. The similarities dk (k = 1, 2, 3, ..., n) are computed, where n is the number of known obstacle template classes, and the maximum of di is taken: if dk = max(di) > T, where T is a chosen threshold, the target under test is considered to belong to template k. The algorithm sets T to 85%, i.e. when the correlation between the target under test and template Mi reaches 85%, the target is considered to belong to template Mi.

Fig. 8 shows the similarity obtained when the mean of the feature values extracted from 64 images is used as the template for recognizing the cylindrical vibration damper. As the figure shows, the similarity is high when the target position is moderate and drops when the target is too far or too near: too far, because the target is too small for the image to show enough features; too near, because lighting effects leave the target incomplete. The images at each stage of the detection are shown in Fig. 9.
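A sketch of the matching rule in step 12); reading the 85% figure as a correlation threshold, the similarity below is a normalized correlation, which is a reconstruction rather than the patent's verbatim formula.

```python
import numpy as np

def match_template(x, templates, T=0.85):
    """Best template for feature X, or None if the max similarity stays below T."""
    x = np.asarray(x, dtype=float)
    best, best_d = None, -1.0
    for name, m in templates.items():
        m = np.asarray(m, dtype=float)
        d = float(x @ m / (np.linalg.norm(x) * np.linalg.norm(m) + 1e-12))
        if d > best_d:
            best, best_d = name, d
    return (best, best_d) if best_d > T else (None, best_d)
```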
Taking loose/warped strand detection as an example, the geometric method for judging obstacles is as follows; the detection flow is shown in Fig. 10. The specific steps are:

1) Opening operation and image differencing: an opening operation with a circular structuring element whose diameter equals the wire's thickest imaged width removes the thinner warped and loose strands, and image differencing then recovers the removed parts. In Fig. 11(a-f), the interest region is first smoothed morphologically to reduce fine noise while the target region merges into a smooth whole (Fig. 11(b)); the binary image is then opened to strip away the warped or loose strands, leaving the result in Fig. 11(c), where only part of the large target remains and the fine targets are entirely removed. Differencing Fig. 11(b) and Fig. 11(c) gives Fig. 11(d); after edge extraction, Fig. 11(e) contains the complete fine targets, among them the warped and loose strands, together with incomplete large targets. Comparing each target's initial area with its processed area excludes the large targets, and the warped and loose strands are then determined among the remaining fine targets by their geometric features (Fig. 11(e)).
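A minimal sketch of this opening-and-differencing step, assuming OpenCV; the small closing kernel standing in for the morphological smoothing of Fig. 11(b) is an assumption.

```python
import cv2

def fine_strand_edges(roi_binary, wire_width_px):
    """Recover thin warped/loose strand candidates from a binary interest region."""
    smooth = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    b = cv2.morphologyEx(roi_binary, cv2.MORPH_CLOSE, smooth)     # Fig. 11(b)
    wire = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                     (wire_width_px, wire_width_px))
    c = cv2.morphologyEx(b, cv2.MORPH_OPEN, wire)                 # Fig. 11(c)
    diff = cv2.subtract(b, c)                                     # Fig. 11(d)
    return cv2.Canny(diff, 50, 150)                               # Fig. 11(e)
```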
2) Opening angle detection: for the edge image of each connected domain, the opening angle formed at the point farthest from the wire by the two points nearest the wire is computed, and the opening angle decides between a warped strand and a loose strand: below the threshold it is a warped strand, above it a loose strand. In the geometric model the three points A, B, and C form a triangle, and the opening angle at A is sought; B and C are the two points of the target contour that are nearest the wire and farthest from each other, and A is the point on the target contour whose product of distances to B and C is largest. The angle follows from the law of cosines:

∠A = arccos((|AB|² + |AC|² − |BC|²) / (2·|AB|·|AC|)).

In Fig. 12, the upper two pictures of the warped strand detection are one set of test results and the lower two pictures are another set; the opening angle in the test picture is 4.2°.
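The angle at A follows directly from the three side lengths; a small helper with an illustrative numeric check:

```python
import numpy as np

def opening_angle(A, B, C):
    """Opening angle at A (degrees) in the triangle ABC, by the law of cosines."""
    A, B, C = map(np.asarray, (A, B, C))
    ab, ac, bc = (np.linalg.norm(B - A), np.linalg.norm(C - A),
                  np.linalg.norm(C - B))
    cos_a = (ab ** 2 + ac ** 2 - bc ** 2) / (2.0 * ab * ac + 1e-12)
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# e.g. a narrow strand: opening_angle((0, 0), (-1, 10), (1, 10)) ~= 11.4 deg,
# which would fall on the "warped" side of a threshold larger than that
```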
The technical principles of the present invention have been described above with reference to specific embodiments. These descriptions merely explain the principles of the present invention and must not be construed in any way as limiting its scope of protection. Based on this explanation, those skilled in the art can conceive of other specific embodiments of the present invention without inventive effort, and all such embodiments fall within the scope of protection of the present invention.

Claims (8)

  1. A visual navigation inspection and obstacle avoidance method for a line inspection robot, characterized in that the line inspection robot is provided with a motion control system, a visual navigation system, and an inspection vision system, the motion control system and the visual navigation system both establishing communication connections with the inspection vision system;
    the method comprises the following steps:
    (1) the inspection camera of the inspection vision system collects inspection images in real time; the tower type is judged and identified from the inspection images, and the conductors and their accessory structures between towers, as well as the insulator strings and tower fittings, are inspected;
    (2) the visual navigation system captures the visual navigation image ahead of the line inspection robot in real time and preprocesses the image; the features of the target object in the preprocessed image are extracted and recognized to obtain the type of the target object;
    (3) once the type of the target object is determined, the visual navigation system measures the distance between the target object and the inspection robot body by monocular ranging to achieve coarse positioning;
    (4) the motion control system adjusts the traveling speed of the line inspection robot according to the coarse positioning distance and performs the precise positioning of collision detection at a safe speed;
    (5) the visual navigation system sends the obstacle crossing strategy corresponding to the tower type and the type of the target object to the motion control system, so that the inspection robot completes the obstacle crossing.
  2. The visual navigation inspection and obstacle avoidance method for a line inspection robot according to claim 1, characterized in that in step (2) the image is preprocessed by sequentially applying grayscale conversion, restoration, denoising, enhancement, segmentation, and normalization to the captured image.
  3. The visual navigation inspection and obstacle avoidance method for a line inspection robot according to claim 1, characterized in that in step (2) the features of the target object in the preprocessed image are extracted and recognized by decomposing the character image formed after preprocessing with a statistical wavelet transform, extracting a feature vector that reflects the statistical and structural characteristics of the characters, and retrieving the target object in the template library that matches this feature vector, thereby obtaining the type of the target object.
  4. The visual navigation inspection and obstacle avoidance method for a line inspection robot according to claim 3, characterized in that when the target object in the preprocessed image matches none of the objects in the template library, a geometric method is used for detection and obstacle determination.
  5. The visual navigation inspection and obstacle avoidance method for a line inspection robot according to claim 1, characterized in that in step (1), when the inspection vision system detects through object recognition that the inspection robot has entered a tower inspection area, it starts specific inspection path planning for the insulator strings and tower fittings and completes the object inspection;
    while the robot is walking, the inspection camera must be tracked and adjusted in real time so that the inspection object always stays at the center of the image.
  6. The visual navigation inspection and obstacle avoidance method for a line inspection robot according to claim 5, characterized in that the inspection camera is tracked and adjusted in real time as follows: the inspection image is processed sequentially by image graying, image restoration, image denoising, image enhancement, inspection object detection, object contour extraction, contour geometric center detection, and center offset calculation, and the camera angle is then adjusted according to the center offset distance.
  7. The visual navigation inspection and obstacle avoidance method for a line inspection robot according to claim 1, characterized in that in step (1) the tower types include straight-line towers and strain towers; the conductors and insulators of a straight-line tower form an angle of approximately 90 degrees, the conductors and insulators of a strain tower form an angle of approximately 0 degrees, and the tower type is identified by detecting the relative positions of the conductors and insulators.
  8. The visual navigation inspection and obstacle avoidance method for a line inspection robot according to claim 1, characterized in that in step (1) the inspection vision system recognizes the insulator string in the inspection image and sends the recognized target type to the visual navigation system, whereupon step (3) is entered.
PCT/CN2020/081422 2019-12-09 2020-03-26 Visual navigation inspection and obstacle avoidance method for a line inspection robot WO2021114508A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/432,131 US11958197B2 (en) 2019-12-09 2020-03-26 Visual navigation inspection and obstacle avoidance method for line inspection robot

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911247121.5 2019-12-09
CN201911247121.5A CN110687904B (zh) Visual navigation inspection and obstacle avoidance method for a line inspection robot

Publications (1)

Publication Number Publication Date
WO2021114508A1 true WO2021114508A1 (zh) 2021-06-17

Family

ID=69117730

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/081422 WO2021114508A1 (zh) 2019-12-09 2020-03-26 一种巡线机器人视觉导航巡检和避障方法

Country Status (3)

Country Link
US (1) US11958197B2 (zh)
CN (1) CN110687904B (zh)
WO (1) WO2021114508A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114088088A (zh) * 2021-11-15 2022-02-25 贵州大学 Monocular vision-based method for measuring angular rate and angular acceleration
CN114851209A (zh) * 2022-06-21 2022-08-05 上海大学 Vision-based method and system for optimizing the working path planning of an industrial robot
CN117786439A (zh) * 2024-02-23 2024-03-29 艾信智慧医疗科技发展(苏州)有限公司 Visual intelligent navigation system for a medical transfer robot

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110687904B (zh) 2019-12-09 2020-08-04 广东科凯达智能机器人有限公司 Visual navigation inspection and obstacle avoidance method for a line inspection robot
CN112000094A (zh) * 2020-07-20 2020-11-27 山东科技大学 Combined monocular and binocular online recognition and positioning system and method for high-voltage transmission line fittings
CN112229845A (zh) * 2020-10-12 2021-01-15 国网河南省电力公司濮阳供电公司 High-precision UAV intelligent tower-circling inspection method based on visual navigation technology
CN112102395B (zh) * 2020-11-09 2022-05-20 广东科凯达智能机器人有限公司 Machine vision-based autonomous inspection method
CN112508865B (zh) * 2020-11-23 2024-02-02 深圳供电局有限公司 UAV inspection obstacle avoidance method and apparatus, computer device, and storage medium
CN113222838A (zh) * 2021-05-07 2021-08-06 国网山西省电力公司吕梁供电公司 Autonomous UAV line patrol method based on visual positioning
CN114001738A (zh) * 2021-09-28 2022-02-01 浙江大华技术股份有限公司 Visual line patrol positioning method and system, and computer-readable storage medium
CN113821038A (zh) * 2021-09-28 2021-12-21 国网福建省电力有限公司厦门供电公司 Intelligent navigation path planning system and method for a robot
CN114179096A (zh) * 2021-10-29 2022-03-15 国网山东省电力公司武城县供电公司 Substation inspection robot
CN114237224A (zh) * 2021-11-19 2022-03-25 深圳市鑫疆基业科技有限责任公司 Automatic inspection method and system, terminal device, and computer-readable storage medium
CN113946154B (zh) * 2021-12-20 2022-04-22 广东科凯达智能机器人有限公司 Visual recognition method and system for a line inspection robot
CN114367996A (zh) * 2022-02-21 2022-04-19 南京理工大学 In-situ tool damage detection and tool-changing robot
CN114905180A (zh) * 2022-06-30 2022-08-16 中船黄埔文冲船舶有限公司 Obstacle-avoiding welding path optimization method and apparatus for intermediate-assembly weld seams
CN115588139B (zh) * 2022-11-22 2023-02-28 东北电力大学 Intelligent cruise inspection method for power grid safety

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102116625A (zh) * 2009-12-31 2011-07-06 武汉大学 GIS-GPS navigation method for a line inspection robot
CN102114635A (zh) * 2009-12-31 2011-07-06 武汉大学 Intelligent controller for a line inspection robot
CN103762522A (zh) * 2014-02-24 2014-04-30 武汉大学 Line-finding device and autonomous line-finding control method for a high-voltage line inspection robot
WO2016060139A1 (ja) * 2014-10-14 2016-04-21 富士フイルム株式会社 Bridge inspection robot system
CN107966985A (zh) * 2017-10-31 2018-04-27 成都意町工业产品设计有限公司 Autonomous positioning system and method for a transmission line inspection robot
CN107962577A (zh) * 2016-10-20 2018-04-27 哈尔滨工大天才智能科技有限公司 Vision system construction and control method for a de-icing robot
CN108508909A (zh) * 2018-03-07 2018-09-07 周文钰 Transmission line patrol robot based on integrated navigation and line patrol method thereof
CN110687904A (zh) * 2019-12-09 2020-01-14 广东科凯达智能机器人有限公司 Visual navigation inspection and obstacle avoidance method for a line inspection robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN209329565U (zh) * 2019-01-30 2019-08-30 许济平 Self-balancing steering obstacle-crossing robot for transmission lines

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102116625A (zh) * 2009-12-31 2011-07-06 武汉大学 GIS-GPS navigation method for a line inspection robot
CN102114635A (zh) * 2009-12-31 2011-07-06 武汉大学 Intelligent controller for a line inspection robot
CN103762522A (zh) * 2014-02-24 2014-04-30 武汉大学 Line-finding device and autonomous line-finding control method for a high-voltage line inspection robot
WO2016060139A1 (ja) * 2014-10-14 2016-04-21 富士フイルム株式会社 Bridge inspection robot system
CN107962577A (zh) * 2016-10-20 2018-04-27 哈尔滨工大天才智能科技有限公司 Vision system construction and control method for a de-icing robot
CN107966985A (zh) * 2017-10-31 2018-04-27 成都意町工业产品设计有限公司 Autonomous positioning system and method for a transmission line inspection robot
CN108508909A (zh) * 2018-03-07 2018-09-07 周文钰 Transmission line patrol robot based on integrated navigation and line patrol method thereof
CN110687904A (zh) * 2019-12-09 2020-01-14 广东科凯达智能机器人有限公司 Visual navigation inspection and obstacle avoidance method for a line inspection robot

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WU, GONGPING ET AL.: "Research on High Voltage Transmission Line Inspection Robot and its Key Technologies", PROCEEDINGS OF THE 2005 NATIONAL SUMMIT FORUM ON ADVANCED MANUFACTURING EQUIPMENT AND ROBOTICS, 22 June 2006 (2006-06-22), pages 1 - 120, XP055821581 *
ZHU, YANHUAN: "Research on Visual Detection Method of High-Voltage Line Inspection Robot", ELECTRONIC TECHNOLOGY & INFORMATION SCIENCE, CHINA MASTER’S THESES FULL-TEXT DATABASE (ELECTRONIC JOURNALS), 15 February 2018 (2018-02-15), pages 1 - 83, XP055821577, ISSN: I138-1768 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114088088A (zh) * 2021-11-15 2022-02-25 贵州大学 Monocular vision-based method for measuring angular rate and angular acceleration
CN114088088B (zh) * 2021-11-15 2023-08-04 贵州大学 Monocular vision-based method for measuring angular rate and angular acceleration
CN114851209A (zh) * 2022-06-21 2022-08-05 上海大学 Vision-based method and system for optimizing the working path planning of an industrial robot
CN114851209B (zh) * 2022-06-21 2024-04-19 上海大学 Vision-based method and system for optimizing the working path planning of an industrial robot
CN117786439A (zh) * 2024-02-23 2024-03-29 艾信智慧医疗科技发展(苏州)有限公司 Visual intelligent navigation system for a medical transfer robot
CN117786439B (zh) * 2024-02-23 2024-05-03 艾信智慧医疗科技发展(苏州)有限公司 Visual intelligent navigation system for a medical transfer robot

Also Published As

Publication number Publication date
CN110687904B (zh) 2020-08-04
US20220152829A1 (en) 2022-05-19
US11958197B2 (en) 2024-04-16
CN110687904A (zh) 2020-01-14

Similar Documents

Publication Publication Date Title
WO2021114508A1 (zh) Visual navigation inspection and obstacle avoidance method for a line inspection robot
WO2020151109A1 (zh) Three-dimensional object detection method and system based on weighted channel features of a point cloud
US10290219B2 Machine vision-based method and system for aircraft docking guidance and aircraft type identification
CN106709950B Binocular vision-based wire positioning method for a line inspection robot crossing an obstacle
CN110866903B Table tennis ball recognition method based on the Hough circle transform
CN108597009B Method for three-dimensional object detection based on direction angle information
WO2015096507A1 (zh) Method for recognizing and locating a building using mountain contour region constraints
CN104217208A Object detection method and apparatus
CN106996748A Wheel diameter measurement method based on binocular vision
Sehestedt et al. Robust lane detection in urban environments
KR20180098945A Method and apparatus for detecting vehicle speed using a fixed single camera
TW202121331A Machine learning-based object recognition system and method
CN104966302B Method for detecting and locating a laser cross at an arbitrary angle
CN114820474A Train wheel defect detection method based on three-dimensional information
CN110675442B Local stereo matching method and system combined with object recognition technology
TWI543117B Object recognition and localization method
CN109671084B Method for measuring the shape of a workpiece
CN114549549A Dynamic object modeling and tracking method based on instance segmentation in dynamic environments
CN112132884B Sea cucumber length measurement method and system based on parallel lasers and semantic segmentation
JP2002175534A Road white line detection method
CN111178210B Image recognition and alignment method for a cross marker
CN110322508B Computer vision-based auxiliary positioning method
Kochi et al. 3D modeling of architecture by edge-matching and integrating the point clouds of laser scanner and those of digital camera
CN116596987A High-precision binocular vision measurement method for the three-dimensional dimensions of a workpiece
Juujarvi et al. Digital-image-based tree measurement for forest inventory

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20900152

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20900152

Country of ref document: EP

Kind code of ref document: A1